# Run prompt flow in Azure AI
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../../../how-to-guides/faq.md#stable-vs-experimental).
:::
This guide assumes you have already learned how to create and run a flow by following [Quick start](../../../how-to-guides/quick-start.md). It walks you through the main process of submitting a prompt flow run to [Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
Benefits of using Azure AI compared to running locally:
- **Designed for team collaboration**: The portal UI is a better fit for sharing and presenting your flows and runs, and a workspace can better organize team-shared resources such as connections.
- **Enterprise Readiness Solutions**: prompt flow leverages Azure AI's robust enterprise readiness solutions, providing a secure, scalable, and reliable foundation for the development, experimentation, and deployment of flows.
## Prerequisites
1. An Azure account with an active subscription - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
2. An Azure AI ML workspace - [Create workspace resources you need to get started with Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources).
3. A Python environment; Python 3.9 or a later version such as 3.10 is recommended.
4. Install `promptflow` with extra dependencies and `promptflow-tools`.
```sh
pip install promptflow[azure] promptflow-tools
```
5. Clone the sample repo and check flows in folder [examples/flows](https://github.com/microsoft/promptflow/tree/main/examples/flows).
```sh
git clone https://github.com/microsoft/promptflow.git
```
## Create necessary connections
A connection helps securely store and manage the secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
In this guide, we will use the flow `web-classification`, which uses the connection `open_ai_connection` inside. We need to set up the connection if we haven't added it before.
Please go to the workspace portal, click `Prompt flow` -> `Connections` -> `Create`, then follow the instructions to create your own connections. Learn more about [connections](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/concept-connections?view=azureml-api-2).
## Submit a run to workspace
Assuming you are in working directory `<path-to-the-sample-repo>/examples/flows/standard/`
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Use `az login` to log in so prompt flow can get your credentials.
```sh
az login
```
Submit a run to workspace.
```sh
pfazure run create --subscription <my_sub> -g <my_resource_group> -w <my_workspace> --flow web-classification --data web-classification/data.jsonl --stream
```
**Default subscription/resource-group/workspace**
Note `--subscription`, `-g` and `-w` can be omitted if you have installed the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) and [set the default configurations](https://learn.microsoft.com/en-us/cli/azure/azure-cli-configuration).
```sh
az account set --subscription <my-sub>
az configure --defaults group=<my_resource_group> workspace=<my_workspace>
```
**Serverless runtime and named runtime**
Runtimes serve as the computing resources that execute a flow in the workspace. The command above does not specify a runtime, which means the run will execute in serverless mode. In this mode the workspace automatically creates a runtime, and you can use it as the default runtime for any later flow run.
Instead, you can also [create a runtime](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-create-manage-runtime?view=azureml-api-2) and use it with `--runtime <my-runtime>`:
```sh
pfazure run create --flow web-classification --data web-classification/data.jsonl --stream --runtime <my-runtime>
```
**Specify run name and view a run**
You can also name the run by specifying `--name my_first_cloud_run` in the run create command; otherwise the run name will be generated automatically in a pattern that includes a timestamp.
With a run name, you can easily stream or view the run details using the commands below:
```sh
pfazure run stream -n my_first_cloud_run # same as "--stream" in command "run create"
pfazure run show-details -n my_first_cloud_run
pfazure run visualize -n my_first_cloud_run
```
More details can be found in [CLI reference: pfazure](../../../reference/pfazure-command-reference.md)
:::
:::{tab-item} SDK
:sync: SDK
1. Import the required libraries
```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
# azure version promptflow apis
from promptflow.azure import PFClient
```
2. Get credential
```python
try:
    credential = DefaultAzureCredential()
    # Check if given credential can get token successfully.
    credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()
```
3. Get a handle to the workspace
```python
# Get a handle to workspace
pf = PFClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",  # this will look like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
```
4. Submit the flow run
```python
# load flow
flow = "web-classification"
data = "web-classification/data.jsonl"
runtime = "example-runtime-ci"  # assume you have an existing runtime with this name provisioned
# runtime = None  # un-comment to use automatic runtime

# create run
base_run = pf.run(
    flow=flow,
    data=data,
    runtime=runtime,
)
pf.stream(base_run)
```
5. View the run info
```python
details = pf.get_details(base_run)
details.head(10)
pf.visualize(base_run)
```
:::
::::
## View the run in workspace
At the end of stream logs, you can find the `portal_url` of the submitted run, click it to view the run in the workspace.
![c_0](../../../media/cloud/azureml/local-to-cloud-run-webview.png)
### Run snapshot of the flow with additional includes
Flows that enabled [additional include](../../../how-to-guides/develop-a-flow/referencing-external-files-or-folders-in-a-flow.md) files can also be submitted for execution in the workspace. Please note that the specific additional include files or folders will be uploaded and organized within the **Files** folder of the run snapshot in the cloud.
![img](../../../media/cloud/azureml/run-with-additional-includes.png)
## Next steps
Learn more about:
- [CLI reference: pfazure](../../../reference/pfazure-command-reference.md)
```{toctree}
:maxdepth: 1
:hidden:
create-run-with-automatic-runtime
```
# Deploy a flow using Kubernetes
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
There are four steps to deploy a flow using Kubernetes:
1. Build the flow as docker format.
2. Build the docker image.
3. Create Kubernetes deployment yaml.
4. Apply the deployment.
## Build a flow as docker format
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Note that all dependent connections must be created before building as docker.
```bash
# create connection if not created before
pf connection create --file ../../../examples/connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Use the command below to build a flow as docker format:
```bash
pf flow build --source <path-to-your-flow-folder> --output <your-output-dir> --format docker
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
Click the button below to build a flow as docker format:
![img](../../media/how-to-guides/vscode_export_as_docker.png)
:::
::::
Note that all dependent connections must be created before exporting as docker.
### Docker format folder structure
Exported Dockerfile & its dependencies are located in the same folder. The structure is as below:
- flow: the folder contains all the flow files
- ...
- connections: the folder contains yaml files to create all related connections
- ...
- Dockerfile: the dockerfile to build the image
- start.sh: the script used in `CMD` of `Dockerfile` to start the service
- runit: the folder contains all the runit scripts
- ...
- settings.json: a json file to store the settings of the docker image
- README.md: Simple introduction of the files
## Deploy with Kubernetes
We are going to use the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) as
an example to show how to deploy with Kubernetes.
Please ensure you have [created the connection](../manage-connections.md#create-a-connection) required by the flow. If not, you can
refer to [Setup connection for web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification).
Additionally, please ensure that you have installed all the required dependencies. You can refer to the "Prerequisites" section in the README of the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) for a comprehensive list of prerequisites and installation instructions.
### Build Docker image
As with any Dockerfile, you need to build the image first. You can tag the image with any name you want; in this example, we use `web-classification-serve`.
Then run the command below:
```bash
cd <your-output-dir>
docker build . -t web-classification-serve
```
### Create Kubernetes deployment yaml.
The Kubernetes deployment yaml file acts as a guide for managing your docker container in a Kubernetes pod. It clearly specifies important information like the container image, port configurations, environment variables, and various settings. Below, you'll find a simple deployment template that you can easily customize to meet your needs.
**Note**: You need to encode the secret using base64 first and provide the `<encoded_secret>` as 'open-ai-connection-api-key' in the deployment configuration. For example, you can run the commands below on Linux:
```bash
encoded_secret=$(echo -n <your_api_key> | base64)
```
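To sanity-check the encoding, you can round-trip a dummy value locally (the key below is a placeholder, not a real secret):

```shell
# Encode a placeholder key exactly as above, then decode it to verify the round trip.
encoded_secret=$(echo -n "sk-dummy" | base64)
echo "$encoded_secret"                  # → c2stZHVtbXk=
echo -n "$encoded_secret" | base64 --decode   # → sk-dummy
```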
```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: <your-namespace>
---
apiVersion: v1
kind: Secret
metadata:
  name: open-ai-connection-api-key
  namespace: <your-namespace>
type: Opaque
data:
  open-ai-connection-api-key: <encoded_secret>
---
apiVersion: v1
kind: Service
metadata:
  name: web-classification-service
  namespace: <your-namespace>
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 30123
  selector:
    app: web-classification-serve-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-classification-serve-app
  namespace: <your-namespace>
spec:
  selector:
    matchLabels:
      app: web-classification-serve-app
  template:
    metadata:
      labels:
        app: web-classification-serve-app
    spec:
      containers:
        - name: web-classification-serve-container
          image: <your-docker-image>
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            - name: OPEN_AI_CONNECTION_API_KEY
              valueFrom:
                secretKeyRef:
                  name: open-ai-connection-api-key
                  key: open-ai-connection-api-key
```
### Apply the deployment.
Before you can deploy your application, ensure that you have set up a Kubernetes cluster and installed [kubectl](https://kubernetes.io/docs/reference/kubectl/) if it's not already installed. In this documentation, we will use [Minikube](https://minikube.sigs.k8s.io/docs/) as an example. To start the cluster, execute the following command:
```bash
minikube start
```
Once your Kubernetes cluster is up and running, you can proceed to deploy your application by using the following command:
```bash
kubectl apply -f deployment.yaml
```
This command will create the necessary pods to run your application within the cluster.
**Note**: You need to replace `<pod_name>` below with your specific pod name. You can retrieve it by running `kubectl get pods -n <your-namespace>`.
### Retrieve flow service logs of the container
The kubectl logs command is used to retrieve the logs of a container running within a pod, which can be useful for debugging, monitoring, and troubleshooting applications deployed in a Kubernetes cluster.
```bash
kubectl -n <your-namespace> logs <pod-name>
```
#### Connections
If the service involves connections, all related connections will be exported as yaml files and recreated in containers.
Secrets in connections won't be exported directly. Instead, we will export them as a reference to environment variables:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
type: open_ai
name: open_ai_connection
module: promptflow.connections
api_key: ${env:OPEN_AI_CONNECTION_API_KEY} # env reference
```
You'll need to set up the environment variables in the container to make the connections work.
### Test the endpoint
- Option 1:
Once you've started the service, you can establish a connection between a local port and a port on the pod. This allows you to conveniently test the endpoint from your local terminal.
To achieve this, execute the following command:
```bash
kubectl port-forward <pod_name> <local_port>:<container_port> -n <your-namespace>
```
With the port forwarding in place, you can use the curl command to initiate the endpoint test:
```bash
curl http://localhost:<local_port>/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
- Option 2:
`minikube service web-classification-service --url -n <your-namespace>` runs as a process, creating a tunnel to the cluster. The command exposes the service directly to any program running on the host operating system.
The command above will retrieve the URL of a service running within a Minikube Kubernetes cluster (e.g. http://<ip>:<assigned_port>), which you can click to interact with the flow service in your web browser. Alternatively, you can use the following command to test the endpoint:
**Note**: Minikube will use its own external port instead of nodePort to listen to the service. So please substitute <assigned_port> with the port obtained above.
```bash
curl http://localhost:<assigned_port>/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/flow-deploy/kubernetes).
# Using File Path as Tool Input
Users sometimes need to reference local files within a tool to implement specific logic. To simplify this, we've introduced the `FilePath` input type. This input type enables users to either select an existing file or create a new one, then pass it to a tool, allowing the tool to access the file's content.
In this guide, we will provide a detailed walkthrough on how to use `FilePath` as a tool input. We will also demonstrate the user experience when utilizing this type of tool within a flow.
## Prerequisites
- Please install the promptflow package and ensure that its version is 0.1.0b8 or later.
```sh
pip install "promptflow>=0.1.0b8"
```
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.1.0 or later.
## Using File Path as Package Tool Input
### How to create a package tool with file path input
Here we use [an existing tool package](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart/my_tool_package) as an example. If you want to create your own tool, please refer to [create and use tool package](create-and-use-tool-package.md#create-custom-tool-package).
1. Add a `FilePath` input for your tool, like in [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_file_path_input.py).
```python
import importlib
from pathlib import Path
from promptflow import tool
# 1. import the FilePath type
from promptflow.contracts.types import FilePath


# 2. add a FilePath input for your tool method
@tool
def my_tool(input_file: FilePath, input_text: str) -> str:
    # 3. customise your own code to handle and use the input_file here
    new_module = importlib.import_module(Path(input_file).stem)
    return new_module.hello(input_text)
```
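For reference, the module loaded via `importlib` above only needs to expose the `hello` function the tool calls. A minimal, hypothetical module could look like this (the greeting format is illustrative, not the exact content of the sample `hello_method.py`):

```python
# Hypothetical content of a module (e.g. hello_method.py) that the tool above could load.
# The only contract is a `hello` function taking the tool's input_text.
def hello(input_text: str) -> str:
    # Return a greeting built from the flow input; purely illustrative logic.
    return f"Hello {input_text}!"


print(hello("world"))  # → Hello world!
```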
2. `FilePath` input format in a tool YAML, like in [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_file_path_input.yaml).
```yaml
my_tool_package.tools.tool_with_file_path_input.my_tool:
  function: my_tool
  inputs:
    # yaml format for FilePath input
    input_file:
      type:
        - file_path
    input_text:
      type:
        - string
  module: my_tool_package.tools.tool_with_file_path_input
  name: Tool with FilePath Input
  description: This is a tool to demonstrate the usage of FilePath input
  type: python
```
> [!Note] The tool yaml file can be generated using a Python script. For further details, please refer to [create custom tool package](create-and-use-tool-package.md#create-custom-tool-package).
### Use tool with a file path input in VS Code extension
Follow steps to [build and install your tool package](create-and-use-tool-package.md#build-and-share-the-tool-package) and [use your tool from VS Code extension](create-and-use-tool-package.md#use-your-tool-from-vscode-extension).
Here we use an existing flow to demonstrate the experience, open [this flow](https://github.com/microsoft/promptflow/blob/main/examples/tools/use-cases/filepath-input-tool-showcase/flow.dag.yaml) in VS Code extension:
- There is a node named "Tool_with_FilePath_Input" with a `file_path` type input called `input_file`.
- Click the picker icon to open the UI for selecting an existing file or creating a new file to use as input.
![use file path in flow](../../media/how-to-guides/develop-a-tool/use_file_path_in_flow.png)
## Using File Path as Script Tool Input
We can also utilize the `FilePath` input type directly in a script tool, eliminating the need to create a package tool.
1. Create an empty flow in the VS Code extension and add a Python node titled 'python_node_with_filepath' to it on the Visual Editor page.
2. Select the link `python_node_with_filepath.py` in the node to modify the python method to include a `FilePath` input as shown below, and save the code change.
```python
import importlib
from pathlib import Path
from promptflow import tool
# 1. import the FilePath type
from promptflow.contracts.types import FilePath


# 2. add a FilePath input for your tool method
@tool
def my_tool(input_file: FilePath, input_text: str) -> str:
    # 3. customise your own code to handle and use the input_file here
    new_module = importlib.import_module(Path(input_file).stem)
    return new_module.hello(input_text)
```
3. Return to the flow Visual Editor page, click the picker icon to launch the UI for selecting an existing file or creating a new file to use as input, here we select [this file](https://github.com/microsoft/promptflow/blob/main/examples/tools/use-cases/filepath-input-tool-showcase/hello_method.py) as an example.
![use file path in script tool](../../media/how-to-guides/develop-a-tool/use_file_path_in_script_tool.png)
## FAQ
### What are some practical use cases for this feature?
The `FilePath` input enables several useful workflows:
1. **Dynamically load modules** - As shown in the demo, you can load a Python module from a specific script file selected by the user. This allows flexible custom logic.
2. **Load arbitrary data files** - The tool can load data from files like .csv, .txt, .json, etc. This provides an easy way to inject external data into a tool.
So in summary, `FilePath` input gives tools flexible access to external files provided by users at runtime. This unlocks many useful scenarios like the ones above.
| promptflow/docs/how-to-guides/develop-a-tool/use-file-path-as-tool-input.md/0 | {
"file_path": "promptflow/docs/how-to-guides/develop-a-tool/use-file-path-as-tool-input.md",
"repo_id": "promptflow",
"token_count": 1804
} | 2 |
# Alternative LLMs
This section provides tutorials on incorporating alternative large language models into prompt flow.
```{toctree}
:maxdepth: 1
:hidden:
``` | promptflow/docs/integrations/llms/index.md/0 | {
"file_path": "promptflow/docs/integrations/llms/index.md",
"repo_id": "promptflow",
"token_count": 43
} | 3 |
# Python
## Introduction
The Python tool empowers users to provide customized code snippets as self-contained executable nodes in prompt flow.
Users can effortlessly create Python tools, edit code, and verify results with ease.
## Inputs
| Name | Type | Description | Required |
|--------|--------|------------------------------------------------------|---------|
| Code | string | Python code snippet | Yes |
| Inputs | - | List of tool function parameters and their assignments | - |
### Types
| Type | Python example | Description |
|-----------------------------------------------------|---------------------------------|--------------------------------------------|
| int | param: int | Integer type |
| bool | param: bool | Boolean type |
| string | param: str | String type |
| double | param: float | Double type |
| list | param: list or param: List[T] | List type |
| object | param: dict or param: Dict[K, V] | Object type |
| [Connection](../../concepts/concept-connections.md) | param: CustomConnection | Connection type, will be handled specially |
Parameters with `Connection` type annotation will be treated as connection inputs, which means:
- Promptflow extension will show a selector to select the connection.
- During execution time, promptflow will try to find the connection whose name matches the parameter value passed in.
Note that `Union[...]` type annotation is supported **ONLY** for connection type,
for example, `param: Union[CustomConnection, OpenAIConnection]`.
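The annotations in the table map one-to-one to flow input types. As a sketch (plain Python, without the `@tool` decorator so the snippet stays self-contained), typed parameters look like this:

```python
from typing import Dict, List


# Illustrative function using the annotations from the table above;
# in a real tool this would be decorated with @tool.
def typed_tool(count: int, enabled: bool, name: str, ratio: float,
               items: List[str], config: Dict[str, int]) -> str:
    picked = count if enabled else 0
    return f"{name}: picked={picked} items={len(items)} ratio={ratio} keys={len(config)}"


print(typed_tool(3, True, "demo", 0.5, ["a", "b"], {"x": 1}))
# → demo: picked=3 items=2 ratio=0.5 keys=1
```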
## Outputs
The return of the python tool function.
## How to write Python Tool?
### Guidelines
1. Python Tool Code should consist of a complete Python code, including any necessary module imports.
2. Python Tool Code must contain a function decorated with @tool (tool function), serving as the entry point for execution. The @tool decorator should be applied only once within the snippet.
_Below sample defines python tool "my_python_tool", decorated with @tool_
3. Python tool function parameters must be assigned in the 'Inputs' section
    _Below sample defines input "message" and assigns it "world"_
4. The Python tool function must return a value
    _Below sample returns a concatenated string_
### Code
The snippet below shows the basic structure of a tool function. Promptflow will read the function and extract inputs
from function parameters and type annotations.
```python
from promptflow import tool
from promptflow.connections import CustomConnection


# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(message: str, my_conn: CustomConnection) -> str:
    my_conn_dict = dict(my_conn)
    # Do some function call with my_conn_dict...
    return 'hello ' + message
```
### Inputs
| Name | Type | Sample Value in Flow Yaml | Value passed to function|
|---------|--------|-------------------------| ------------------------|
| message | string | "world" | "world" |
| my_conn | CustomConnection | "my_conn" | CustomConnection object |
Promptflow will try to find the connection named 'my_conn' during execution time.
### Outputs
```python
"hello world"
```
### Keyword Arguments Support
Starting from version 1.0.0 of PromptFlow and version 1.4.0 of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow),
we have introduced support for keyword arguments (kwargs) in the Python tool.
```python
from promptflow import tool


@tool
def print_test(normal_input: str, **kwargs):
    for key, value in kwargs.items():
        print(f"Key {key}'s value is {value}")
    return len(kwargs)
```
When you add `kwargs` in your python tool like above code, you can insert variable number of inputs by the `+Add input` button.
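Outside a flow, the function above behaves like any Python function with keyword arguments. A quick sketch (decorator removed so the snippet runs standalone):

```python
# Same body as the tool above, minus the @tool decorator, to show kwargs handling.
def print_test(normal_input: str, **kwargs):
    for key, value in kwargs.items():
        print(f"Key {key}'s value is {value}")
    return len(kwargs)


# Two extra inputs added via "+Add input" arrive as kwargs entries.
result = print_test("hello", first=1, second="two")
print(result)
# → 2
```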
![Screenshot of the kwargs On VScode Prompt Flow extension](../../media/reference/tools-reference/python_tool_kwargs.png)
from enum import Enum
from typing import Union

from openai import AzureOpenAI as AzureOpenAIClient, OpenAI as OpenAIClient

from promptflow.tools.common import handle_openai_error, normalize_connection_config
from promptflow.tools.exception import InvalidConnectionType
# Avoid circular dependencies: Use import 'from promptflow._internal' instead of 'from promptflow'
# since the code here is in promptflow namespace as well
from promptflow._internal import tool
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection


class EmbeddingModel(str, Enum):
    TEXT_EMBEDDING_ADA_002 = "text-embedding-ada-002"
    TEXT_SEARCH_ADA_DOC_001 = "text-search-ada-doc-001"
    TEXT_SEARCH_ADA_QUERY_001 = "text-search-ada-query-001"


@tool
@handle_openai_error()
def embedding(connection: Union[AzureOpenAIConnection, OpenAIConnection], input: str, deployment_name: str = "",
              model: EmbeddingModel = EmbeddingModel.TEXT_EMBEDDING_ADA_002):
    if isinstance(connection, AzureOpenAIConnection):
        client = AzureOpenAIClient(**normalize_connection_config(connection))
        return client.embeddings.create(
            input=input,
            model=deployment_name,
            extra_headers={"ms-azure-ai-promptflow-called-from": "aoai-tool"}
        ).data[0].embedding
    elif isinstance(connection, OpenAIConnection):
        client = OpenAIClient(**normalize_connection_config(connection))
        return client.embeddings.create(
            input=input,
            model=model
        ).data[0].embedding
    else:
        error_message = f"Not Support connection type '{type(connection).__name__}' for embedding api. " \
                        f"Connection type should be in [AzureOpenAIConnection, OpenAIConnection]."
        raise InvalidConnectionType(message=error_message)
import os
import re
from io import open
from typing import Any, List, Match, cast

from setuptools import find_namespace_packages, setup

PACKAGE_NAME = "promptflow-tools"
PACKAGE_FOLDER_PATH = "promptflow"


def parse_requirements(file_name: str) -> List[str]:
    with open(file_name) as f:
        return [
            require.strip() for require in f
            if require.strip() and not require.startswith('#')
        ]


# Version extraction inspired from 'requests'
with open(os.path.join(PACKAGE_FOLDER_PATH, "version.txt"), "r") as fd:
    version_content = fd.read()
    print(version_content)
    version = cast(Match[Any], re.search(r'^VERSION\s*=\s*[\'"]([^\'"]*)[\'"]', version_content, re.MULTILINE)).group(1)
if not version:
    raise RuntimeError("Cannot find version information")

with open("README.md", encoding="utf-8") as f:
    readme = f.read()
with open("CHANGELOG.md", encoding="utf-8") as f:
    changelog = f.read()

setup(
    name=PACKAGE_NAME,
    version=version,
    description="Prompt flow built-in tools",
    long_description_content_type="text/markdown",
    long_description=readme + "\n\n" + changelog,
    author="Microsoft Corporation",
    author_email="[email protected]",
    url="https://github.com/microsoft/promptflow",
    classifiers=[
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires="<4.0,>=3.8",
    install_requires=parse_requirements('requirements.txt'),
    extras_require={
        "azure": [
            # Dependency to list deployment in aoai_gpt4v
            "azure-mgmt-cognitiveservices==13.5.0"
        ]
    },
    packages=find_namespace_packages(include=[f"{PACKAGE_FOLDER_PATH}.*"]),
    entry_points={
        "package_tools": ["builtins = promptflow.tools.list:list_package_tools"],
    },
    include_package_data=True,
    project_urls={
        "Bug Reports": "https://github.com/microsoft/promptflow/issues",
        "Source": "https://github.com/microsoft/promptflow",
    },
)
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import argparse
import json

from promptflow._cli._params import (
    add_param_all_results,
    add_param_archived_only,
    add_param_include_archived,
    add_param_max_results,
    base_params,
)
from promptflow._cli._utils import activate_action, exception_handler
from promptflow._sdk._constants import get_list_view_type
from promptflow._sdk._pf_client import PFClient
from promptflow._sdk.entities._experiment import Experiment
from promptflow._utils.logger_utils import get_cli_sdk_logger

logger = get_cli_sdk_logger()

_client = None


def _get_pf_client():
    global _client
    if _client is None:
        _client = PFClient()
    return _client


def add_param_template(parser):
    parser.add_argument("--template", type=str, required=True, help="The experiment template path.")


def add_param_name(parser):
    parser.add_argument("--name", "-n", type=str, help="The experiment name.")


def add_experiment_create(subparsers):
    epilog = """
    Examples:

    # Create an experiment from a template:
    pf experiment create --template flow.exp.yaml
    """
    add_params = [add_param_template, add_param_name] + base_params
    create_parser = activate_action(
        name="create",
        description=None,
        epilog=epilog,
        add_params=add_params,
        subparsers=subparsers,
        help_message="Create an experiment.",
        action_param_name="sub_action",
    )
    return create_parser


def add_experiment_list(subparsers):
    epilog = """
    Examples:

    # List all experiments:
    pf experiment list
    """
    activate_action(
        name="list",
        description="List all experiments.",
        epilog=epilog,
        add_params=[
            add_param_max_results,
            add_param_all_results,
            add_param_archived_only,
            add_param_include_archived,
        ]
        + base_params,
        subparsers=subparsers,
        help_message="List all experiments.",
        action_param_name="sub_action",
    )


def add_experiment_show(subparsers):
    epilog = """
    Examples:

    # Get and show an experiment:
    pf experiment show -n my_experiment
    """
    activate_action(
        name="show",
        description="Show an experiment for promptflow.",
        epilog=epilog,
        add_params=[add_param_name] + base_params,
        subparsers=subparsers,
        help_message="Show an experiment for promptflow.",
        action_param_name="sub_action",
    )


def add_experiment_start(subparsers):
    epilog = """
    Examples:

    # Start an experiment:
    pf experiment start -n my_experiment
    """
    activate_action(
        name="start",
        description="Start an experiment.",
        epilog=epilog,
        add_params=[add_param_name] + base_params,
        subparsers=subparsers,
        help_message="Start an experiment.",
        action_param_name="sub_action",
    )


def add_experiment_stop(subparsers):
    epilog = """
    Examples:

    # Stop an experiment:
    pf experiment stop -n my_experiment
    """
    activate_action(
        name="stop",
        description="Stop an experiment.",
        epilog=epilog,
        add_params=[add_param_name] + base_params,
        subparsers=subparsers,
        help_message="Stop an experiment.",
        action_param_name="sub_action",
    )


def add_experiment_parser(subparsers):
    experiment_parser = subparsers.add_parser(
        "experiment",
        description="[Experimental] A CLI tool to manage experiment for prompt flow.",
        help="[Experimental] pf experiment. This is an experimental feature, and may change at any time.",
    )
    subparsers = experiment_parser.add_subparsers()
    add_experiment_create(subparsers)
    add_experiment_list(subparsers)
    add_experiment_show(subparsers)
    add_experiment_start(subparsers)
    add_experiment_stop(subparsers)
    experiment_parser.set_defaults(action="experiment")


def dispatch_experiment_commands(args: argparse.Namespace):
    if args.sub_action == "create":
        create_experiment(args)
    elif args.sub_action == "list":
        list_experiment(args)
    elif args.sub_action == "show":
        show_experiment(args)
    elif args.sub_action == "start":
        start_experiment(args)
    elif args.sub_action == "show-status":
        pass
    elif args.sub_action == "update":
        pass
    elif args.sub_action == "delete":
        pass
    elif args.sub_action == "stop":
        stop_experiment(args)
    elif args.sub_action == "test":
        pass
    elif args.sub_action == "clone":
        pass


@exception_handler("Create experiment")
def create_experiment(args: argparse.Namespace):
    from promptflow._sdk._load_functions import _load_experiment_template

    template_path = args.template
    logger.debug("Loading experiment template from %s", template_path)
    template = _load_experiment_template(source=template_path)
    logger.debug("Creating experiment from template %s", template.dir_name)
    experiment = Experiment.from_template(template, name=args.name)
    logger.debug("Creating experiment %s", experiment.name)
    exp = _get_pf_client()._experiments.create_or_update(experiment)
    print(json.dumps(exp._to_dict(), indent=4))


@exception_handler("List experiment")
def list_experiment(args: argparse.Namespace):
    list_view_type = get_list_view_type(archived_only=args.archived_only, include_archived=args.include_archived)
results = _get_pf_client()._experiments.list(args.max_results, list_view_type=list_view_type)
print(json.dumps([result._to_dict() for result in results], indent=4))
@exception_handler("Show experiment")
def show_experiment(args: argparse.Namespace):
result = _get_pf_client()._experiments.get(args.name)
print(json.dumps(result._to_dict(), indent=4))
@exception_handler("Start experiment")
def start_experiment(args: argparse.Namespace):
result = _get_pf_client()._experiments.start(args.name)
print(json.dumps(result._to_dict(), indent=4))
@exception_handler("Stop experiment")
def stop_experiment(args: argparse.Namespace):
result = _get_pf_client()._experiments.stop(args.name)
print(json.dumps(result._to_dict(), indent=4))
--- promptflow/src/promptflow/promptflow/_cli/_pf/_experiment.py ---
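The subparser wiring above (one parser per sub-action, dispatched on `sub_action`) boils down to the standard `argparse` pattern. A reduced, self-contained sketch of that mechanism (not the real CLI, which also attaches shared base parameters and example epilogs to each sub-parser):

```python
import argparse

def build_parser():
    # Top-level "pf" parser with an "experiment" group, mirroring add_experiment_parser.
    parser = argparse.ArgumentParser(prog="pf")
    subparsers = parser.add_subparsers(dest="action")
    exp = subparsers.add_parser("experiment", help="Manage experiments.")
    exp_sub = exp.add_subparsers(dest="sub_action")
    # One sub-parser per sub-action, mirroring add_experiment_start / add_experiment_stop.
    start = exp_sub.add_parser("start", help="Start an experiment.")
    start.add_argument("--name", "-n", type=str, help="The experiment name.")
    stop = exp_sub.add_parser("stop", help="Stop an experiment.")
    stop.add_argument("--name", "-n", type=str, help="The experiment name.")
    return parser

args = build_parser().parse_args(["experiment", "start", "-n", "my_experiment"])
print(args.action, args.sub_action, args.name)  # experiment start my_experiment
```

Dispatch then branches on `args.sub_action`, exactly as `dispatch_experiment_commands` does.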
The directory structure in the package tool is as follows:
```text
{{ package_name }}
│ setup.py # This file contains metadata about your project like the name, version.
│
│ MANIFEST.in # This file is used to determine which files to include in the distribution of the project.
│
└───{{ package_name }}{{" " * (24 - package_name|length)}}# This is the source directory. All of your project’s source code should be placed in this directory.
{{ tool_name }}.py{{ " " * (17 - tool_name|length)}}# The source code of tools. Using the @tool decorator to identify the function as a tool.
utils.py # Utility functions for the package. A method for listing all tools defined in the package is generated in this file.
__init__.py
```
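The generated `utils.py` enumerates every function marked with the `@tool` decorator. As a rough illustration of that registration mechanism, here is a stand-in sketch (the real decorator is imported from the `promptflow` package; this only mimics the behavior):

```python
_REGISTRY = {}

def tool(func):
    # Stand-in for promptflow's @tool decorator: record the function as a tool.
    _REGISTRY[func.__name__] = func
    return func

@tool
def hello_tool(name: str) -> str:
    return f"Hello, {name}!"

def list_tools():
    # Mimics the generated utils.py helper that lists the tools defined in the package.
    return sorted(_REGISTRY)

print(list_tools())  # ['hello_tool']
```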
Please refer to [tool doc](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/index.html) for more details about how to develop a tool.

--- promptflow/src/promptflow/promptflow/_cli/data/package_tool/README.md.jinja2 ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import asyncio
import functools
import inspect
import logging
import threading
import time
import uuid
from contextvars import ContextVar
from logging import WARNING
from typing import Callable
from promptflow._core._errors import ToolExecutionError, UnexpectedError
from promptflow._core.cache_manager import AbstractCacheManager, CacheInfo, CacheResult
from promptflow._utils.logger_utils import flow_logger, logger
from promptflow._utils.thread_utils import RepeatLogTimer
from promptflow._utils.utils import generate_elapsed_time_messages
from promptflow.contracts.flow import Node
from promptflow.contracts.run_info import RunInfo
from promptflow.exceptions import PromptflowException
from .run_tracker import RunTracker
from .thread_local_singleton import ThreadLocalSingleton
from .tracer import Tracer
class FlowExecutionContext(ThreadLocalSingleton):
"""The context for a flow execution."""
CONTEXT_VAR_NAME = "Flow"
context_var = ContextVar(CONTEXT_VAR_NAME, default=None)
def __init__(
self,
name,
run_tracker: RunTracker,
cache_manager: AbstractCacheManager = None,
run_id=None,
flow_id=None,
line_number=None,
variant_id=None,
):
self._name = name
self._run_tracker = run_tracker
self._cache_manager = cache_manager or AbstractCacheManager.init_from_env()
self._run_id = run_id or str(uuid.uuid4())
self._flow_id = flow_id or self._run_id
self._line_number = line_number
self._variant_id = variant_id
def copy(self):
return FlowExecutionContext(
name=self._name,
run_tracker=self._run_tracker,
cache_manager=self._cache_manager,
run_id=self._run_id,
flow_id=self._flow_id,
line_number=self._line_number,
variant_id=self._variant_id,
)
def cancel_node_runs(self, msg):
self._run_tracker.cancel_node_runs(msg, self._run_id)
def invoke_tool(self, node: Node, f: Callable, kwargs):
run_info = self._prepare_node_run(node, f, kwargs)
node_run_id = run_info.run_id
traces = []
try:
hit_cache = False
# Get result from cache. If hit cache, no need to execute f.
cache_info: CacheInfo = self._cache_manager.calculate_cache_info(self._flow_id, f, [], kwargs)
if node.enable_cache and cache_info:
cache_result: CacheResult = self._cache_manager.get_cache_result(cache_info)
if cache_result and cache_result.hit_cache:
# Assign cached_flow_run_id and cached_run_id.
run_info.cached_flow_run_id = cache_result.cached_flow_run_id
run_info.cached_run_id = cache_result.cached_run_id
result = cache_result.result
hit_cache = True
if not hit_cache:
Tracer.start_tracing(node_run_id, node.name)
result = self._invoke_tool_with_timer(node, f, kwargs)
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, result=result, traces=traces)
# Record result in cache so that future run might reuse its result.
if not hit_cache and node.enable_cache:
self._persist_cache(cache_info, run_info)
flow_logger.info(f"Node {node.name} completes.")
return result
except Exception as e:
logger.exception(f"Node {node.name} in line {self._line_number} failed. Exception: {e}.")
if not traces:
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, ex=e, traces=traces)
raise
finally:
self._run_tracker.persist_node_run(run_info)
def _prepare_node_run(self, node: Node, f, kwargs={}):
node_run_id = self._generate_node_run_id(node)
flow_logger.info(f"Executing node {node.name}. node run id: {node_run_id}")
parent_run_id = f"{self._run_id}_{self._line_number}" if self._line_number is not None else self._run_id
run_info: RunInfo = self._run_tracker.start_node_run(
node=node.name,
flow_run_id=self._run_id,
parent_run_id=parent_run_id,
run_id=node_run_id,
index=self._line_number,
)
run_info.index = self._line_number
run_info.variant_id = self._variant_id
self._run_tracker.set_inputs(node_run_id, {key: value for key, value in kwargs.items() if key != "self"})
return run_info
async def invoke_tool_async(self, node: Node, f: Callable, kwargs):
if not inspect.iscoroutinefunction(f):
raise UnexpectedError(
message_format="Tool '{function}' in node '{node}' is not a coroutine function.",
function=f,
node=node.name,
)
run_info = self._prepare_node_run(node, f, kwargs=kwargs)
node_run_id = run_info.run_id
traces = []
try:
Tracer.start_tracing(node_run_id, node.name)
result = await self._invoke_tool_async_inner(node, f, kwargs)
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, result=result, traces=traces)
flow_logger.info(f"Node {node.name} completes.")
return result
# User tool should reraise the CancelledError after its own handling logic,
# so that the error can propagate to the scheduler for handling.
# Otherwise, the node would end with Completed status.
except asyncio.CancelledError as e:
logger.info(f"Node {node.name} in line {self._line_number} is canceled.")
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, ex=e, traces=traces)
raise
except Exception as e:
logger.exception(f"Node {node.name} in line {self._line_number} failed. Exception: {e}.")
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, ex=e, traces=traces)
raise
finally:
self._run_tracker.persist_node_run(run_info)
async def _invoke_tool_async_inner(self, node: Node, f: Callable, kwargs):
module = f.func.__module__ if isinstance(f, functools.partial) else f.__module__
try:
return await f(**kwargs)
except PromptflowException as e:
# All the exceptions from built-in tools are PromptflowException.
# For these cases, raise the exception directly.
if module is not None:
e.module = module
raise e
except Exception as e:
# Otherwise, we assume the error comes from user's tool.
# For these cases, raise ToolExecutionError, which is classified as UserError
# and shows stack trace in the error message to make it easy for user to troubleshoot.
raise ToolExecutionError(node_name=node.name, module=module) from e
def _invoke_tool_with_timer(self, node: Node, f: Callable, kwargs):
module = f.func.__module__ if isinstance(f, functools.partial) else f.__module__
node_name = node.name
try:
logging_name = node_name
if self._line_number is not None:
logging_name = f"{node_name} in line {self._line_number}"
interval_seconds = 60
start_time = time.perf_counter()
thread_id = threading.current_thread().ident
with RepeatLogTimer(
interval_seconds=interval_seconds,
logger=logger,
level=WARNING,
log_message_function=generate_elapsed_time_messages,
args=(logging_name, start_time, interval_seconds, thread_id),
):
return f(**kwargs)
except PromptflowException as e:
# All the exceptions from built-in tools are PromptflowException.
# For these cases, raise the exception directly.
if module is not None:
e.module = module
raise e
except Exception as e:
# Otherwise, we assume the error comes from user's tool.
# For these cases, raise ToolExecutionError, which is classified as UserError
# and shows stack trace in the error message to make it easy for user to troubleshoot.
raise ToolExecutionError(node_name=node_name, module=module) from e
def bypass_node(self, node: Node):
"""Update teh bypassed node run info."""
node_run_id = self._generate_node_run_id(node)
flow_logger.info(f"Bypassing node {node.name}. node run id: {node_run_id}")
parent_run_id = f"{self._run_id}_{self._line_number}" if self._line_number is not None else self._run_id
run_info = self._run_tracker.bypass_node_run(
node=node.name,
flow_run_id=self._run_id,
parent_run_id=parent_run_id,
run_id=node_run_id,
index=self._line_number,
variant_id=self._variant_id,
)
self._run_tracker.persist_node_run(run_info)
def _persist_cache(self, cache_info: CacheInfo, run_info: RunInfo):
"""Record result in cache storage if hash_id is valid."""
if cache_info and cache_info.hash_id is not None and len(cache_info.hash_id) > 0:
try:
self._cache_manager.persist_result(run_info, cache_info, self._flow_id)
except Exception as ex:
# Not a critical path, swallow the exception.
logging.warning(f"Failed to persist cache result. run_id: {run_info.run_id}. Exception: {ex}")
def _generate_node_run_id(self, node: Node) -> str:
if node.aggregation:
# For reduce node, the id should be constructed by the flow run info run id
return f"{self._run_id}_{node.name}_reduce"
if self._line_number is None:
return f"{self._run_id}_{node.name}_{uuid.uuid4()}"
return f"{self._run_id}_{node.name}_{self._line_number}"
--- promptflow/src/promptflow/promptflow/_core/flow_execution_context.py ---
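`FlowExecutionContext` exposes its active instance through a `ContextVar` (the `CONTEXT_VAR_NAME` / `context_var` pair used by `ThreadLocalSingleton`). A reduced sketch of that pattern, independent of promptflow:

```python
from contextvars import ContextVar

_current = ContextVar("Flow", default=None)

class FlowContext:
    """Minimal sketch of a ContextVar-backed active-context singleton."""

    def start(self):
        # Make this instance the active context; keep the token for reset.
        self._token = _current.set(self)
        return self

    def end(self):
        # Restore whatever context was active before start().
        _current.reset(self._token)

    @classmethod
    def active(cls):
        return _current.get()

ctx = FlowContext().start()
print(FlowContext.active() is ctx)  # True
ctx.end()
print(FlowContext.active())  # None
```

Because `ContextVar` values are isolated per thread and per asyncio task, each line execution can see its own active context without explicit locking.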
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import os
from enum import Enum
from pathlib import Path
LOGGER_NAME = "promptflow"
PROMPT_FLOW_HOME_DIR_ENV_VAR = "PF_HOME_DIRECTORY"
PROMPT_FLOW_DIR_NAME = ".promptflow"
def _prepare_home_dir() -> Path:
"""Prepare prompt flow home directory.
User can configure it by setting environment variable: `PF_HOME_DIRECTORY`;
if not configured, or configured value is not valid, use default value: "~/.promptflow/".
"""
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
if PROMPT_FLOW_HOME_DIR_ENV_VAR in os.environ:
logger.debug(
f"environment variable {PROMPT_FLOW_HOME_DIR_ENV_VAR!r} is set, honor it preparing home directory."
)
try:
pf_home_dir = Path(os.getenv(PROMPT_FLOW_HOME_DIR_ENV_VAR)).resolve()
pf_home_dir.mkdir(parents=True, exist_ok=True)
return pf_home_dir
except Exception as e: # pylint: disable=broad-except
_warning_message = (
"Invalid configuration for prompt flow home directory: "
f"{os.getenv(PROMPT_FLOW_HOME_DIR_ENV_VAR)!r}: {str(e)!r}.\n"
'Fall back to use default value: "~/.promptflow/".'
)
logger.warning(_warning_message)
try:
logger.debug("preparing home directory with default value.")
pf_home_dir = (Path.home() / PROMPT_FLOW_DIR_NAME).resolve()
pf_home_dir.mkdir(parents=True, exist_ok=True)
return pf_home_dir
except Exception as e: # pylint: disable=broad-except
_error_message = (
f"Cannot create prompt flow home directory: {str(e)!r}.\n"
"Please check if you have proper permission to operate the directory "
f"{HOME_PROMPT_FLOW_DIR.as_posix()!r}; or configure it via "
f"environment variable {PROMPT_FLOW_HOME_DIR_ENV_VAR!r}.\n"
)
logger.error(_error_message)
raise Exception(_error_message)
HOME_PROMPT_FLOW_DIR = _prepare_home_dir()
DAG_FILE_NAME = "flow.dag.yaml"
NODE_VARIANTS = "node_variants"
VARIANTS = "variants"
NODES = "nodes"
NODE = "node"
INPUTS = "inputs"
USE_VARIANTS = "use_variants"
DEFAULT_VAR_ID = "default_variant_id"
FLOW_TOOLS_JSON = "flow.tools.json"
FLOW_META_JSON = "flow.json"
FLOW_TOOLS_JSON_GEN_TIMEOUT = 60
PROMPT_FLOW_RUNS_DIR_NAME = ".runs"
PROMPT_FLOW_EXP_DIR_NAME = ".exps"
SERVICE_CONFIG_FILE = "pf.yaml"
PF_SERVICE_PORT_FILE = "pfs.port"
PF_SERVICE_LOG_FILE = "pfs.log"
PF_TRACE_CONTEXT = "PF_TRACE_CONTEXT"
LOCAL_MGMT_DB_PATH = (HOME_PROMPT_FLOW_DIR / "pf.sqlite").resolve()
LOCAL_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "pf.sqlite.lock").resolve()
SCHEMA_INFO_TABLENAME = "schema_info"
RUN_INFO_TABLENAME = "run_info"
RUN_INFO_CREATED_ON_INDEX_NAME = "idx_run_info_created_on"
CONNECTION_TABLE_NAME = "connection"
EXPERIMENT_TABLE_NAME = "experiment"
ORCHESTRATOR_TABLE_NAME = "orchestrator"
EXP_NODE_RUN_TABLE_NAME = "exp_node_run"
EXPERIMENT_CREATED_ON_INDEX_NAME = "idx_experiment_created_on"
BASE_PATH_CONTEXT_KEY = "base_path"
SCHEMA_KEYS_CONTEXT_CONFIG_KEY = "schema_configs_keys"
SCHEMA_KEYS_CONTEXT_SECRET_KEY = "schema_secrets_keys"
PARAMS_OVERRIDE_KEY = "params_override"
FILE_PREFIX = "file:"
KEYRING_SYSTEM = "promptflow"
KEYRING_ENCRYPTION_KEY_NAME = "encryption_key"
KEYRING_ENCRYPTION_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "encryption_key.lock").resolve()
REFRESH_CONNECTIONS_DIR_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "refresh_connections_dir.lock").resolve()
# Note: Use this only for show. Reading input should regard all '*' string as scrubbed, no matter the length.
SCRUBBED_VALUE = "******"
SCRUBBED_VALUE_NO_CHANGE = "<no-change>"
SCRUBBED_VALUE_USER_INPUT = "<user-input>"
CHAT_HISTORY = "chat_history"
WORKSPACE_LINKED_DATASTORE_NAME = "workspaceblobstore"
LINE_NUMBER = "line_number"
AZUREML_PF_RUN_PROPERTIES_LINEAGE = "azureml.promptflow.input_run_id"
AZURE_WORKSPACE_REGEX_FORMAT = (
"^azureml:[/]{1,2}subscriptions/([^/]+)/resource(groups|Groups)/([^/]+)"
"(/providers/Microsoft.MachineLearningServices)?/workspaces/([^/]+)$"
)
DEFAULT_ENCODING = "utf-8"
LOCAL_STORAGE_BATCH_SIZE = 1
LOCAL_SERVICE_PORT = 5000
BULK_RUN_ERRORS = "BulkRunErrors"
RUN_MACRO = "${run}"
VARIANT_ID_MACRO = "${variant_id}"
TIMESTAMP_MACRO = "${timestamp}"
DEFAULT_VARIANT = "variant_0"
# run visualize constants
VIS_HTML_TMPL = Path(__file__).parent / "data" / "visualize.j2"
VIS_JS_BUNDLE_FILENAME = "bulkTestDetails.min.js"
VIS_PORTAL_URL_TMPL = (
"https://ml.azure.com/prompts/flow/bulkrun/runs/outputs"
"?wsid=/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}"
"/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}&runId={names}"
)
REMOTE_URI_PREFIX = "azureml:"
REGISTRY_URI_PREFIX = "azureml://registries/"
FLOW_RESOURCE_ID_PREFIX = "azureml://locations/"
FLOW_DIRECTORY_MACRO_IN_CONFIG = "${flow_directory}"
# Tool meta info
UIONLY_HIDDEN = "uionly_hidden"
SKIP_FUNC_PARAMS = ["subscription_id", "resource_group_name", "workspace_name"]
ICON_DARK = "icon_dark"
ICON_LIGHT = "icon_light"
ICON = "icon"
TOOL_SCHEMA = Path(__file__).parent / "data" / "tool.schema.json"
# trace
TRACE_MGMT_DB_PATH = (HOME_PROMPT_FLOW_DIR / "trace.sqlite").resolve()
TRACE_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "trace.sqlite.lock").resolve()
SPAN_TABLENAME = "span"
PFS_MODEL_DATETIME_FORMAT = "iso8601"
class CustomStrongTypeConnectionConfigs:
PREFIX = "promptflow.connection."
TYPE = "custom_type"
MODULE = "module"
PACKAGE = "package"
PACKAGE_VERSION = "package_version"
PROMPTFLOW_TYPE_KEY = PREFIX + TYPE
PROMPTFLOW_MODULE_KEY = PREFIX + MODULE
PROMPTFLOW_PACKAGE_KEY = PREFIX + PACKAGE
PROMPTFLOW_PACKAGE_VERSION_KEY = PREFIX + PACKAGE_VERSION
@staticmethod
def is_custom_key(key):
return key not in [
CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY,
CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY,
CustomStrongTypeConnectionConfigs.PROMPTFLOW_PACKAGE_KEY,
CustomStrongTypeConnectionConfigs.PROMPTFLOW_PACKAGE_VERSION_KEY,
]
class RunTypes:
BATCH = "batch"
EVALUATION = "evaluation"
PAIRWISE_EVALUATE = "pairwise_evaluate"
COMMAND = "command"
class AzureRunTypes:
"""Run types for run entity from index service."""
BATCH = "azureml.promptflow.FlowRun"
EVALUATION = "azureml.promptflow.EvaluationRun"
PAIRWISE_EVALUATE = "azureml.promptflow.PairwiseEvaluationRun"
class RestRunTypes:
"""Run types for run entity from MT service."""
BATCH = "FlowRun"
EVALUATION = "EvaluationRun"
PAIRWISE_EVALUATE = "PairwiseEvaluationRun"
# run document statuses
class RunStatus(object):
# Ordered by transition order
QUEUED = "Queued"
NOT_STARTED = "NotStarted"
PREPARING = "Preparing"
PROVISIONING = "Provisioning"
STARTING = "Starting"
RUNNING = "Running"
CANCEL_REQUESTED = "CancelRequested"
CANCELED = "Canceled"
FINALIZING = "Finalizing"
COMPLETED = "Completed"
FAILED = "Failed"
UNAPPROVED = "Unapproved"
NOTRESPONDING = "NotResponding"
PAUSING = "Pausing"
PAUSED = "Paused"
@classmethod
def list(cls):
"""Return the list of supported run statuses."""
return [
cls.QUEUED,
cls.PREPARING,
cls.PROVISIONING,
cls.STARTING,
cls.RUNNING,
cls.CANCEL_REQUESTED,
cls.CANCELED,
cls.FINALIZING,
cls.COMPLETED,
cls.FAILED,
cls.NOT_STARTED,
cls.UNAPPROVED,
cls.NOTRESPONDING,
cls.PAUSING,
cls.PAUSED,
]
@classmethod
def get_running_statuses(cls):
"""Return the list of running statuses."""
return [
cls.NOT_STARTED,
cls.QUEUED,
cls.PREPARING,
cls.PROVISIONING,
cls.STARTING,
cls.RUNNING,
cls.UNAPPROVED,
cls.NOTRESPONDING,
cls.PAUSING,
cls.PAUSED,
]
@classmethod
def get_post_processing_statuses(cls):
"""Return the list of running statuses."""
return [cls.CANCEL_REQUESTED, cls.FINALIZING]
class FlowRunProperties:
FLOW_PATH = "flow_path"
OUTPUT_PATH = "output_path"
NODE_VARIANT = "node_variant"
RUN = "run"
SYSTEM_METRICS = "system_metrics"
# Experiment command node fields only
COMMAND = "command"
OUTPUTS = "outputs"
class CommonYamlFields:
"""Common yaml fields.
Common yaml fields are used to define the common fields in yaml files. It can be one of the following values: type,
name, $schema.
"""
TYPE = "type"
"""Type."""
NAME = "name"
"""Name."""
SCHEMA = "$schema"
"""Schema."""
MAX_LIST_CLI_RESULTS = 50 # general list
MAX_RUN_LIST_RESULTS = 50 # run list
MAX_SHOW_DETAILS_RESULTS = 100 # show details
class CLIListOutputFormat:
JSON = "json"
TABLE = "table"
class LocalStorageFilenames:
SNAPSHOT_FOLDER = "snapshot"
DAG = DAG_FILE_NAME
FLOW_TOOLS_JSON = FLOW_TOOLS_JSON
INPUTS = "inputs.jsonl"
OUTPUTS = "outputs.jsonl"
DETAIL = "detail.json"
METRICS = "metrics.json"
LOG = "logs.txt"
EXCEPTION = "error.json"
META = "meta.json"
class ListViewType(str, Enum):
ACTIVE_ONLY = "ActiveOnly"
ARCHIVED_ONLY = "ArchivedOnly"
ALL = "All"
def get_list_view_type(archived_only: bool, include_archived: bool) -> ListViewType:
if archived_only and include_archived:
raise Exception("Cannot provide both archived-only and include-archived.")
if include_archived:
return ListViewType.ALL
elif archived_only:
return ListViewType.ARCHIVED_ONLY
else:
return ListViewType.ACTIVE_ONLY
class RunInfoSources(str, Enum):
"""Run sources."""
LOCAL = "local"
INDEX_SERVICE = "index_service"
RUN_HISTORY = "run_history"
MT_SERVICE = "mt_service"
EXISTING_RUN = "existing_run"
class ConfigValueType(str, Enum):
STRING = "String"
SECRET = "Secret"
class ConnectionType(str, Enum):
_NOT_SET = "NotSet"
AZURE_OPEN_AI = "AzureOpenAI"
OPEN_AI = "OpenAI"
QDRANT = "Qdrant"
COGNITIVE_SEARCH = "CognitiveSearch"
SERP = "Serp"
AZURE_CONTENT_SAFETY = "AzureContentSafety"
FORM_RECOGNIZER = "FormRecognizer"
WEAVIATE = "Weaviate"
CUSTOM = "Custom"
ALL_CONNECTION_TYPES = set(
map(lambda x: f"{x.value}Connection", filter(lambda x: x != ConnectionType._NOT_SET, ConnectionType))
)
class ConnectionFields(str, Enum):
CONNECTION = "connection"
DEPLOYMENT_NAME = "deployment_name"
MODEL = "model"
SUPPORTED_CONNECTION_FIELDS = {
ConnectionFields.CONNECTION.value,
ConnectionFields.DEPLOYMENT_NAME.value,
ConnectionFields.MODEL.value,
}
class RunDataKeys:
PORTAL_URL = "portal_url"
DATA = "data"
RUN = "run"
OUTPUT = "output"
class RunHistoryKeys:
RunMetaData = "runMetadata"
HIDDEN = "hidden"
class ConnectionProvider(str, Enum):
LOCAL = "local"
AZUREML = "azureml"
class FlowType:
STANDARD = "standard"
EVALUATION = "evaluation"
CHAT = "chat"
@staticmethod
def get_all_values():
values = [value for key, value in vars(FlowType).items() if isinstance(value, str) and key.isupper()]
return values
CLIENT_FLOW_TYPE_2_SERVICE_FLOW_TYPE = {
FlowType.STANDARD: "default",
FlowType.EVALUATION: "evaluation",
FlowType.CHAT: "chat",
}
SERVICE_FLOW_TYPE_2_CLIENT_FLOW_TYPE = {value: key for key, value in CLIENT_FLOW_TYPE_2_SERVICE_FLOW_TYPE.items()}
class AzureFlowSource:
LOCAL = "local"
PF_SERVICE = "pf_service"
INDEX = "index"
class DownloadedRun:
SNAPSHOT_FOLDER = LocalStorageFilenames.SNAPSHOT_FOLDER
METRICS_FILE_NAME = LocalStorageFilenames.METRICS
LOGS_FILE_NAME = LocalStorageFilenames.LOG
RUN_METADATA_FILE_NAME = "run_metadata.json"
class ExperimentNodeType(object):
FLOW = "flow"
CHAT_GROUP = "chat_group"
COMMAND = "command"
class ExperimentStatus(object):
NOT_STARTED = "NotStarted"
QUEUING = "Queuing"
IN_PROGRESS = "InProgress"
TERMINATED = "Terminated"
class ExperimentNodeRunStatus(object):
NOT_STARTED = "NotStarted"
QUEUING = "Queuing"
IN_PROGRESS = "InProgress"
COMPLETED = "Completed"
FAILED = "Failed"
CANCELED = "Canceled"
class ContextAttributeKey:
EXPERIMENT = "experiment"
# Note: referenced id not used for lineage, only for evaluation
REFERENCED_LINE_RUN_ID = "referenced.line_run_id"
REFERENCED_BATCH_RUN_ID = "referenced.batch_run_id"
class EnvironmentVariables:
"""The environment variables."""
PF_USE_AZURE_CLI_CREDENTIAL = "PF_USE_AZURE_CLI_CREDENTIAL"
--- promptflow/src/promptflow/promptflow/_sdk/_constants.py ---
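As a worked example of one of the constants above, `AZURE_WORKSPACE_REGEX_FORMAT` can be exercised directly with `re` (the subscription, resource group, and workspace names below are made up):

```python
import re

AZURE_WORKSPACE_REGEX_FORMAT = (
    "^azureml:[/]{1,2}subscriptions/([^/]+)/resource(groups|Groups)/([^/]+)"
    "(/providers/Microsoft.MachineLearningServices)?/workspaces/([^/]+)$"
)

uri = (
    "azureml://subscriptions/sub-id/resourceGroups/my-rg"
    "/providers/Microsoft.MachineLearningServices/workspaces/my-ws"
)
# Groups 1, 3, and 5 capture the subscription, resource group, and workspace.
m = re.match(AZURE_WORKSPACE_REGEX_FORMAT, uri)
print(m.group(1), m.group(3), m.group(5))  # sub-id my-rg my-ws
```

Note the `providers/...` segment is optional, so the shorter `azureml://subscriptions/.../workspaces/...` form also matches.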
# Prompt Flow Service
This document describes the usage of the pfs (prompt flow service) CLI.
### Start prompt flow service (optional)
If you don't install pfs as a service, you need to start pfs manually.
The pfs CLI provides a **start** command to start the service. You can also use this command to specify the service port.
```commandline
usage: pfs [-h] [-p PORT]
Start prompt flow service.
optional arguments:
-h, --help show this help message and exit
-p PORT, --port PORT port of the promptflow service
```
If you don't specify a port when starting the service, pfs will first use the port from the configuration file "~/.promptflow/pfs.port".
If no port configuration is found, or the configured port is already in use, pfs will start the service on a random port.
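Once running, the service exposes a `/heartbeat` endpoint (the same probe the SDK's `is_pfs_service_healthy` helper hits). A small stdlib-only health check, assuming the service listens on localhost:

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_service_healthy(port: int) -> bool:
    # Returns True only when the heartbeat endpoint answers with HTTP 200.
    try:
        with urlopen(f"http://localhost:{port}/heartbeat", timeout=2) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

It returns `False` both when nothing is listening on the port and when the endpoint answers with a non-200 status.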
### Swagger of service
After the service starts, it provides Swagger UI documentation, served from "http://localhost:your-port/v1.0/swagger.json".
For details, please refer to [swagger.json](./swagger.json).
#### Generate C# client
1. Right-click the project, then Add -> REST API Client... -> Generate with OpenAPI Generator.
2. In the dialog that opens, fill in the file name and the swagger URL; the client will be generated under the project.
For details, please refer to [REST API Client Code Generator](https://marketplace.visualstudio.com/items?itemName=ChristianResmaHelle.ApiClientCodeGenerator2022).

--- promptflow/src/promptflow/promptflow/_sdk/_service/README.md ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import getpass
import socket
import time
from dataclasses import InitVar, dataclass, field
from datetime import datetime
from functools import wraps
import psutil
import requests
from flask import abort, make_response, request
from promptflow._sdk._constants import DEFAULT_ENCODING, HOME_PROMPT_FLOW_DIR, PF_SERVICE_PORT_FILE
from promptflow._sdk._errors import ConnectionNotFoundError, RunNotFoundError
from promptflow._sdk._utils import read_write_by_user
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow._utils.yaml_utils import dump_yaml, load_yaml
from promptflow._version import VERSION
from promptflow.exceptions import PromptflowException, UserErrorException
logger = get_cli_sdk_logger()
def local_user_only(func):
@wraps(func)
def wrapper(*args, **kwargs):
# Get the user name from request.
user = request.environ.get("REMOTE_USER") or request.headers.get("X-Remote-User")
if user != getpass.getuser():
abort(403)
return func(*args, **kwargs)
return wrapper
def get_port_from_config(create_if_not_exists=False):
(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE).touch(mode=read_write_by_user(), exist_ok=True)
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "r", encoding=DEFAULT_ENCODING) as f:
service_config = load_yaml(f) or {}
port = service_config.get("service", {}).get("port", None)
if not port and create_if_not_exists:
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "w", encoding=DEFAULT_ENCODING) as f:
            # Write a random port to ~/.promptflow/pfs.port
port = get_random_port()
service_config["service"] = service_config.get("service", {})
service_config["service"]["port"] = port
dump_yaml(service_config, f)
return port
def dump_port_to_config(port):
    # Write the port to ~/.promptflow/pfs.port; if the file already contains a port, it will be overwritten.
(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE).touch(mode=read_write_by_user(), exist_ok=True)
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "r", encoding=DEFAULT_ENCODING) as f:
service_config = load_yaml(f) or {}
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "w", encoding=DEFAULT_ENCODING) as f:
service_config["service"] = service_config.get("service", {})
service_config["service"]["port"] = port
dump_yaml(service_config, f)
def is_port_in_use(port: int):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
return s.connect_ex(("localhost", port)) == 0
def get_random_port():
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind(("localhost", 0))
return s.getsockname()[1]
def _get_process_by_port(port):
for proc in psutil.process_iter(["pid", "connections", "create_time"]):
try:
for connection in proc.connections():
if connection.laddr.port == port:
return proc
except psutil.AccessDenied:
pass
def kill_exist_service(port):
proc = _get_process_by_port(port)
if proc:
proc.terminate()
proc.wait(10)
def get_started_service_info(port):
service_info = {}
proc = _get_process_by_port(port)
if proc:
create_time = proc.info["create_time"]
process_uptime = datetime.now() - datetime.fromtimestamp(create_time)
service_info["create_time"] = str(datetime.fromtimestamp(create_time))
service_info["uptime"] = str(process_uptime)
service_info["port"] = port
return service_info
def make_response_no_content():
return make_response("", 204)
def is_pfs_service_healthy(pfs_port) -> bool:
"""Check if pfs service is running."""
try:
response = requests.get("http://localhost:{}/heartbeat".format(pfs_port))
if response.status_code == 200:
logger.debug(f"Pfs service is already running on port {pfs_port}.")
return True
except Exception: # pylint: disable=broad-except
pass
logger.warning(f"Pfs service can't be reached through port {pfs_port}, will try to start/force restart pfs.")
return False
def check_pfs_service_status(pfs_port, time_delay=5, time_threshold=30) -> bool:
wait_time = 0
is_healthy = False
while is_healthy is False and time_threshold > wait_time:
        logger.info(
            f"Pfs service is not ready. Waited for {wait_time}s so far; will wait for at most "
            f"{time_threshold}s."
        )
wait_time += time_delay
time.sleep(time_delay)
is_healthy = is_pfs_service_healthy(pfs_port)
return is_healthy
@dataclass
class ErrorInfo:
exception: InitVar[Exception]
code: str = field(init=False)
message: str = field(init=False)
message_format: str = field(init=False, default=None)
message_parameters: dict = field(init=False, default=None)
target: str = field(init=False, default=None)
module: str = field(init=False, default=None)
reference_code: str = field(init=False, default=None)
inner_exception: dict = field(init=False, default=None)
additional_info: dict = field(init=False, default=None)
error_codes: list = field(init=False, default=None)
def __post_init__(self, exception):
if isinstance(exception, PromptflowException):
self.code = "PromptflowError"
if isinstance(exception, (UserErrorException, ConnectionNotFoundError, RunNotFoundError)):
self.code = "UserError"
self.message = exception.message
self.message_format = exception.message_format
self.message_parameters = exception.message_parameters
self.target = exception.target
self.module = exception.module
self.reference_code = exception.reference_code
self.inner_exception = exception.inner_exception
self.additional_info = exception.additional_info
self.error_codes = exception.error_codes
else:
self.code = "ServiceError"
self.message = str(exception)
@dataclass
class FormattedException:
exception: InitVar[Exception]
status_code: InitVar[int] = 500
error: ErrorInfo = field(init=False)
time: str = field(init=False)
def __post_init__(self, exception, status_code):
self.status_code = status_code
if isinstance(exception, (UserErrorException, ConnectionNotFoundError, RunNotFoundError)):
self.status_code = 404
self.error = ErrorInfo(exception)
self.time = datetime.now().isoformat()
def build_pfs_user_agent():
extra_agent = f"local_pfs/{VERSION}"
if request.user_agent.string:
return f"{request.user_agent.string} {extra_agent}"
return extra_agent
def get_client_from_request() -> "PFClient":
from promptflow._sdk._pf_client import PFClient
return PFClient(user_agent=build_pfs_user_agent())
# ----- end of file: promptflow/src/promptflow/promptflow/_sdk/_service/utils/utils.py -----
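The `ErrorInfo`/`FormattedException` dataclasses above classify an exception as a user error versus a service error and pick an HTTP status code accordingly. A minimal standalone sketch of that classification (illustrative only; `AppUserError` is a hypothetical stand-in for promptflow's `UserErrorException`, which is not imported here):

```python
from dataclasses import InitVar, dataclass, field
from datetime import datetime


class AppUserError(Exception):
    """Hypothetical stand-in for user-facing errors such as UserErrorException."""


@dataclass
class FormattedError:
    exception: InitVar[Exception]
    status_code: int = 500
    code: str = field(init=False)
    message: str = field(init=False)
    time: str = field(init=False)

    def __post_init__(self, exception):
        # User errors map to 404 in the service above; everything else stays 500.
        if isinstance(exception, AppUserError):
            self.status_code = 404
            self.code = "UserError"
        else:
            self.code = "ServiceError"
        self.message = str(exception)
        self.time = datetime.now().isoformat()


err = FormattedError(AppUserError("run 'foo' not found"))
print(err.status_code, err.code)  # 404 UserError
```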
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from enum import Enum
from typing import Dict, Sequence, Set, List, Any
from promptflow._utils.exception_utils import ErrorResponse
from promptflow.contracts.run_info import FlowRunInfo, RunInfo, Status
# define metrics dimension keys
FLOW_KEY = "flow"
RUN_STATUS_KEY = "run_status"
NODE_KEY = "node"
LLM_ENGINE_KEY = "llm_engine"
TOKEN_TYPE_KEY = "token_type"
RESPONSE_CODE_KEY = "response_code"
EXCEPTION_TYPE_KEY = "exception"
STREAMING_KEY = "streaming"
API_CALL_KEY = "api_call"
RESPONSE_TYPE_KEY = "response_type" # firstbyte, lastbyte, default
HISTOGRAM_BOUNDARIES: Sequence[float] = (
1.0,
5.0,
10.0,
25.0,
50.0,
75.0,
100.0,
250.0,
500.0,
750.0,
1000.0,
2500.0,
5000.0,
7500.0,
10000.0,
25000.0,
50000.0,
75000.0,
100000.0,
300000.0,
)
class ResponseType(Enum):
# latency from receiving the request to sending the first byte of response, only applicable to streaming flow
FirstByte = "firstbyte"
# latency from receiving the request to sending the last byte of response, only applicable to streaming flow
LastByte = "lastbyte"
# latency from receiving the request to sending the whole response, only applicable to non-streaming flow
Default = "default"
class LLMTokenType(Enum):
PromptTokens = "prompt_tokens"
CompletionTokens = "completion_tokens"
try:
from opentelemetry import metrics
from opentelemetry.metrics import set_meter_provider
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import ExplicitBucketHistogramAggregation, SumAggregation, View
# define meter
meter = metrics.get_meter_provider().get_meter("Promptflow Standard Metrics")
# define metrics
token_consumption = meter.create_counter("Token_Consumption")
flow_latency = meter.create_histogram("Flow_Latency")
node_latency = meter.create_histogram("Node_Latency")
flow_request = meter.create_counter("Flow_Request")
remote_api_call_latency = meter.create_histogram("RPC_Latency")
remote_api_call_request = meter.create_counter("RPC_Request")
node_request = meter.create_counter("Node_Request")
# metrics for streaming
streaming_response_duration = meter.create_histogram("Flow_Streaming_Response_Duration")
# define metrics views
# token view
token_view = View(
instrument_name="Token_Consumption",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, LLM_ENGINE_KEY, TOKEN_TYPE_KEY},
aggregation=SumAggregation(),
)
# latency view
flow_latency_view = View(
instrument_name="Flow_Latency",
description="",
attribute_keys={FLOW_KEY, RESPONSE_CODE_KEY, STREAMING_KEY, RESPONSE_TYPE_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
node_latency_view = View(
instrument_name="Node_Latency",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, RUN_STATUS_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
flow_streaming_response_duration_view = View(
instrument_name="Flow_Streaming_Response_Duration",
description="during between sending the first byte and last byte of the response, only for streaming flow",
attribute_keys={FLOW_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
# request view
request_view = View(
instrument_name="Flow_Request",
description="",
attribute_keys={FLOW_KEY, RESPONSE_CODE_KEY, STREAMING_KEY, EXCEPTION_TYPE_KEY},
aggregation=SumAggregation(),
)
node_request_view = View(
instrument_name="Node_Request",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, RUN_STATUS_KEY, EXCEPTION_TYPE_KEY},
aggregation=SumAggregation(),
)
# Remote API call view
remote_api_call_latency_view = View(
instrument_name="RPC_Latency",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, API_CALL_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
remote_api_call_request_view = View(
instrument_name="RPC_Request",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, API_CALL_KEY, EXCEPTION_TYPE_KEY},
aggregation=SumAggregation(),
)
metrics_enabled = True
except ImportError:
metrics_enabled = False
class MetricsRecorder(object):
"""OpenTelemetry Metrics Recorder"""
def __init__(self, logger, reader=None, common_dimensions: Dict[str, str] = None) -> None:
"""initialize metrics recorder
:param logger: logger
:type logger: Logger
:param reader: metric reader
:type reader: opentelemetry.sdk.metrics.export.MetricReader
:param common_dimensions: common dimensions for all metrics
:type common_dimensions: Dict[str, str]
"""
self.logger = logger
if not metrics_enabled:
logger.warning(
"OpenTelemetry metric is not enabled, metrics will not be recorded."
+ "If you want to collect metrics, please enable 'azureml-serving' extra requirement "
+ "for promptflow: 'pip install promptflow[azureml-serving]'"
)
return
self.common_dimensions = common_dimensions or {}
self.reader = reader
        dimension_keys = set(self.common_dimensions)
self._config_common_monitor(dimension_keys, reader)
logger.info("OpenTelemetry metric is enabled, metrics will be recorded.")
def record_flow_request(self, flow_id: str, response_code: int, exception: str, streaming: bool):
if not metrics_enabled:
return
try:
flow_request.add(
1,
{
FLOW_KEY: flow_id,
RESPONSE_CODE_KEY: str(response_code),
EXCEPTION_TYPE_KEY: exception,
STREAMING_KEY: str(streaming),
**self.common_dimensions,
},
)
except Exception as e:
self.logger.warning("failed to record flow request metrics: %s", e)
def record_flow_latency(
self, flow_id: str, response_code: int, streaming: bool, response_type: str, duration: float
):
if not metrics_enabled:
return
try:
flow_latency.record(
duration,
{
FLOW_KEY: flow_id,
RESPONSE_CODE_KEY: str(response_code),
STREAMING_KEY: str(streaming),
RESPONSE_TYPE_KEY: response_type,
**self.common_dimensions,
},
)
except Exception as e:
self.logger.warning("failed to record flow latency metrics: %s", e)
def record_flow_streaming_response_duration(self, flow_id: str, duration: float):
if not metrics_enabled:
return
try:
streaming_response_duration.record(duration, {FLOW_KEY: flow_id, **self.common_dimensions})
except Exception as e:
self.logger.warning("failed to record streaming duration metrics: %s", e)
def record_tracing_metrics(self, flow_run: FlowRunInfo, node_runs: Dict[str, RunInfo]):
if not metrics_enabled:
return
try:
for _, run in node_runs.items():
flow_id = flow_run.flow_id if flow_run is not None else "default"
                if run.system_metrics:
duration = run.system_metrics.get("duration", None)
if duration is not None:
duration = duration * 1000
node_latency.record(
duration,
{
FLOW_KEY: flow_id,
NODE_KEY: run.node,
RUN_STATUS_KEY: run.status.value,
**self.common_dimensions,
},
)
# openai token metrics
inputs = run.inputs or {}
engine = inputs.get("deployment_name") or ""
for token_type in [LLMTokenType.PromptTokens.value, LLMTokenType.CompletionTokens.value]:
count = run.system_metrics.get(token_type, None)
if count:
token_consumption.add(
count,
{
FLOW_KEY: flow_id,
NODE_KEY: run.node,
LLM_ENGINE_KEY: engine,
TOKEN_TYPE_KEY: token_type,
**self.common_dimensions,
},
)
# record node request metric
err = None
if run.status != Status.Completed:
err = "unknown"
if isinstance(run.error, dict):
err = self._get_exact_error(run.error)
elif isinstance(run.error, str):
err = run.error
node_request.add(
1,
{
FLOW_KEY: flow_id,
NODE_KEY: run.node,
RUN_STATUS_KEY: run.status.value,
EXCEPTION_TYPE_KEY: err,
**self.common_dimensions,
},
)
if run.api_calls and len(run.api_calls) > 0:
for api_call in run.api_calls:
# since first layer api_call is the node call itself, we ignore them here
api_calls: List[Dict[str, Any]] = api_call.get("children", None)
if api_calls is None:
continue
self._record_api_call_metrics(flow_id, run.node, api_calls)
except Exception as e:
self.logger.warning(f"failed to record metrics: {e}, flow_run: {flow_run}, node_runs: {node_runs}")
def _record_api_call_metrics(self, flow_id, node, api_calls: List[Dict[str, Any]], prefix: str = None):
if api_calls and len(api_calls) > 0:
for api_call in api_calls:
cur_name = api_call.get("name")
api_name = f"{prefix}_{cur_name}" if prefix else cur_name
# api-call latency metrics
# sample data: {"start_time":1688462182.744916, "end_time":1688462184.280989}
start_time = api_call.get("start_time", None)
end_time = api_call.get("end_time", None)
if start_time and end_time:
api_call_latency_ms = (end_time - start_time) * 1000
remote_api_call_latency.record(
api_call_latency_ms,
{
FLOW_KEY: flow_id,
NODE_KEY: node,
API_CALL_KEY: api_name,
**self.common_dimensions,
},
)
# remote api call request metrics
err = api_call.get("error") or {}
if isinstance(err, dict):
exception_type = self._get_exact_error(err)
else:
exception_type = err
remote_api_call_request.add(
1,
{
FLOW_KEY: flow_id,
NODE_KEY: node,
API_CALL_KEY: api_name,
EXCEPTION_TYPE_KEY: exception_type,
**self.common_dimensions,
},
)
child_api_calls = api_call.get("children", None)
if child_api_calls:
self._record_api_call_metrics(flow_id, node, child_api_calls, api_name)
def _get_exact_error(self, err: Dict):
error_response = ErrorResponse.from_error_dict(err)
return error_response.innermost_error_code
    # configure monitor; by default only expose prometheus metrics
    def _config_common_monitor(self, common_keys: Set[str] = frozenset(), reader=None):
metrics_views = [
token_view,
flow_latency_view,
node_latency_view,
request_view,
remote_api_call_latency_view,
remote_api_call_request_view,
]
for view in metrics_views:
view._attribute_keys.update(common_keys)
readers = []
if reader:
readers.append(reader)
meter_provider = MeterProvider(
metric_readers=readers,
views=metrics_views,
)
set_meter_provider(meter_provider)
# ----- end of file: promptflow/src/promptflow/promptflow/_sdk/_serving/monitor/metrics.py -----
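The `HISTOGRAM_BOUNDARIES` tuple above defines the explicit bucket edges (in milliseconds) that determine which histogram bucket each recorded latency lands in. A quick stdlib sketch of that bucketing — illustrative only, since the real aggregation happens inside the OpenTelemetry SDK's `ExplicitBucketHistogramAggregation`:

```python
from bisect import bisect_left

# Same boundaries as HISTOGRAM_BOUNDARIES above (milliseconds).
BOUNDARIES = (1.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0, 500.0, 750.0,
              1000.0, 2500.0, 5000.0, 7500.0, 10000.0, 25000.0, 50000.0,
              75000.0, 100000.0, 300000.0)


def bucket_index(value_ms: float) -> int:
    """Return the bucket a measurement falls into: bucket i covers
    (BOUNDARIES[i-1], BOUNDARIES[i]]; index len(BOUNDARIES) is the overflow bucket."""
    return bisect_left(BOUNDARIES, value_ms)


counts = [0] * (len(BOUNDARIES) + 1)
for latency in (0.4, 42.0, 42.0, 9_999.0, 500_000.0):
    counts[bucket_index(latency)] += 1
print(counts[0], counts[bucket_index(42.0)], counts[-1])  # 1 2 1
```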
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from promptflow._version import VERSION
USER_AGENT = "{}/{}".format("promptflow-sdk", VERSION)
| promptflow/src/promptflow/promptflow/_sdk/_user_agent.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_user_agent.py",
"repo_id": "promptflow",
"token_count": 57
} | 14 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from os import PathLike
from typing import Union
# TODO(2528165): remove this file when we deprecate Flow.run_bulk
class BaseInputs(object):
def __init__(self, data: Union[str, PathLike], inputs_mapping: dict = None, **kwargs):
self.data = data
self.inputs_mapping = inputs_mapping
class BulkInputs(BaseInputs):
"""Bulk run inputs.
data: pointer to test data for standard runs
inputs_mapping: define a data flow logic to map input data, support:
from data: data.col1:
Example:
{"question": "${data.question}", "context": "${data.context}"}
"""
# TODO: support inputs_mapping for bulk run
pass
class EvalInputs(BaseInputs):
"""Evaluation flow run inputs.
data: pointer to test data (of variant bulk runs) for eval runs
variant:
variant run id or variant run
keep lineage between current run and variant runs
variant outputs can be referenced as ${batch_run.outputs.col_name} in inputs_mapping
baseline:
baseline run id or baseline run
baseline bulk run for eval runs for pairwise comparison
inputs_mapping: define a data flow logic to map input data, support:
from data: data.col1:
from variant:
[0].col1, [1].col2: if need different col from variant run data
variant.output.col1: if all upstream runs has col1
Example:
{"ground_truth": "${data.answer}", "prediction": "${batch_run.outputs.answer}"}
"""
def __init__(
self,
data: Union[str, PathLike],
variant: Union[str, "BulkRun"] = None, # noqa: F821
baseline: Union[str, "BulkRun"] = None, # noqa: F821
inputs_mapping: dict = None,
**kwargs
):
super().__init__(data=data, inputs_mapping=inputs_mapping, **kwargs)
self.variant = variant
self.baseline = baseline
# ----- end of file: promptflow/src/promptflow/promptflow/_sdk/entities/_run_inputs.py -----
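The `inputs_mapping` strings documented above, such as `{"ground_truth": "${data.answer}"}`, describe how run inputs are pulled from data columns or upstream run outputs. A hedged sketch of how such a mapping could be resolved for a single data row — an illustration only, not the actual promptflow resolver (the flattened `"outputs.answer"` key is an assumption made for this sketch):

```python
import re

# Matches references of the form "${source.column}".
_REF = re.compile(r"^\$\{(\w+)\.(.+)\}$")


def resolve_row(mapping: dict, sources: dict) -> dict:
    """Resolve '${source.column}' references against per-row source dicts."""
    resolved = {}
    for name, ref in mapping.items():
        m = _REF.match(ref)
        if not m:  # literal value, pass through unchanged
            resolved[name] = ref
            continue
        source, column = m.groups()
        resolved[name] = sources[source][column]
    return resolved


row = resolve_row(
    {"ground_truth": "${data.answer}", "prediction": "${batch_run.outputs.answer}"},
    {"data": {"answer": "Paris"}, "batch_run": {"outputs.answer": "paris"}},
)
print(row)  # {'ground_truth': 'Paris', 'prediction': 'paris'}
```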
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import os
from datetime import datetime
from enum import Enum
from traceback import TracebackException, format_tb
from types import TracebackType, FrameType
from promptflow.exceptions import PromptflowException, SystemErrorException, UserErrorException, ValidationException
ADDITIONAL_INFO_USER_EXECUTION_ERROR = "ToolExecutionErrorDetails"
ADDITIONAL_INFO_USER_CODE_STACKTRACE = "UserCodeStackTrace"
CAUSE_MESSAGE = "\nThe above exception was the direct cause of the following exception:\n\n"
CONTEXT_MESSAGE = "\nDuring handling of the above exception, another exception occurred:\n\n"
TRACEBACK_MESSAGE = "Traceback (most recent call last):\n"
class RootErrorCode:
USER_ERROR = "UserError"
SYSTEM_ERROR = "SystemError"
class ResponseCode(str, Enum):
SUCCESS = "200"
ACCEPTED = "202"
REDIRECTION = "300"
CLIENT_ERROR = "400"
SERVICE_ERROR = "500"
UNKNOWN = "0"
class ErrorResponse:
"""A class that represents the response body when an error occurs.
It follows the following specification:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
"""
def __init__(self, error_dict):
self._error_dict = error_dict
@staticmethod
def from_error_dict(error_dict):
"""Create an ErrorResponse from an error dict.
The error dict which usually is generated by ExceptionPresenter.create(exception).to_dict()
"""
return ErrorResponse(error_dict)
@staticmethod
def from_exception(ex: Exception, *, include_debug_info=False):
presenter = ExceptionPresenter.create(ex)
error_dict = presenter.to_dict(include_debug_info=include_debug_info)
return ErrorResponse(error_dict)
@property
def message(self):
return self._error_dict.get("message", "")
@property
def response_code(self):
"""Given the error code, return the corresponding http response code."""
root_error_code = self._error_dict.get("code")
return ResponseCode.CLIENT_ERROR if root_error_code == RootErrorCode.USER_ERROR else ResponseCode.SERVICE_ERROR
@property
def additional_info(self):
"""Return the additional info of the error.
The additional info is defined in the error response.
It is stored as a list of dict, each of which contains a "type" and "info" field.
We change the list of dict to a dict of dict for easier access.
"""
result = {}
list_of_dict = self._error_dict.get("additionalInfo")
if not list_of_dict or not isinstance(list_of_dict, list):
return result
for item in list_of_dict:
# We just ignore the item if it is not a dict or does not contain the required fields.
if not isinstance(item, dict):
continue
name = item.get("type")
info = item.get("info")
if not name or not info:
continue
result[name] = info
return result
def get_additional_info(self, name):
"""Get the additional info by name."""
return self.additional_info.get(name)
def get_user_execution_error_info(self):
"""Get user tool execution error info from additional info."""
user_execution_error_info = self.get_additional_info(ADDITIONAL_INFO_USER_EXECUTION_ERROR)
if not user_execution_error_info or not isinstance(user_execution_error_info, dict):
return {}
return user_execution_error_info
def to_dict(self):
from promptflow._core.operation_context import OperationContext
return {
"error": self._error_dict,
"correlation": None, # TODO: to be implemented
"environment": None, # TODO: to be implemented
"location": None, # TODO: to be implemented
"componentName": OperationContext.get_instance().get_user_agent(),
"time": datetime.utcnow().isoformat(),
}
def to_simplified_dict(self):
return {
"error": {
"code": self._error_dict.get("code"),
"message": self._error_dict.get("message"),
}
}
@property
def error_codes(self):
error = self._error_dict
error_codes = []
while error is not None:
code = error.get("code")
if code is not None:
error_codes.append(code)
error = error.get("innerError")
else:
break
return error_codes
@property
def error_code_hierarchy(self):
"""Get the code hierarchy from error dict."""
return "/".join(self.error_codes)
@property
def innermost_error_code(self):
error_codes = self.error_codes
if error_codes:
return error_codes[-1]
return None
class ExceptionPresenter:
"""A class that can extract information from the exception instance.
It is designed to work for both PromptflowException and other exceptions.
"""
def __init__(self, ex: Exception):
self._ex = ex
@staticmethod
def create(ex: Exception):
if isinstance(ex, PromptflowException):
return PromptflowExceptionPresenter(ex)
return ExceptionPresenter(ex)
@property
def formatted_traceback(self):
te = TracebackException.from_exception(self._ex)
return "".join(te.format())
@property
def debug_info(self):
return self.build_debug_info(self._ex)
def build_debug_info(self, ex: Exception):
inner_exception: dict = None
stack_trace = TRACEBACK_MESSAGE + "".join(format_tb(ex.__traceback__))
if ex.__cause__ is not None:
inner_exception = self.build_debug_info(ex.__cause__)
stack_trace = CAUSE_MESSAGE + stack_trace
elif ex.__context__ is not None and not ex.__suppress_context__:
inner_exception = self.build_debug_info(ex.__context__)
stack_trace = CONTEXT_MESSAGE + stack_trace
return {
"type": ex.__class__.__qualname__,
"message": str(ex),
"stackTrace": stack_trace,
"innerException": inner_exception,
}
@property
def error_codes(self):
"""The hierarchy of the error codes.
We follow the "Microsoft REST API Guidelines" to define error codes in a hierarchy style.
See the below link for details:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
This method returns the error codes in a list. It will be converted into a nested json format by
error_code_recursed.
"""
return [infer_error_code_from_class(SystemErrorException), self._ex.__class__.__name__]
@property
def error_code_recursed(self):
"""Returns a dict of the error codes for this exception.
It is populated in a recursive manner, using the source from `error_codes` property.
i.e. For PromptflowException, such as ToolExcutionError which inherits from UserErrorException,
The result would be:
{
"code": "UserError",
"innerError": {
"code": "ToolExecutionError",
"innerError": None,
},
}
For other exception types, such as ValueError, the result would be:
{
"code": "SystemError",
"innerError": {
"code": "ValueError",
"innerError": None,
},
}
"""
current_error = None
reversed_error_codes = reversed(self.error_codes) if self.error_codes else []
for code in reversed_error_codes:
current_error = {
"code": code,
"innerError": current_error,
}
return current_error
def to_dict(self, *, include_debug_info=False):
"""Return a dict representation of the exception.
This dict specification corresponds to the specification of the Microsoft API Guidelines:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
Note that this dict represents the "error" field in the response body of the API.
The whole error response is then populated in another place outside of this class.
"""
if isinstance(self._ex, JsonSerializedPromptflowException):
return self._ex.to_dict(include_debug_info=include_debug_info)
# Otherwise, return general dict representation of the exception.
result = {"message": str(self._ex), "messageFormat": "", "messageParameters": {}}
result.update(self.error_code_recursed)
if include_debug_info:
result["debugInfo"] = self.debug_info
return result
class PromptflowExceptionPresenter(ExceptionPresenter):
@property
def error_codes(self):
"""The hierarchy of the error codes.
We follow the "Microsoft REST API Guidelines" to define error codes in a hierarchy style.
See the below link for details:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
For subclass of PromptflowException, use the ex.error_codes directly.
For PromptflowException (not a subclass), the ex.error_code is None.
The result should be:
["SystemError", {inner_exception type name if exist}]
"""
if self._ex.error_codes:
return self._ex.error_codes
# For PromptflowException (not a subclass), the ex.error_code is None.
# Handle this case specifically.
error_codes = [infer_error_code_from_class(SystemErrorException)]
if self._ex.inner_exception:
error_codes.append(infer_error_code_from_class(self._ex.inner_exception.__class__))
return error_codes
def to_dict(self, *, include_debug_info=False):
result = {
"message": self._ex.message,
"messageFormat": self._ex.message_format,
"messageParameters": self._ex.serializable_message_parameters,
"referenceCode": self._ex.reference_code,
}
result.update(self.error_code_recursed)
if self._ex.additional_info:
result["additionalInfo"] = [{"type": k, "info": v} for k, v in self._ex.additional_info.items()]
if include_debug_info:
result["debugInfo"] = self.debug_info
return result
class JsonSerializedPromptflowException(Exception):
"""Json serialized PromptflowException.
This exception only has one argument message to avoid the
argument missing error when load/dump with pickle in multiprocessing.
Ref: https://bugs.python.org/issue32696
:param message: A Json serialized message describing the error.
:type message: str
"""
def __init__(self, message):
self.message = message
super().__init__(self.message)
def __str__(self):
return self.message
def to_dict(self, *, include_debug_info=False):
# Return a dict representation of the inner exception.
error_dict = json.loads(self.message)
# The original serialized error might contain debugInfo.
# We pop it out if include_debug_info is set to False.
if not include_debug_info and "debugInfo" in error_dict:
error_dict.pop("debugInfo")
return error_dict
def get_tb_next(tb: TracebackType, next_cnt: int):
"""Return the nth tb_next of input tb.
If the tb does not have n tb_next, return the last tb which has a value.
n = next_cnt
"""
while tb.tb_next and next_cnt > 0:
tb = tb.tb_next
next_cnt -= 1
return tb
def last_frame_info(ex: Exception):
"""Return the line number where the error occurred."""
if ex:
tb = TracebackException.from_exception(ex)
last_frame = tb.stack[-1] if tb.stack else None
if last_frame:
return {
"filename": last_frame.filename,
"lineno": last_frame.lineno,
"name": last_frame.name,
}
return {}
def infer_error_code_from_class(cls):
# Python has a built-in SystemError
if cls == SystemErrorException:
return RootErrorCode.SYSTEM_ERROR
if cls == UserErrorException:
return RootErrorCode.USER_ERROR
if cls == ValidationException:
return "ValidationError"
return cls.__name__
def is_pf_core_frame(frame: FrameType):
"""Check if the frame is from promptflow core code."""
from promptflow import _core
folder_of_core = os.path.dirname(_core.__file__)
return folder_of_core in frame.f_code.co_filename
def remove_suffix(text: str, suffix: str = None):
"""
Given a string, removes specified suffix, if it has.
>>> remove_suffix('hello world', 'world')
'hello '
>>> remove_suffix('hello world', 'hello ')
'hello world'
>>> remove_suffix('NoColumnFoundError', 'Error')
'NoColumnFound'
    :param text: string from which the suffix will be removed.
    :param suffix: suffix to be removed.
    :return: the string with the suffix removed.
"""
if not text or not suffix:
return text
if not text.endswith(suffix):
return text
return text[: -len(suffix)]
# ----- end of file: promptflow/src/promptflow/promptflow/_utils/exception_utils.py -----
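The `error_code_recursed` property above folds a flat list of error codes into the nested `{"code": ..., "innerError": ...}` shape required by the error-response spec. The same fold in isolation:

```python
def nest_error_codes(codes):
    """Fold ["UserError", "ToolExecutionError"] into the nested
    {"code": ..., "innerError": ...} structure used by error responses."""
    nested = None
    for code in reversed(codes):
        nested = {"code": code, "innerError": nested}
    return nested


print(nest_error_codes(["UserError", "ToolExecutionError"]))
# {'code': 'UserError', 'innerError': {'code': 'ToolExecutionError', 'innerError': None}}
```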
from io import StringIO
from os import PathLike
from typing import IO, AnyStr, Dict, Optional, Union
from ruamel.yaml import YAML, YAMLError
from promptflow._constants import DEFAULT_ENCODING
from promptflow._utils._errors import YamlParseError
def load_yaml(source: Optional[Union[AnyStr, PathLike, IO]]) -> Dict:
# null check - just return an empty dict.
# Certain CLI commands rely on this behavior to produce a resource
# via CLI, which is then populated through CLArgs.
"""Load a local YAML file or a readable stream object.
.. note::
1. For a local file yaml
.. code-block:: python
yaml_path = "path/to/yaml"
content = load_yaml(yaml_path)
2. For a readable stream object
.. code-block:: python
with open("path/to/yaml", "r", encoding="utf-8") as f:
content = load_yaml(f)
:param source: The relative or absolute path to the local file, or a readable stream object.
:type source: str
:return: A dictionary representation of the local file's contents.
:rtype: Dict
"""
if source is None:
return {}
# pylint: disable=redefined-builtin
input = None
must_open_file = False
try: # check source type by duck-typing it as an IOBase
readable = source.readable()
if not readable: # source is misformatted stream or file
msg = "File Permissions Error: The already-open \n\n inputted file is not readable."
raise Exception(msg)
# source is an already-open stream or file, we can read() from it directly.
input = source
except AttributeError:
        # source has no readable() method; assume it's a string or file path.
must_open_file = True
if must_open_file: # If supplied a file path, open it.
try:
input = open(source, "r", encoding=DEFAULT_ENCODING)
except OSError: # FileNotFoundError introduced in Python 3
msg = "No such file or directory: {}"
raise Exception(msg.format(source))
# input should now be a readable file or stream. Parse it.
cfg = {}
try:
yaml = YAML()
yaml.preserve_quotes = True
cfg = yaml.load(input)
except YAMLError as e:
msg = f"Error while parsing yaml file: {source} \n\n {str(e)}"
raise Exception(msg)
finally:
if must_open_file:
input.close()
return cfg
def load_yaml_string(yaml_string: str):
"""Load a yaml string.
.. code-block:: python
yaml_string = "some yaml string"
object = load_yaml_string(yaml_string)
:param yaml_string: A yaml string.
:type yaml_string: str
"""
yaml = YAML()
yaml.preserve_quotes = True
return yaml.load(yaml_string)
def dump_yaml(*args, **kwargs):
"""Dump data to a yaml string or stream.
.. note::
1. Dump to a yaml string
.. code-block:: python
data = {"key": "value"}
yaml_string = dump_yaml(data)
2. Dump to a stream
.. code-block:: python
data = {"key": "value"}
with open("path/to/yaml", "w", encoding="utf-8") as f:
dump_yaml(data, f)
"""
yaml = YAML()
yaml.default_flow_style = False
# when using with no stream parameter but just the data, dump to yaml string and return
if len(args) == 1:
string_stream = StringIO()
yaml.dump(args[0], string_stream, **kwargs)
output_string = string_stream.getvalue()
string_stream.close()
return output_string
# when using with stream parameter, dump to stream. e.g.:
# open('test.yaml', 'w', encoding='utf-8') as f:
# dump_yaml(data, f)
elif len(args) == 2:
return yaml.dump(*args, **kwargs)
else:
raise YamlParseError("Only 1 or 2 positional arguments are allowed for dump yaml util function.")
# ----- end of file: promptflow/src/promptflow/promptflow/_utils/yaml_utils.py -----
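`load_yaml` above accepts either a file path or an already-open stream by duck-typing `readable()`. The same path-or-stream pattern can be sketched with the stdlib `json` module standing in for ruamel.yaml, so the sketch runs without extra dependencies (the `load_config` name and json format are assumptions for illustration):

```python
import io
import json


def load_config(source):
    """Accept a file path or an already-open readable stream, mirroring load_yaml."""
    if source is None:
        return {}
    try:  # duck-type source as a stream, like load_yaml above
        if not source.readable():
            raise ValueError("the already-open input stream is not readable")
        return json.load(source)
    except AttributeError:
        pass  # no .readable() method -> treat source as a file path
    with open(source, "r", encoding="utf-8") as f:
        return json.load(f)


print(load_config(io.StringIO('{"key": "value"}')))  # {'key': 'value'}
```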
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import TYPE_CHECKING
from azure.core.configuration import Configuration
from azure.core.pipeline import policies
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Optional
VERSION = "unknown"
class AzureMachineLearningDesignerServiceClientConfiguration(Configuration):
"""Configuration for AzureMachineLearningDesignerServiceClient.
Note that all parameters used to create this instance are saved as instance
attributes.
:param api_version: Api Version. The default value is "1.0.0".
:type api_version: str
"""
def __init__(
self,
api_version="1.0.0", # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> None
super(AzureMachineLearningDesignerServiceClientConfiguration, self).__init__(**kwargs)
self.api_version = api_version
kwargs.setdefault('sdk_moniker', 'azuremachinelearningdesignerserviceclient/{}'.format(VERSION))
self._configure(**kwargs)
def _configure(
self,
**kwargs # type: Any
):
# type: (...) -> None
self.user_agent_policy = kwargs.get('user_agent_policy') or policies.UserAgentPolicy(**kwargs)
self.headers_policy = kwargs.get('headers_policy') or policies.HeadersPolicy(**kwargs)
self.proxy_policy = kwargs.get('proxy_policy') or policies.ProxyPolicy(**kwargs)
self.logging_policy = kwargs.get('logging_policy') or policies.NetworkTraceLoggingPolicy(**kwargs)
self.http_logging_policy = kwargs.get('http_logging_policy') or policies.HttpLoggingPolicy(**kwargs)
self.retry_policy = kwargs.get('retry_policy') or policies.RetryPolicy(**kwargs)
self.custom_hook_policy = kwargs.get('custom_hook_policy') or policies.CustomHookPolicy(**kwargs)
self.redirect_policy = kwargs.get('redirect_policy') or policies.RedirectPolicy(**kwargs)
self.authentication_policy = kwargs.get('authentication_policy')
# ----- end of file: promptflow/src/promptflow/promptflow/azure/_restclient/flow/_configuration.py -----
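The `_configure` method above uses the `kwargs.get('x_policy') or DefaultPolicy(**kwargs)` idiom so each pipeline policy can be overridden at construction time while still receiving the shared kwargs, and `kwargs.setdefault(...)` to supply the SDK moniker only when the caller has not. A stripped-down sketch of that idiom (the class names here are invented for illustration, not Azure SDK types):

```python
class RetryPolicy:
    """Toy policy: built from shared kwargs unless the caller supplies one."""
    def __init__(self, retries=3, **kwargs):
        self.retries = retries


class Configuration:
    def __init__(self, **kwargs):
        # Caller-supplied policy wins; otherwise build one from the shared kwargs.
        self.retry_policy = kwargs.get("retry_policy") or RetryPolicy(**kwargs)
        # Only set the moniker if the caller did not pass their own.
        kwargs.setdefault("sdk_moniker", "exampleclient/1.0")
        self.sdk_moniker = kwargs["sdk_moniker"]


cfg = Configuration(retries=5)
print(cfg.retry_policy.retries, cfg.sdk_moniker)  # 5 exampleclient/1.0
```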
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from msrest import Serializer
from .. import models as _models
from .._vendor import _convert_request, _format_url_section
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
# fmt: off
def build_create_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_delete_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="DELETE",
url=url,
headers=header_parameters,
**kwargs
)
def build_list_flow_session_pip_packages_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/pipPackages')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_poll_operation_status_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
action_type, # type: Union[str, "_models.SetupFlowSessionAction"]
location, # type: str
operation_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
api_version = kwargs.pop('api_version', "1.0.0") # type: Optional[str]
type = kwargs.pop('type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
"actionType": _SERIALIZER.url("action_type", action_type, 'str'),
"location": _SERIALIZER.url("location", location, 'str'),
"operationId": _SERIALIZER.url("operation_id", operation_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if api_version is not None:
query_parameters['api-version'] = _SERIALIZER.query("api_version", api_version, 'str')
if type is not None:
query_parameters['type'] = _SERIALIZER.query("type", type, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_get_standby_pools_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
# fmt: on
class FlowSessionsOperations(object):
"""FlowSessionsOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace
def create_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
body=None, # type: Optional["_models.CreateFlowSessionRequest"]
**kwargs # type: Any
):
# type: (...) -> Any
"""create_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param body:
:type body: ~flow.models.CreateFlowSessionRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowSessionRequest')
else:
_json = None
request = build_create_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
content_type=content_type,
json=_json,
template_url=self.create_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def get_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.GetTrainingSessionDto"
"""get_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: GetTrainingSessionDto, or the result of cls(response)
:rtype: ~flow.models.GetTrainingSessionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.GetTrainingSessionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.get_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('GetTrainingSessionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def delete_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> Any
"""delete_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_delete_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.delete_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
delete_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def list_flow_session_pip_packages(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> str
"""list_flow_session_pip_packages.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_flow_session_pip_packages_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.list_flow_session_pip_packages.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_flow_session_pip_packages.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/pipPackages'} # type: ignore
@distributed_trace
def poll_operation_status(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
action_type, # type: Union[str, "_models.SetupFlowSessionAction"]
location, # type: str
operation_id, # type: str
api_version="1.0.0", # type: Optional[str]
type=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> Any
"""poll_operation_status.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param action_type:
:type action_type: str or ~flow.models.SetupFlowSessionAction
:param location:
:type location: str
:param operation_id:
:type operation_id: str
:param api_version: Api Version. The default value is "1.0.0".
:type api_version: str
:param type:
:type type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_poll_operation_status_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
action_type=action_type,
location=location,
operation_id=operation_id,
api_version=api_version,
type=type,
template_url=self.poll_operation_status.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
poll_operation_status.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}'} # type: ignore
@distributed_trace
def get_standby_pools(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> List["_models.StandbyPoolProperties"]
"""get_standby_pools.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of StandbyPoolProperties, or the result of cls(response)
:rtype: list[~flow.models.StandbyPoolProperties]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.StandbyPoolProperties"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_standby_pools_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_standby_pools.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[StandbyPoolProperties]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_standby_pools.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools'} # type: ignore
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flow_sessions_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flow_sessions_operations.py",
"repo_id": "promptflow",
"token_count": 10750
} | 19 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# pylint: disable=protected-access
import os
import uuid
from datetime import datetime, timedelta
from pathlib import Path
from typing import Dict, Optional, TypeVar, Union
from azure.ai.ml._artifacts._blob_storage_helper import BlobStorageClient
from azure.ai.ml._artifacts._gen2_storage_helper import Gen2StorageClient
from azure.ai.ml._azure_environments import _get_storage_endpoint_from_metadata
from azure.ai.ml._restclient.v2022_10_01.models import DatastoreType
from azure.ai.ml._scope_dependent_operations import OperationScope
from azure.ai.ml._utils._arm_id_utils import (
AMLNamedArmId,
get_resource_name_from_arm_id,
is_ARM_id_for_resource,
remove_aml_prefix,
)
from azure.ai.ml._utils._asset_utils import (
IgnoreFile,
_build_metadata_dict,
_validate_path,
get_ignore_file,
get_object_hash,
)
from azure.ai.ml._utils._storage_utils import (
AzureMLDatastorePathUri,
get_artifact_path_from_storage_url,
get_storage_client,
)
from azure.ai.ml.constants._common import SHORT_URI_FORMAT, STORAGE_ACCOUNT_URLS
from azure.ai.ml.entities import Environment
from azure.ai.ml.entities._assets._artifacts.artifact import Artifact, ArtifactStorageInfo
from azure.ai.ml.entities._credentials import AccountKeyConfiguration
from azure.ai.ml.entities._datastore._constants import WORKSPACE_BLOB_STORE
from azure.ai.ml.exceptions import ErrorTarget, ValidationException
from azure.ai.ml.operations._datastore_operations import DatastoreOperations
from azure.storage.blob import BlobSasPermissions, generate_blob_sas
from azure.storage.filedatalake import FileSasPermissions, generate_file_sas
from ..._utils.logger_utils import LoggerFactory
from ._fileshare_storeage_helper import FlowFileStorageClient
module_logger = LoggerFactory.get_logger(__name__)
def _get_datastore_name(*, datastore_name: Optional[str] = WORKSPACE_BLOB_STORE) -> str:
datastore_name = WORKSPACE_BLOB_STORE if not datastore_name else datastore_name
try:
datastore_name = get_resource_name_from_arm_id(datastore_name)
except (ValueError, AttributeError, ValidationException):
module_logger.debug("datastore_name %s is not a full arm id. Proceed with a shortened name.\n", datastore_name)
datastore_name = remove_aml_prefix(datastore_name)
if is_ARM_id_for_resource(datastore_name):
datastore_name = get_resource_name_from_arm_id(datastore_name)
return datastore_name
def get_datastore_info(operations: DatastoreOperations, name: str) -> Dict[str, str]:
"""Get datastore account, type, and auth information."""
datastore_info = {}
if name:
datastore = operations.get(name, include_secrets=True)
else:
datastore = operations.get_default(include_secrets=True)
storage_endpoint = _get_storage_endpoint_from_metadata()
credentials = datastore.credentials
datastore_info["storage_type"] = datastore.type
datastore_info["storage_account"] = datastore.account_name
datastore_info["account_url"] = STORAGE_ACCOUNT_URLS[datastore.type].format(
datastore.account_name, storage_endpoint
)
if isinstance(credentials, AccountKeyConfiguration):
datastore_info["credential"] = credentials.account_key
else:
try:
datastore_info["credential"] = credentials.sas_token
except Exception as e: # pylint: disable=broad-except
if not hasattr(credentials, "sas_token"):
datastore_info["credential"] = operations._credential
else:
raise e
if datastore.type == DatastoreType.AZURE_BLOB:
datastore_info["container_name"] = str(datastore.container_name)
elif datastore.type == DatastoreType.AZURE_DATA_LAKE_GEN2:
datastore_info["container_name"] = str(datastore.filesystem)
elif datastore.type == DatastoreType.AZURE_FILE:
datastore_info["container_name"] = str(datastore.file_share_name)
else:
        raise Exception(
            f"Datastore type {datastore.type} is not supported for uploads. "
            f"Supported types are {DatastoreType.AZURE_BLOB}, {DatastoreType.AZURE_DATA_LAKE_GEN2} "
            f"and {DatastoreType.AZURE_FILE}."
        )
return datastore_info
def list_logs_in_datastore(ds_info: Dict[str, str], prefix: str, legacy_log_folder_name: str) -> Dict[str, str]:
"""Returns a dictionary of file name to blob or data lake uri with SAS token, matching the structure of
RunDetails.logFiles.
legacy_log_folder_name: the name of the folder in the datastore that contains the logs
/azureml-logs/*.txt is the legacy log structure for commandJob and sweepJob
/logs/azureml/*.txt is the legacy log structure for pipeline parent Job
"""
if ds_info["storage_type"] not in [
DatastoreType.AZURE_BLOB,
DatastoreType.AZURE_DATA_LAKE_GEN2,
]:
raise Exception("Only Blob and Azure DataLake Storage Gen2 datastores are supported.")
storage_client = get_storage_client(
credential=ds_info["credential"],
container_name=ds_info["container_name"],
storage_account=ds_info["storage_account"],
storage_type=ds_info["storage_type"],
)
items = storage_client.list(starts_with=prefix + "/user_logs/")
# Append legacy log files if present
items.extend(storage_client.list(starts_with=prefix + legacy_log_folder_name))
log_dict = {}
for item_name in items:
sub_name = item_name.split(prefix + "/")[1]
if isinstance(storage_client, BlobStorageClient):
token = generate_blob_sas(
account_name=ds_info["storage_account"],
container_name=ds_info["container_name"],
blob_name=item_name,
account_key=ds_info["credential"],
permission=BlobSasPermissions(read=True),
expiry=datetime.utcnow() + timedelta(minutes=30),
)
elif isinstance(storage_client, Gen2StorageClient):
token = generate_file_sas( # pylint: disable=no-value-for-parameter
account_name=ds_info["storage_account"],
file_system_name=ds_info["container_name"],
file_name=item_name,
credential=ds_info["credential"],
permission=FileSasPermissions(read=True),
expiry=datetime.utcnow() + timedelta(minutes=30),
)
log_dict[sub_name] = "{}/{}/{}?{}".format(ds_info["account_url"], ds_info["container_name"], item_name, token)
return log_dict
def _get_default_datastore_info(datastore_operation):
return get_datastore_info(datastore_operation, None)
def upload_artifact(
local_path: str,
datastore_operation: DatastoreOperations,
operation_scope: OperationScope,
datastore_name: Optional[str],
asset_hash: Optional[str] = None,
show_progress: bool = True,
asset_name: Optional[str] = None,
asset_version: Optional[str] = None,
ignore_file: IgnoreFile = IgnoreFile(None),
sas_uri=None,
) -> ArtifactStorageInfo:
"""Upload local file or directory to datastore."""
if sas_uri:
storage_client = get_storage_client(credential=None, storage_account=None, account_url=sas_uri)
else:
datastore_name = _get_datastore_name(datastore_name=datastore_name)
datastore_info = get_datastore_info(datastore_operation, datastore_name)
storage_client = FlowFileStorageClient(
credential=datastore_info["credential"],
file_share_name=datastore_info["container_name"],
account_url=datastore_info["account_url"],
azure_cred=datastore_operation._credential,
)
artifact_info = storage_client.upload(
local_path,
asset_hash=asset_hash,
show_progress=show_progress,
name=asset_name,
version=asset_version,
ignore_file=ignore_file,
)
artifact_info["remote path"] = os.path.join(
storage_client.directory_client.directory_path, artifact_info["remote path"]
)
return artifact_info
def download_artifact(
starts_with: Union[str, os.PathLike],
destination: str,
datastore_operation: DatastoreOperations,
datastore_name: Optional[str],
datastore_info: Optional[Dict] = None,
) -> str:
"""Download datastore path to local file or directory.
:param Union[str, os.PathLike] starts_with: Prefix of blobs to download
:param str destination: Path that files will be written to
:param DatastoreOperations datastore_operation: Datastore operations
:param Optional[str] datastore_name: name of datastore
    :param Optional[Dict] datastore_info: the return value of invoking get_datastore_info
:return str: Path that files were written to
"""
starts_with = starts_with.as_posix() if isinstance(starts_with, Path) else starts_with
datastore_name = _get_datastore_name(datastore_name=datastore_name)
if datastore_info is None:
datastore_info = get_datastore_info(datastore_operation, datastore_name)
storage_client = get_storage_client(**datastore_info)
storage_client.download(starts_with=starts_with, destination=destination)
return destination
def download_artifact_from_storage_url(
blob_url: str,
destination: str,
datastore_operation: DatastoreOperations,
datastore_name: Optional[str],
) -> str:
"""Download datastore blob URL to local file or directory."""
datastore_name = _get_datastore_name(datastore_name=datastore_name)
datastore_info = get_datastore_info(datastore_operation, datastore_name)
starts_with = get_artifact_path_from_storage_url(
blob_url=str(blob_url), container_name=datastore_info.get("container_name")
)
return download_artifact(
starts_with=starts_with,
destination=destination,
datastore_operation=datastore_operation,
datastore_name=datastore_name,
datastore_info=datastore_info,
)
def download_artifact_from_aml_uri(uri: str, destination: str, datastore_operation: DatastoreOperations):
"""Downloads artifact pointed to by URI of the form `azureml://...` to destination.
:param str uri: AzureML uri of artifact to download
:param str destination: Path to download artifact to
:param DatastoreOperations datastore_operation: datastore operations
:return str: Path that files were downloaded to
"""
parsed_uri = AzureMLDatastorePathUri(uri)
return download_artifact(
starts_with=parsed_uri.path,
destination=destination,
datastore_operation=datastore_operation,
datastore_name=parsed_uri.datastore,
)
def aml_datastore_path_exists(
uri: str, datastore_operation: DatastoreOperations, datastore_info: Optional[dict] = None
):
"""Checks whether `uri` of the form "azureml://" points to either a directory or a file.
:param str uri: azure ml datastore uri
:param DatastoreOperations datastore_operation: Datastore operation
:param dict datastore_info: return value of get_datastore_info
"""
parsed_uri = AzureMLDatastorePathUri(uri)
datastore_info = datastore_info or get_datastore_info(datastore_operation, parsed_uri.datastore)
return get_storage_client(**datastore_info).exists(parsed_uri.path)
def _upload_to_datastore(
operation_scope: OperationScope,
datastore_operation: DatastoreOperations,
path: Union[str, Path, os.PathLike],
artifact_type: str,
datastore_name: Optional[str] = None,
show_progress: bool = True,
asset_name: Optional[str] = None,
asset_version: Optional[str] = None,
asset_hash: Optional[str] = None,
ignore_file: Optional[IgnoreFile] = None,
sas_uri: Optional[str] = None, # contains registry sas url
) -> ArtifactStorageInfo:
_validate_path(path, _type=artifact_type)
if not ignore_file:
ignore_file = get_ignore_file(path)
if not asset_hash:
asset_hash = get_object_hash(path, ignore_file)
artifact = upload_artifact(
str(path),
datastore_operation,
operation_scope,
datastore_name,
show_progress=show_progress,
asset_hash=asset_hash,
asset_name=asset_name,
asset_version=asset_version,
ignore_file=ignore_file,
sas_uri=sas_uri,
)
return artifact
def _upload_and_generate_remote_uri(
operation_scope: OperationScope,
datastore_operation: DatastoreOperations,
path: Union[str, Path, os.PathLike],
artifact_type: str = ErrorTarget.ARTIFACT,
datastore_name: str = WORKSPACE_BLOB_STORE,
show_progress: bool = True,
) -> str:
# Asset name is required for uploading to a datastore
asset_name = str(uuid.uuid4())
artifact_info = _upload_to_datastore(
operation_scope=operation_scope,
datastore_operation=datastore_operation,
path=path,
datastore_name=datastore_name,
asset_name=asset_name,
artifact_type=artifact_type,
show_progress=show_progress,
)
path = artifact_info.relative_path
datastore = AMLNamedArmId(artifact_info.datastore_arm_id).asset_name
return SHORT_URI_FORMAT.format(datastore, path)
def _update_metadata(name, version, indicator_file, datastore_info) -> None:
storage_client = get_storage_client(**datastore_info)
if isinstance(storage_client, BlobStorageClient):
_update_blob_metadata(name, version, indicator_file, storage_client)
elif isinstance(storage_client, Gen2StorageClient):
_update_gen2_metadata(name, version, indicator_file, storage_client)
def _update_blob_metadata(name, version, indicator_file, storage_client) -> None:
container_client = storage_client.container_client
if indicator_file.startswith(storage_client.container):
indicator_file = indicator_file.split(storage_client.container)[1]
blob = container_client.get_blob_client(blob=indicator_file)
blob.set_blob_metadata(_build_metadata_dict(name=name, version=version))
def _update_gen2_metadata(name, version, indicator_file, storage_client) -> None:
artifact_directory_client = storage_client.file_system_client.get_directory_client(indicator_file)
artifact_directory_client.set_metadata(_build_metadata_dict(name=name, version=version))
T = TypeVar("T", bound=Artifact)
def _check_and_upload_path(
artifact: T,
asset_operations: Union["DataOperations", "ModelOperations", "CodeOperations", "FeatureSetOperations"],
artifact_type: str,
datastore_name: Optional[str] = None,
sas_uri: Optional[str] = None,
show_progress: bool = True,
):
"""Checks whether `artifact` is a path or a uri and uploads it to the datastore if necessary.
:param T artifact: artifact to check and upload
:param asset_operations: the asset operations to use for uploading
:type asset_operations: Union["DataOperations", "ModelOperations", "CodeOperations"]
:param str datastore_name: the name of the datastore to upload to
:param str sas_uri: the sas uri to use for uploading
"""
from azure.ai.ml._utils.utils import is_mlflow_uri, is_url
datastore_name = artifact.datastore
if (
hasattr(artifact, "local_path")
and artifact.local_path is not None
or (
hasattr(artifact, "path")
and artifact.path is not None
and not (is_url(artifact.path) or is_mlflow_uri(artifact.path))
)
):
path = (
Path(artifact.path)
if hasattr(artifact, "path") and artifact.path is not None
else Path(artifact.local_path)
)
if not path.is_absolute():
path = Path(artifact.base_path, path).resolve()
uploaded_artifact = _upload_to_datastore(
asset_operations._operation_scope,
asset_operations._datastore_operation,
path,
datastore_name=datastore_name,
asset_name=artifact.name,
asset_version=str(artifact.version),
asset_hash=artifact._upload_hash if hasattr(artifact, "_upload_hash") else None,
sas_uri=sas_uri,
artifact_type=artifact_type,
show_progress=show_progress,
ignore_file=getattr(artifact, "_ignore_file", None),
)
return uploaded_artifact
def _check_and_upload_env_build_context(
environment: Environment,
operations: "EnvironmentOperations",
sas_uri=None,
show_progress: bool = True,
) -> Environment:
if environment.path:
uploaded_artifact = _upload_to_datastore(
operations._operation_scope,
operations._datastore_operation,
environment.path,
asset_name=environment.name,
asset_version=str(environment.version),
asset_hash=environment._upload_hash,
sas_uri=sas_uri,
artifact_type=ErrorTarget.ENVIRONMENT,
datastore_name=environment.datastore,
show_progress=show_progress,
)
# TODO: decide whether the trailing "/" should stay or not. EMS currently requires it to be present
environment.build.path = uploaded_artifact.full_storage_path + "/"
return environment
| promptflow/src/promptflow/promptflow/azure/operations/_artifact_utilities.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/operations/_artifact_utilities.py",
"repo_id": "promptflow",
"token_count": 6848
} | 20 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from dataclasses import dataclass
from datetime import datetime
from itertools import chain
from typing import Any, List, Mapping
from promptflow._utils.exception_utils import ExceptionPresenter, RootErrorCode
from promptflow._utils.openai_metrics_calculator import OpenAIMetricsCalculator
from promptflow.contracts.run_info import RunInfo, Status
from promptflow.executor._result import AggregationResult, LineResult
@dataclass
class LineError:
"""The error of a line in a batch run.
It contains the line number and the error dict of a failed line in the batch run.
The error dict is generated by ExceptionPresenter.to_dict().
"""
line_number: int
error: Mapping[str, Any]
def to_dict(self):
return {
"line_number": self.line_number,
"error": self.error,
}
@dataclass
class ErrorSummary:
"""The summary of errors in a batch run.
:param failed_user_error_lines: The number of lines that failed with user error.
:type failed_user_error_lines: int
:param failed_system_error_lines: The number of lines that failed with system error.
:type failed_system_error_lines: int
:param error_list: The line number and error dict of failed lines in the line results.
:type error_list: List[~promptflow.batch._result.LineError]
:param aggr_error_dict: The dict of node name and error dict of failed nodes in the aggregation result.
:type aggr_error_dict: Mapping[str, Any]
:param batch_error_dict: The dict of batch run error.
:type batch_error_dict: Mapping[str, Any]
"""
failed_user_error_lines: int
failed_system_error_lines: int
error_list: List[LineError]
aggr_error_dict: Mapping[str, Any]
batch_error_dict: Mapping[str, Any]
@staticmethod
def create(line_results: List[LineResult], aggr_result: AggregationResult, exception: Exception = None):
failed_user_error_lines = 0
failed_system_error_lines = 0
error_list: List[LineError] = []
for line_result in line_results:
if line_result.run_info.status != Status.Failed:
continue
flow_run = line_result.run_info
if flow_run.error.get("code", "") == RootErrorCode.USER_ERROR:
failed_user_error_lines += 1
else:
failed_system_error_lines += 1
line_error = LineError(
line_number=flow_run.index,
error=flow_run.error,
)
error_list.append(line_error)
error_summary = ErrorSummary(
failed_user_error_lines=failed_user_error_lines,
failed_system_error_lines=failed_system_error_lines,
error_list=sorted(error_list, key=lambda x: x.line_number),
aggr_error_dict={
node_name: node_run_info.error
for node_name, node_run_info in aggr_result.node_run_infos.items()
if node_run_info.status == Status.Failed
},
batch_error_dict=ExceptionPresenter.create(exception).to_dict() if exception else None,
)
return error_summary
@dataclass
class SystemMetrics:
"""The system metrics of a batch run."""
total_tokens: int
prompt_tokens: int
completion_tokens: int
duration: float # in seconds
@staticmethod
def create(
start_time: datetime, end_time: datetime, line_results: List[LineResult], aggr_results: AggregationResult
):
openai_metrics = SystemMetrics._get_openai_metrics(line_results, aggr_results)
return SystemMetrics(
total_tokens=openai_metrics.get("total_tokens", 0),
prompt_tokens=openai_metrics.get("prompt_tokens", 0),
completion_tokens=openai_metrics.get("completion_tokens", 0),
duration=(end_time - start_time).total_seconds(),
)
@staticmethod
def _get_openai_metrics(line_results: List[LineResult], aggr_results: AggregationResult):
node_run_infos = _get_node_run_infos(line_results, aggr_results)
total_metrics = {}
calculator = OpenAIMetricsCalculator()
for run_info in node_run_infos:
metrics = SystemMetrics._try_get_openai_metrics(run_info)
if metrics:
calculator.merge_metrics_dict(total_metrics, metrics)
else:
api_calls = run_info.api_calls or []
for call in api_calls:
metrics = calculator.get_openai_metrics_from_api_call(call)
calculator.merge_metrics_dict(total_metrics, metrics)
return total_metrics
@staticmethod
def _try_get_openai_metrics(run_info: RunInfo):
openai_metrics = {}
if run_info.system_metrics:
for metric in ["total_tokens", "prompt_tokens", "completion_tokens"]:
if metric not in run_info.system_metrics:
return False
openai_metrics[metric] = run_info.system_metrics[metric]
return openai_metrics
def to_dict(self):
return {
"total_tokens": self.total_tokens,
"prompt_tokens": self.prompt_tokens,
"completion_tokens": self.completion_tokens,
"duration": self.duration,
}
@dataclass
class BatchResult:
"""The result of a batch run."""
status: Status
total_lines: int
completed_lines: int
failed_lines: int
node_status: Mapping[str, int]
start_time: datetime
end_time: datetime
metrics: Mapping[str, str]
system_metrics: SystemMetrics
error_summary: ErrorSummary
@classmethod
def create(
cls,
start_time: datetime,
end_time: datetime,
line_results: List[LineResult],
aggr_result: AggregationResult,
status: Status = Status.Completed,
exception: Exception = None,
) -> "BatchResult":
total_lines = len(line_results)
completed_lines = sum(line_result.run_info.status == Status.Completed for line_result in line_results)
failed_lines = total_lines - completed_lines
if exception:
status = Status.Failed
return cls(
status=status,
total_lines=total_lines,
completed_lines=completed_lines,
failed_lines=failed_lines,
node_status=BatchResult._get_node_status(line_results, aggr_result),
start_time=start_time,
end_time=end_time,
metrics=aggr_result.metrics,
system_metrics=SystemMetrics.create(start_time, end_time, line_results, aggr_result),
error_summary=ErrorSummary.create(line_results, aggr_result, exception),
)
@staticmethod
def _get_node_status(line_results: List[LineResult], aggr_result: AggregationResult):
node_run_infos = _get_node_run_infos(line_results, aggr_result)
node_status = {}
for node_run_info in node_run_infos:
key = f"{node_run_info.node}.{node_run_info.status.value.lower()}"
node_status[key] = node_status.get(key, 0) + 1
return node_status
def _get_node_run_infos(line_results: List[LineResult], aggr_result: AggregationResult):
line_node_run_infos = (
node_run_info for line_result in line_results for node_run_info in line_result.node_run_infos.values()
)
aggr_node_run_infos = (node_run_info for node_run_info in aggr_result.node_run_infos.values())
return chain(line_node_run_infos, aggr_node_run_infos)
| promptflow/src/promptflow/promptflow/batch/_result.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/batch/_result.py",
"repo_id": "promptflow",
"token_count": 3273
} | 21 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import asyncio
import contextvars
import inspect
import os
import signal
import threading
import time
import traceback
from asyncio import Task
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List, Tuple
from promptflow._core.flow_execution_context import FlowExecutionContext
from promptflow._core.tools_manager import ToolsManager
from promptflow._utils.logger_utils import flow_logger
from promptflow._utils.utils import extract_user_frame_summaries, set_context
from promptflow.contracts.flow import Node
from promptflow.executor._dag_manager import DAGManager
from promptflow.executor._errors import NoNodeExecutedError
PF_ASYNC_NODE_SCHEDULER_EXECUTE_TASK_NAME = "_pf_async_nodes_scheduler.execute"
DEFAULT_TASK_LOGGING_INTERVAL = 60
ASYNC_DAG_MANAGER_COMPLETED = False
class AsyncNodesScheduler:
def __init__(
self,
tools_manager: ToolsManager,
node_concurrency: int,
) -> None:
self._tools_manager = tools_manager
self._node_concurrency = node_concurrency
self._task_start_time = {}
self._task_last_log_time = {}
self._dag_manager_completed_event = threading.Event()
async def execute(
self,
nodes: List[Node],
inputs: Dict[str, Any],
context: FlowExecutionContext,
) -> Tuple[dict, dict]:
# TODO: Provide cancel API
if threading.current_thread() is threading.main_thread():
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
else:
flow_logger.info(
"Current thread is not main thread, skip signal handler registration in AsyncNodesScheduler."
)
# Semaphore should be created in the loop, otherwise it will not work.
loop = asyncio.get_running_loop()
self._semaphore = asyncio.Semaphore(self._node_concurrency)
monitor = threading.Thread(
target=monitor_long_running_coroutine,
args=(loop, self._task_start_time, self._task_last_log_time, self._dag_manager_completed_event),
daemon=True,
)
monitor.start()
# Set the name of scheduler tasks to avoid monitoring its duration
task = asyncio.current_task()
task.set_name(PF_ASYNC_NODE_SCHEDULER_EXECUTE_TASK_NAME)
parent_context = contextvars.copy_context()
executor = ThreadPoolExecutor(
max_workers=self._node_concurrency, initializer=set_context, initargs=(parent_context,)
)
# Note that we must not use `with` statement to manage the executor.
# This is because it will always call `executor.shutdown()` when exiting the `with` block.
# Then the event loop will wait for all tasks to be completed before raising the cancellation error.
# See reference: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Executor
outputs = await self._execute_with_thread_pool(executor, nodes, inputs, context)
executor.shutdown()
return outputs
async def _execute_with_thread_pool(
self,
executor: ThreadPoolExecutor,
nodes: List[Node],
inputs: Dict[str, Any],
context: FlowExecutionContext,
) -> Tuple[dict, dict]:
flow_logger.info(f"Start to run {len(nodes)} nodes with the current event loop.")
dag_manager = DAGManager(nodes, inputs)
task2nodes = self._execute_nodes(dag_manager, context, executor)
while not dag_manager.completed():
task2nodes = await self._wait_and_complete_nodes(task2nodes, dag_manager)
submitted_tasks2nodes = self._execute_nodes(dag_manager, context, executor)
task2nodes.update(submitted_tasks2nodes)
# Set the event to notify the monitor thread to exit
# Ref: https://docs.python.org/3/library/threading.html#event-objects
self._dag_manager_completed_event.set()
for node in dag_manager.bypassed_nodes:
dag_manager.completed_nodes_outputs[node] = None
return dag_manager.completed_nodes_outputs, dag_manager.bypassed_nodes
async def _wait_and_complete_nodes(self, task2nodes: Dict[Task, Node], dag_manager: DAGManager) -> Dict[Task, Node]:
if not task2nodes:
raise NoNodeExecutedError("No nodes are ready for execution, but the flow is not completed.")
tasks = [task for task in task2nodes]
for task in tasks:
self._task_start_time[task] = time.time()
done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
dag_manager.complete_nodes({task2nodes[task].name: task.result() for task in done})
for task in done:
del task2nodes[task]
return task2nodes
def _execute_nodes(
self,
dag_manager: DAGManager,
context: FlowExecutionContext,
executor: ThreadPoolExecutor,
) -> Dict[Task, Node]:
# Bypass nodes and update node run info until there are no nodes to bypass
nodes_to_bypass = dag_manager.pop_bypassable_nodes()
while nodes_to_bypass:
for node in nodes_to_bypass:
context.bypass_node(node)
nodes_to_bypass = dag_manager.pop_bypassable_nodes()
# Create tasks for ready nodes
return {
self._create_node_task(node, dag_manager, context, executor): node for node in dag_manager.pop_ready_nodes()
}
async def run_task_with_semaphore(self, coroutine):
async with self._semaphore:
return await coroutine
def _create_node_task(
self,
node: Node,
dag_manager: DAGManager,
context: FlowExecutionContext,
executor: ThreadPoolExecutor,
) -> Task:
f = self._tools_manager.get_tool(node.name)
kwargs = dag_manager.get_node_valid_inputs(node, f)
if inspect.iscoroutinefunction(f):
# For async task, it will not be executed before calling create_task.
task = context.invoke_tool_async(node, f, kwargs)
else:
# For sync task, convert it to async task and run it in executor thread.
# Even though the task is put to the thread pool, thread.start will only be triggered after create_task.
task = self._sync_function_to_async_task(executor, context, node, f, kwargs)
# Set the name of the task to the node name for debugging purpose
# It does not need to be unique by design.
# Wrap the coroutine in a task with asyncio.create_task to schedule it for event loop execution
# The task is created and added to the event loop, but the exact execution depends on loop's scheduling
return asyncio.create_task(self.run_task_with_semaphore(task), name=node.name)
@staticmethod
async def _sync_function_to_async_task(
executor: ThreadPoolExecutor,
context: FlowExecutionContext,
node,
f,
kwargs,
):
# The task will not be executed before calling create_task.
return await asyncio.get_running_loop().run_in_executor(executor, context.invoke_tool, node, f, kwargs)
def signal_handler(sig, frame):
"""
Start a thread to monitor coroutines after receiving signal.
"""
flow_logger.info(f"Received signal {sig}({signal.Signals(sig).name}), start coroutine monitor thread.")
loop = asyncio.get_running_loop()
monitor = threading.Thread(target=monitor_coroutine_after_cancellation, args=(loop,))
monitor.start()
raise KeyboardInterrupt
def log_stack_recursively(task: asyncio.Task, elapse_time: float):
"""Recursively log the frame of a task or coroutine.
A traditional stack trace would stop at the first awaited call nested inside the coroutine.
:param task: Task to log
:type task_or_coroutine: asyncio.Task
:param elapse_time: Seconds elapsed since the task started
:type elapse_time: float
"""
# We cannot use task.get_stack() to get the stack, because only one stack frame is
# returned for a suspended coroutine because of the implementation of CPython
# Ref: https://github.com/python/cpython/blob/main/Lib/asyncio/tasks.py
# "only one stack frame is returned for a suspended coroutine."
task_or_coroutine = task
frame_summaries = []
# Collect frame_summaries along async call chain
while True:
if isinstance(task_or_coroutine, asyncio.Task):
# For a task, get the coroutine it's running
coroutine: asyncio.coroutine = task_or_coroutine.get_coro()
elif asyncio.iscoroutine(task_or_coroutine):
coroutine = task_or_coroutine
else:
break
frame = coroutine.cr_frame
stack_summary: traceback.StackSummary = traceback.extract_stack(frame)
frame_summaries.extend(stack_summary)
task_or_coroutine = coroutine.cr_await
# Format the frame summaries to warning message
if frame_summaries:
user_frame_summaries = extract_user_frame_summaries(frame_summaries)
stack_messages = traceback.format_list(user_frame_summaries)
all_stack_message = "".join(stack_messages)
task_msg = (
f"Task {task.get_name()} has been running for {elapse_time:.0f} seconds,"
f" stacktrace:\n{all_stack_message}"
)
flow_logger.warning(task_msg)
def monitor_long_running_coroutine(
loop: asyncio.AbstractEventLoop,
task_start_time: dict,
task_last_log_time: dict,
dag_manager_completed_event: threading.Event,
):
flow_logger.info("monitor_long_running_coroutine started")
logging_interval = DEFAULT_TASK_LOGGING_INTERVAL
logging_interval_in_env = os.environ.get("PF_TASK_PEEKING_INTERVAL")
if logging_interval_in_env:
try:
value = int(logging_interval_in_env)
if value <= 0:
raise ValueError
logging_interval = value
flow_logger.info(
f"Using value of PF_TASK_PEEKING_INTERVAL in environment variable as "
f"logging interval: {logging_interval_in_env}"
)
except ValueError:
flow_logger.warning(
f"Value of PF_TASK_PEEKING_INTERVAL in environment variable ('{logging_interval_in_env}') "
f"is invalid, use default value {DEFAULT_TASK_LOGGING_INTERVAL}"
)
while not dag_manager_completed_event.is_set():
running_tasks = [task for task in asyncio.all_tasks(loop) if not task.done()]
# get duration of running tasks
for task in running_tasks:
# Do not monitor the scheduler task
if task.get_name() == PF_ASYNC_NODE_SCHEDULER_EXECUTE_TASK_NAME:
continue
# Do not monitor sync tools, since they will run in executor thread and will
# be monitored by RepeatLogTimer.
task_stacks = task.get_stack()
if (
task_stacks
and task_stacks[-1].f_code
and task_stacks[-1].f_code.co_name == AsyncNodesScheduler._sync_function_to_async_task.__name__
):
continue
if task_start_time.get(task) is None:
flow_logger.warning(f"task {task.get_name()} has no start time, which should not happen")
else:
duration = time.time() - task_start_time[task]
if duration > logging_interval:
if (
task_last_log_time.get(task) is None
or time.time() - task_last_log_time[task] > logging_interval
):
log_stack_recursively(task, duration)
task_last_log_time[task] = time.time()
time.sleep(1)
def monitor_coroutine_after_cancellation(loop: asyncio.AbstractEventLoop):
"""Exit the process when all coroutines are done.
We add this function because if a sync tool is running in async mode,
the task will be cancelled after receiving SIGINT,
but the thread will not be terminated, which blocks the program from exiting.
:param loop: event loop of main thread
:type loop: asyncio.AbstractEventLoop
"""
# TODO: Use environment variable to ensure it is flow test scenario to avoid unexpected exit.
# E.g. Customer is integrating Promptflow in their own code, and they want to handle SIGINT by themselves.
max_wait_seconds = os.environ.get("PF_WAIT_SECONDS_AFTER_CANCELLATION", 30)
all_tasks_are_done = False
exceeded_wait_seconds = False
thread_start_time = time.time()
flow_logger.info(f"Start to monitor coroutines after cancellation, max wait seconds: {max_wait_seconds}s")
while not all_tasks_are_done and not exceeded_wait_seconds:
# For a sync tool running in async mode, the task will be cancelled,
# but the thread will not be terminated, so we exit the program regardless.
# TODO: Detect whether there is any sync tool running in async mode,
# if there is none, avoid sys.exit and let the program exit gracefully.
all_tasks_are_done = all(task.done() for task in asyncio.all_tasks(loop))
if all_tasks_are_done:
flow_logger.info("All coroutines are done. Exiting.")
# We cannot ensure persist_flow_run is called before the process exits in the case that there is
# non-daemon thread running, sleep for 3 seconds as a best effort.
# If the caller wants to ensure flow status is cancelled in storage, it should check the flow status
# after timeout and set the flow status to Cancelled.
time.sleep(3)
# Use os._exit instead of sys.exit, so that the process can stop without
# waiting for the thread created by run_in_executor to finish.
# sys.exit: https://docs.python.org/3/library/sys.html#sys.exit
# Raise a SystemExit exception, signaling an intention to exit the interpreter.
# Specifically, it does not exit non-daemon thread
# os._exit https://docs.python.org/3/library/os.html#os._exit
# Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
# Specifically, it stops process without waiting for non-daemon thread.
os._exit(0)
exceeded_wait_seconds = time.time() - thread_start_time > max_wait_seconds
time.sleep(1)
if exceeded_wait_seconds:
if not all_tasks_are_done:
flow_logger.info(
f"Not all coroutines are done within {max_wait_seconds}s"
" after cancellation. Exiting the process regardless."
" Please configure the environment variable"
" PF_WAIT_SECONDS_AFTER_CANCELLATION if your tool needs"
" more time to clean up after cancellation."
)
remaining_tasks = [task for task in asyncio.all_tasks(loop) if not task.done()]
flow_logger.info(f"Remaining tasks: {[task.get_name() for task in remaining_tasks]}")
time.sleep(3)
os._exit(0)
| promptflow/src/promptflow/promptflow/executor/_async_nodes_scheduler.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/executor/_async_nodes_scheduler.py",
"repo_id": "promptflow",
"token_count": 6253
} | 22 |
[tool.black]
line-length = 120
[tool.pytest.ini_options]
markers = [
"sdk_test",
"cli_test",
"unittest",
"e2etest",
"flaky",
"endpointtest",
"mt_endpointtest",
]
[tool.coverage.run]
omit = [
# omit anything in a _restclient directory anywhere
"*/_restclient/*",
]
| promptflow/src/promptflow/pyproject.toml/0 | {
"file_path": "promptflow/src/promptflow/pyproject.toml",
"repo_id": "promptflow",
"token_count": 139
} | 23 |
{
"version": "0.2",
"language": "en",
"languageId": "python",
"dictionaries": [
"powershell",
"python",
"go",
"css",
"html",
"bash",
"npm",
"softwareTerms",
"en_us",
"en-gb"
],
"ignorePaths": [
"**/*.js",
"**/*.pyc",
"**/*.log",
"**/*.jsonl",
"**/*.xml",
"**/*.txt",
".gitignore",
"scripts/docs/_build/**",
"src/promptflow/promptflow/azure/_restclient/flow/**",
"src/promptflow/promptflow/azure/_restclient/swagger.json",
"src/promptflow/tests/**",
"src/promptflow-tools/tests/**",
"**/flow.dag.yaml",
"**/setup.py",
"scripts/installer/curl_install_pypi/**",
"scripts/installer/windows/**",
"src/promptflow/promptflow/_sdk/_service/pfsvc.py"
],
"words": [
"aoai",
"amlignore",
"mldesigner",
"faiss",
"serp",
"azureml",
"mlflow",
"vnet",
"openai",
"pfazure",
"eastus",
"azureai",
"vectordb",
"Qdrant",
"Weaviate",
"env",
"e2etests",
"e2etest",
"tablefmt",
"logprobs",
"logit",
"hnsw",
"chatml",
"UNLCK",
"KHTML",
"numlines",
"azurecr",
"centralus",
"Policheck",
"azuremlsdktestpypi",
"rediraffe",
"pydata",
"ROBOCOPY",
"undoc",
"retriable",
"pfcli",
"pfutil",
"mgmt",
"wsid",
"westus",
"msrest",
"cref",
"msal",
"pfbytes",
"Apim",
"junit",
"nunit",
"astext",
"Likert",
"pfsvc"
],
"ignoreWords": [
"openmpi",
"ipynb",
"xdist",
"pydash",
"tqdm",
"rtype",
"epocs",
"fout",
"funcs",
"todos",
"fstring",
"creds",
"zipp",
"gmtime",
"pyjwt",
"nbconvert",
"nbformat",
"pypandoc",
"dotenv",
"miniconda",
"datas",
"tcgetpgrp",
"yamls",
"fmt",
"serpapi",
"genutils",
"metadatas",
"tiktoken",
"bfnrt",
"orelse",
"thead",
"sympy",
"ghactions",
"esac",
"MSRC",
"pycln",
"strictyaml",
"psutil",
"getch",
"tcgetattr",
"TCSADRAIN",
"stringio",
"jsonify",
"werkzeug",
"continuumio",
"pydantic",
"iterrows",
"dtype",
"fillna",
"nlines",
"aggr",
"tcsetattr",
"pysqlite",
"AADSTS700082",
"Pyinstaller",
"runsvdir",
"runsv",
"levelno",
"LANCZOS",
"Mobius",
"ruamel",
"gunicorn",
"pkill",
"pgrep",
"Hwfoxydrg",
"llms",
"vcrpy",
"uionly",
"llmops",
"Abhishek",
"restx",
"httpx",
"tiiuae",
"nohup",
"metagenai",
"WBITS",
"laddr",
"nrows",
"Dumpable",
"XCLASS",
"otel",
"OTLP",
"spawnv",
"spawnve",
"addrs"
],
"flagWords": [
"Prompt Flow"
],
"allowCompoundWords": true
}
| promptflow/.cspell.json/0 | {
"file_path": "promptflow/.cspell.json",
"repo_id": "promptflow",
"token_count": 1604
} | 0 |
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
| promptflow/SECURITY.md/0 | {
"file_path": "promptflow/SECURITY.md",
"repo_id": "promptflow",
"token_count": 674
} | 1 |
# Adding a Tool Icon
A tool icon serves as a graphical representation of your tool in the user interface (UI). Follow this guidance to add a custom tool icon when developing your own tool package.
Adding a custom tool icon is optional. If you do not provide one, the system uses a default icon.
## Prerequisites
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.4.2 or later.
- Create a tool package as described in [Create and Use Tool Package](create-and-use-tool-package.md).
- Prepare a custom icon image that meets these requirements:
- Use PNG, JPG or BMP format.
- 16x16 pixels to prevent distortion when resizing.
- Avoid complex images with lots of detail or contrast, as they may not resize well.
See [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/icons/custom-tool-icon.png) as a reference.
- Install dependencies to generate icon data URI:
```
pip install pillow
```
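For context, the data URI embedded in the tool YAML is just the base64-encoded image bytes with a MIME-type prefix. A minimal sketch using only the standard library (the repo's script additionally uses Pillow to validate and resize the image; the function name below is illustrative):

```python
import base64
import mimetypes

def image_to_data_uri(image_path: str) -> str:
    # Guess the MIME type from the file extension; default to PNG.
    mime_type = mimetypes.guess_type(image_path)[0] or "image/png"
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"

# Example (hypothetical path):
# print(image_to_data_uri("my_tool_package/icons/custom-tool-icon.png")[:40])
```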
## Add tool icon with _icon_ parameter
Run the command below in your tool project directory to automatically generate your tool YAML, use _-i_ or _--icon_ parameter to add a custom tool icon:
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> -i <tool-icon-path>
```
Here we use [an existing tool project](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart) as an example.
```
cd D:\proj\github\promptflow\examples\tools\tool-package-quickstart
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m my_tool_package.tools.my_tool_1 -o my_tool_package\yamls\my_tool_1.yaml -i my_tool_package\icons\custom-tool-icon.png
```
In the auto-generated tool YAML file, the custom tool icon data URI is added in the `icon` field:
```yaml
my_tool_package.tools.my_tool_1.my_tool:
function: my_tool
icon: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACR0lEQVR4nKWS3UuTcRTHP79nm9ujM+fccqFGI5viRRpjJgkJ3hiCENVN/QMWdBHUVRdBNwX9ARHd2FVEWFLRjaS9XPmSC/EFTNOWc3Pi48y9PHNzz68L7UXTCvreHM65+PA953uElFLyHzLvHMwsJrnzfJqFeAan3cKV9mr8XseeAOXX5vqjSS53jdF+tIz1nIFAMDCzwpvJ5b87+LSYYHw+gcWkEAwluXnOR2Q1R+9YjJ7BKJG4zoXmqr0ddL3+QnV5EeUOK821LsJammcjEeZiafJScrd3bm8H6zkDd4mVztZKAK49/Mj8is4Z/35GPq9R5VJ5GYztDtB1HT1vovGQSiqVAqDugI3I6jpP3i9x9VQVfu8+1N/OvbWCqqqoBSa6h1fQNA1N0xiYTWJSBCZF8HgwSjQapbRQ2RUg5NYj3O6ZochmYkFL03S4mImIzjFvCf2xS5gtCRYXWvBUvKXjyEVeTN/DXuDgxsnuzSMK4HTAw1Q0hZba4NXEKp0tbpq9VkxCwTAETrsVwxBIBIYhMPI7YqyrtONQzSznJXrO4H5/GJ9LUGg0YFYydJxoYnwpj1s9SEN5KzZz4fYYAW6dr+VsowdFgamlPE/Hs8SzQZYzg0S+zjIc6iOWDDEc6uND+N12B9/VVu+mrd79o38wFCCdTeBSK6hxBii1eahxBlAtRbsDdmoiHGRNj1NZ7GM0NISvzM9oaIhiqwOO/wMgl4FsRpLf2KxGXpLNSLLInzH+CWBIA6RECIGUEiEUpDRACBSh8A3pXfGWdXfMgAAAAABJRU5ErkJggg==
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.my_tool_1
name: my_tool
type: python
```
## Verify the tool icon in VS Code extension
Follow [steps](create-and-use-tool-package.md#use-your-tool-from-vscode-extension) to use your tool from VS Code extension. Your tool displays with the custom icon:
![custom-tool-with-icon-in-extension](../../media/how-to-guides/develop-a-tool/custom-tool-with-icon-in-extension.png)
## FAQ
### Can I preview the tool icon image before adding it to a tool?
Yes, you can run the command below from the repository root to generate a data URI for your custom tool icon. Make sure the output file has an `.html` extension.
```
python <path-to-scripts>\tool\convert_image_to_data_url.py --image-path <image_input_path> -o <html_output_path>
```
For example:
```
python D:\proj\github\promptflow\scripts\tool\convert_image_to_data_url.py --image-path D:\proj\github\promptflow\examples\tools\tool-package-quickstart\my_tool_package\icons\custom-tool-icon.png -o output.html
```
The content of `output.html` looks like the following; open it in a web browser to preview the icon.
```html
<html>
<body>
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACR0lEQVR4nKWS3UuTcRTHP79nm9ujM+fccqFGI5viRRpjJgkJ3hiCENVN/QMWdBHUVRdBNwX9ARHd2FVEWFLRjaS9XPmSC/EFTNOWc3Pi48y9PHNzz68L7UXTCvreHM65+PA953uElFLyHzLvHMwsJrnzfJqFeAan3cKV9mr8XseeAOXX5vqjSS53jdF+tIz1nIFAMDCzwpvJ5b87+LSYYHw+gcWkEAwluXnOR2Q1R+9YjJ7BKJG4zoXmqr0ddL3+QnV5EeUOK821LsJammcjEeZiafJScrd3bm8H6zkDd4mVztZKAK49/Mj8is4Z/35GPq9R5VJ5GYztDtB1HT1vovGQSiqVAqDugI3I6jpP3i9x9VQVfu8+1N/OvbWCqqqoBSa6h1fQNA1N0xiYTWJSBCZF8HgwSjQapbRQ2RUg5NYj3O6ZochmYkFL03S4mImIzjFvCf2xS5gtCRYXWvBUvKXjyEVeTN/DXuDgxsnuzSMK4HTAw1Q0hZba4NXEKp0tbpq9VkxCwTAETrsVwxBIBIYhMPI7YqyrtONQzSznJXrO4H5/GJ9LUGg0YFYydJxoYnwpj1s9SEN5KzZz4fYYAW6dr+VsowdFgamlPE/Hs8SzQZYzg0S+zjIc6iOWDDEc6uND+N12B9/VVu+mrd79o38wFCCdTeBSK6hxBii1eahxBlAtRbsDdmoiHGRNj1NZ7GM0NISvzM9oaIhiqwOO/wMgl4FsRpLf2KxGXpLNSLLInzH+CWBIA6RECIGUEiEUpDRACBSh8A3pXfGWdXfMgAAAAABJRU5ErkJggg==" alt="My Image">
</body>
</html>
```
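If you only need the data URI itself, the encoding step is plain base64. The following is a minimal sketch using only the Python standard library; the `to_data_uri` helper and the placeholder PNG-signature bytes are illustrative, not part of the promptflow scripts (a real icon would be read with `open("custom-tool-icon.png", "rb").read()`):

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    # Encode raw image bytes as a data URI suitable for the YAML `icon` field.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Placeholder bytes: the 8-byte PNG file signature, not a real icon.
print(to_data_uri(b"\x89PNG\r\n\x1a\n"))
# → data:image/png;base64,iVBORw0KGgo=
```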
### Can I add a tool icon to an existing tool package?
Yes, you can refer to the [preview icon](add-a-tool-icon.md#can-i-preview-the-tool-icon-image-before-adding-it-to-a-tool) section to generate the data URI and manually add the data URI to the tool's YAML file.
### Can I add tool icons for dark and light mode separately?
Yes, you can add the tool icon data URIs manually, or run the command below in your tool project directory to automatically generate your tool YAML; use _--icon-light_ to add a custom tool icon for light mode and _--icon-dark_ to add one for dark mode:
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> --icon-light <light-tool-icon-path> --icon-dark <dark-tool-icon-path>
```
Here we use [an existing tool project](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart) as an example.
```
cd D:\proj\github\promptflow\examples\tools\tool-package-quickstart
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m my_tool_package.tools.my_tool_1 -o my_tool_package\yamls\my_tool_1.yaml --icon-light my_tool_package\icons\custom-tool-icon-light.png --icon-dark my_tool_package\icons\custom-tool-icon-dark.png
```
In the auto-generated tool YAML file, the light and dark tool icon data URIs are added in the `icon` field:
```yaml
my_tool_package.tools.my_tool_1.my_tool:
function: my_tool
icon:
dark: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAB00lEQVR4nI1SO2iTURT+7iNNb16a+Cg6iJWqRKwVRIrWV6GVUkrFdqiVShBaxIIi4iY4iouDoy4ODkKn4uQkDs5FfEzFYjEtJYQo5P/z35j/3uNw7Z80iHqHC/ec8z3OuQeMMcYYAHenU8n84YMAABw7mo93dEQpAIyBAyAiF1Kq8/Wrl5fHR1x6tjC9uPBcSrlZD4BxIgIgBCei+bnC6cGxSuWHEEIIUa58H7l0dWZqwlqSUjhq7oDWEoAL584Y6ymljDHGmM543BhvaPAsAKLfEjIyB6BeryPw796+EWidUInr16b5z6rWAYCmKXeEEADGRy+SLgXlFfLWbbWoyytULZ4f6Hee2yDgnAG4OVsoff20try08eX92vLSzJVJAJw3q7dISSnDMFx48UypeCa97cPHz7fu3Y/FYo1Go8nbCiAiIUStVus/eaKvN691IAQnsltI24wZY9Kp1Ju373K5bDKZNMa6gf5ZIWrG9/0g0K3W/wYIw3Dvnq6dO7KNMPwvgOf5x3uPHOrp9n3/HwBrLYCu3bv6Tg0PjU0d2L8PAEWfDKCtac6YIVrfKN2Zn8tkUqvfigBaR88Ya66uezMgl93+9Mmjxw8fJBIqWv7NAvwCHeuq7gEPU/QAAAAASUVORK5CYII=
light: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAB2UlEQVR4nH1SO2hUQRQ9c18K33u72cXs7jOL8UeQCCJoJaIgKAiCWKilaGNlYREFDRGNjayVWKiFFmITECFKJKIokQRRsDFENoooUchHU5qdWZ2512KymxcNOcUwc5nDuefeA2FhZpGFU0S0Mf5S0zpdF2FhISgopUREKfXj59yhoycmPn4GAKDncuXa9VtKKWYGACgowHOdc9a6g0eOA7mx8apzzlp76vRZoGXw6XMRsdb6nwSAmYnoQ3Xi5fBIdk2SiSMiCoKgNZslteruvX4ASikvSwAEAGDqdYhAXO+VypevkwODQ4+HnlGcq2mDNLwtZq5pvWP3AYRJ0Lq2uG5rWNgYFjaBVt+8c19E/jRaWvQgImPj1e279ufaN8elzly5K1/u6r7QZ51zrjmoBqHJ+TU/39ax5cy5i53bdnb39KXtLpr28OMLgiCfz78YHpmemi0W2piZWdIWaMmDCIDWet/ePUlS0toQUWM8yxG8jrVuw/qOTBw19rUiQUQoCGZm50z9txf8By3/K0Rh+PDRk8lv3+MoWklBBACmpmdKxcKn96O3b1SqC6FSyxOUgohk4pjZ9T8YeDX6ptye+PoSpNIrfkGv3747fOzk+UtXjTE+BM14M8tfl7BQR9VzUXEAAAAASUVORK5CYII=
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.my_tool_1
name: my_tool
type: python
```
Note: Both light and dark icons are optional. If you set either a light or dark icon, it will be used in its respective mode, and the system default icon will be used in the other mode. | promptflow/docs/how-to-guides/develop-a-tool/add-a-tool-icon.md/0 | {
"file_path": "promptflow/docs/how-to-guides/develop-a-tool/add-a-tool-icon.md",
"repo_id": "promptflow",
"token_count": 4044
} | 2 |
# Process image in flow
PromptFlow defines a contract to represent image data.
## Data class
`promptflow.contracts.multimedia.Image`
Image class is a subclass of `bytes`, so you can access the binary data by using the object directly. It has an extra attribute `source_url` to store the original URL of the image, which is useful if you want to pass the URL instead of the image content to APIs such as the GPT-4V model.
## Data type in flow input
Set the type of flow input to `image` and promptflow will treat it as an image.
## Reference image in prompt template
In prompt templates that support images (e.g. the OpenAI GPT-4V tool), use markdown syntax to denote that a template input is an image: `![image]({{test_image}})`. In this case, `test_image` will be substituted with the base64 content or `source_url` (if set) before being sent to the LLM.
## Serialization/Deserialization
Promptflow uses a special dict to represent an image.
`{"data:image/<mime-type>;<representation>": "<value>"}`
- `<mime-type>` can be a standard HTML [MIME](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) image type. Setting it to a specific type helps preview the image correctly, or it can be `*` for an unknown type.
- `<representation>` is the image serialized representation, there are 3 supported types:
- url
It can point to a publicly accessible web URL. E.g.
{"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}
- base64
It can be the base64 encoding of the image. E.g.
{"data:image/png;base64": "iVBORw0KGgoAAAANSUhEUgAAAGQAAABLAQMAAAC81rD0AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAABlBMVEUAAP7////DYP5JAAAAAWJLR0QB/wIt3gAAAAlwSFlzAAALEgAACxIB0t1+/AAAAAd0SU1FB+QIGBcKN7/nP/UAAAASSURBVDjLY2AYBaNgFIwCdAAABBoAAaNglfsAAAAZdEVYdGNvbW1lbnQAQ3JlYXRlZCB3aXRoIEdJTVDnr0DLAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDIwLTA4LTI0VDIzOjEwOjU1KzAzOjAwkHdeuQAAACV0RVh0ZGF0ZTptb2RpZnkAMjAyMC0wOC0yNFQyMzoxMDo1NSswMzowMOEq5gUAAAAASUVORK5CYII="}
- path
It can reference an image file on local disk. Both absolute and relative paths are supported. When the serialized image representation is stored in a file, a path relative to that file's containing folder is recommended, as is the case for flow IO data. E.g.
{"data:image/png;path": "./my-image.png"}
Please note that the `path` representation is not supported in deployment scenarios.
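The contract above can be sketched with a small pair of helpers. This is an illustrative reimplementation for clarity, not the SDK's own serializer:

```python
import re

def serialize_image(value: str, mime: str = "image/png", representation: str = "url") -> dict:
    # Build the special dict: {"data:<mime-type>;<representation>": <value>}
    return {f"data:{mime};{representation}": value}

def parse_image(serialized: dict):
    # Recover (mime_type, representation, value) from a serialized image dict.
    (key, value), = serialized.items()
    match = re.fullmatch(r"data:(?P<mime>[^;]+);(?P<repr>url|base64|path)", key)
    if match is None:
        raise ValueError(f"Not a serialized image: {key}")
    return match["mime"], match["repr"], value

print(parse_image(serialize_image("https://developer.microsoft.com/_devcom/images/logo-ms-social.png")))
```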
## Batch Input data
Batch input data containing images can take two formats:
1. The same jsonl format as regular batch input, except that some columns may be serialized image data or composite data types (dict/list) containing images. The serialized images can only be URL or base64. E.g.
```json
{"question": "How many colors are there in the image?", "input_image": {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}}
{"question": "What's this image about?", "input_image": {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/404.png"}}
```
2. A folder containing a jsonl file under the root path, which contains serialized images in the file reference format. The referenced files are stored in the folder, and their paths relative to the root are used as the path in the file reference. Here is a sample batch input; note that the name of `input.jsonl` is arbitrary as long as it's a jsonl file:
```
BatchInputFolder
|----input.jsonl
|----image1.png
|----image2.png
```
Content of `input.jsonl`
```json
{"question": "How many colors are there in the image?", "input_image": {"data:image/png;path": "image1.png"}}
{"question": "What's this image about?", "input_image": {"data:image/png;path": "image2.png"}}
```
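Producing the folder layout above can be automated. A minimal sketch follows; the `write_batch_input` helper is illustrative, not a promptflow API, and the placeholder image bytes stand in for real PNG content:

```python
import json
import tempfile
from pathlib import Path

def write_batch_input(root: Path, rows) -> Path:
    # rows: iterable of (question, image_filename, image_bytes).
    # Writes image files next to input.jsonl and references them by relative path.
    jsonl_path = root / "input.jsonl"
    with jsonl_path.open("w") as f:
        for question, image_name, image_bytes in rows:
            (root / image_name).write_bytes(image_bytes)
            record = {"question": question,
                      "input_image": {"data:image/png;path": image_name}}
            f.write(json.dumps(record) + "\n")
    return jsonl_path

with tempfile.TemporaryDirectory() as folder:
    path = write_batch_input(Path(folder), [
        ("How many colors are there in the image?", "image1.png", b"\x89PNG"),
    ])
    lines = path.read_text().splitlines()
    print(json.loads(lines[0])["input_image"])
# → {'data:image/png;path': 'image1.png'}
```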
| promptflow/docs/how-to-guides/process-image-in-flow.md/0 | {
"file_path": "promptflow/docs/how-to-guides/process-image-in-flow.md",
"repo_id": "promptflow",
"token_count": 1332
} | 3 |
# Azure OpenAI GPT-4 Turbo with Vision
## Introduction
The Azure OpenAI GPT-4 Turbo with Vision tool enables you to use your Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them.
## Prerequisites
- Create Azure OpenAI resources
    Create Azure OpenAI resources following the [instructions](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
- Create a GPT-4 Turbo with Vision deployment
Browse to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.
    Under **Management**, select **Deployments** and create a GPT-4 Turbo with Vision deployment by selecting model name `gpt-4` and model version `vision-preview`.
## Connection
Set up connections to the provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| connection | AzureOpenAI | The AzureOpenAI connection to be used in the tool. | Yes |
| deployment\_name | string | The language model to use. | Yes |
| prompt | string | The text prompt that the language model will use to generate its response. | Yes |
| max\_tokens | integer | The maximum number of tokens to generate in the response. Default is 512. | No |
| temperature | float | The randomness of the generated text. Default is 1. | No |
| stop | list | The stopping sequence for the generated text. Default is null. | No |
| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | Value that controls the model's behavior with regard to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | Value that controls the model's behavior with regard to generating rare phrases. Default is 0. | No |
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| string | The text of one response of conversation |
| promptflow/docs/reference/tools-reference/aoai-gpt4-turbo-vision.md/0 | {
"file_path": "promptflow/docs/reference/tools-reference/aoai-gpt4-turbo-vision.md",
"repo_id": "promptflow",
"token_count": 1178
} | 4 |
# Development Guide
## Prerequisites
```bash
pip install -r requirements.txt
pip install pytest pytest-mock
```
## Run tests
- Create connection config file by `cp connections.json.example connections.json`.
- Fill in fields manually in `connections.json`.
- `cd tests` and run `pytest -s -v` to run all tests.
## Run tests in CI
Use this [workflow](https://github.com/microsoft/promptflow/actions/workflows/tools_secret_upload.yml) to upload secrets to the key vault. The secrets you upload will be used in [tools tests](https://github.com/microsoft/promptflow/actions/workflows/tools_tests.yml). Note that you only need to upload the SECRETS.
> [!NOTE] After triggering the workflow, kindly request approval from Promptflow Support before proceeding further.
## PR check-in criteria
Here's a friendly heads-up! We've got some criteria for you to self-review your code changes. It's a great way to double-check your work and make sure everything is in order before you share it. Happy coding!
### Maintain code quality
The code you submit in your pull request should adhere to the following guidelines:
- **Maintain clean code**: The code should be clean, easy to understand, and well-structured to promote readability and maintainability.
- **Comment on your code**: Use comments to explain the purpose of certain code segments, particularly complex or non-obvious ones. This assists other developers in understanding your work.
- **Correct typos and grammatical errors**: Ensure that the code and file names are free from spelling mistakes and grammatical errors. This enhances the overall presentation and clarity of your code.
- **Avoid hard-coded values**: It is best to avoid hard-coding values unless absolutely necessary. Instead, use variables, constants, or configuration files, which can be easily modified without changing the source code.
- **Prevent code duplication**: Modify the original code to be more general instead of duplicating it. Code duplication can lead to longer, more complex code that is harder to maintain.
- **Implement effective error handling**: Good error handling is critical for troubleshooting customer issues and analyzing key metrics. Follow the guidelines provided in the [Error Handling Guideline](https://msdata.visualstudio.com/Vienna/_git/PromptFlow?path=/docs/error_handling_guidance.md&_a=preview) and reference the [exception.py](https://github.com/microsoft/promptflow/blob/main/src/promptflow-tools/promptflow/tools/exception.py) file for examples.
### Ensure high test coverage
Test coverage is crucial for maintaining code quality. Please adhere to the following guidelines:
- **Comprehensive Testing**: Include unit tests and e2e tests for any new functionality introduced.
- **Exception Testing**: Make sure to incorporate unit tests for all exceptions. These tests should verify error codes, error messages, and other important values. For reference, you can check out [TestHandleOpenAIError](https://github.com/microsoft/promptflow/blob/main/src/promptflow-tools/tests/test_handle_openai_error.py).
- **VSCode Testing**: If you're adding a new built-in tool, make sure to test your tool within the VSCode environment prior to submitting your PR. For more guidance on this, refer to [Use your tool from VSCode Extension](https://github.com/microsoft/promptflow/blob/main/docs/how-to-guides/develop-a-tool/create-and-use-tool-package.md#use-your-tool-from-vscode-extension).
### Add documents
Ensure to include documentation for your new built-in tool, following the guidelines below:
- **Error-Free Content**: Rectify all typographical and grammatical errors in the documentation. This will ensure clarity and readability.
- **Code Alignment**: The documentation should accurately reflect the current state of your code. Ensure that all described functionalities and behaviors match with your implemented code.
- **Functional Links**: Verify that all embedded links within the documentation are functioning properly, leading to the correct resources or references.
| promptflow/src/promptflow-tools/README.dev.md/0 | {
"file_path": "promptflow/src/promptflow-tools/README.dev.md",
"repo_id": "promptflow",
"token_count": 991
} | 5 |
promptflow.tools.aoai_gpt4v.AzureOpenAI.chat:
name: Azure OpenAI GPT-4 Turbo with Vision
description: Use Azure OpenAI GPT-4 Turbo with Vision to leverage AOAI vision ability.
type: custom_llm
module: promptflow.tools.aoai_gpt4v
class_name: AzureOpenAI
function: chat
tool_state: preview
icon:
light: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg==
dark: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC
default_prompt: |
# system:
As an AI assistant, your task involves interpreting images and responding to questions about the image.
Remember to provide accurate answers based on the information present in the image.
# user:
Can you tell me what the image depicts?
![image]({{image_input}})
inputs:
connection:
type:
- AzureOpenAIConnection
deployment_name:
type:
- string
enabled_by: connection
dynamic_list:
func_path: promptflow.tools.aoai_gpt4v.list_deployment_names
func_kwargs:
- name: connection
type:
- AzureOpenAIConnection
reference: ${inputs.connection}
allow_manual_entry: true
is_multi_select: false
temperature:
default: 1
type:
- double
top_p:
default: 1
type:
- double
max_tokens:
default: 512
type:
- int
stop:
default: ""
type:
- list
presence_penalty:
default: 0
type:
- double
frequency_penalty:
default: 0
type:
- double
| promptflow/src/promptflow-tools/promptflow/tools/yamls/aoai_gpt4v.yaml/0 | {
"file_path": "promptflow/src/promptflow-tools/promptflow/tools/yamls/aoai_gpt4v.yaml",
"repo_id": "promptflow",
"token_count": 1172
} | 6 |
#!/usr/bin/env python
import sys
import os
if os.environ.get('PF_INSTALLER') is None:
os.environ['PF_INSTALLER'] = 'PIP'
os.execl(sys.executable, sys.executable, '-m', 'promptflow._cli._pf.entry', *sys.argv[1:])
| promptflow/src/promptflow/pf/0 | {
"file_path": "promptflow/src/promptflow/pf",
"repo_id": "promptflow",
"token_count": 97
} | 7 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
| promptflow/src/promptflow/promptflow/_cli/_pf_azure/__init__.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/_pf_azure/__init__.py",
"repo_id": "promptflow",
"token_count": 54
} | 8 |
.env
__pycache__/
.promptflow/*
!.promptflow/flow.tools.json
.runs/
| promptflow/src/promptflow/promptflow/_cli/data/entry_flow/gitignore/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/data/entry_flow/gitignore",
"repo_id": "promptflow",
"token_count": 30
} | 9 |
{# Please replace the template with your own prompt. #}
Write a simple {{text}} program that displays a greeting message.
| promptflow/src/promptflow/promptflow/_cli/data/standard_flow/hello.jinja2/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/data/standard_flow/hello.jinja2",
"repo_id": "promptflow",
"token_count": 28
} | 10 |
import threading
from abc import ABC, abstractmethod
from promptflow.exceptions import UserErrorException
# to access azure ai services, we need to get the token with this audience
COGNITIVE_AUDIENCE = "https://cognitiveservices.azure.com/"
class TokenProviderABC(ABC):
def __init__(self) -> None:
super().__init__()
@abstractmethod
def get_token(self) -> str:
pass
class StaticTokenProvider(TokenProviderABC):
def __init__(self, token: str) -> None:
super().__init__()
self.token = token
def get_token(self) -> str:
return self.token
class AzureTokenProvider(TokenProviderABC):
_instance_lock = threading.Lock()
_instance = None
def __new__(cls, *args, **kwargs):
with cls._instance_lock:
if not cls._instance:
cls._instance = super().__new__(cls)
cls._instance._init_instance()
return cls._instance
def _init_instance(self):
try:
# Initialize a credential instance
from azure.identity import DefaultAzureCredential
self.credential = DefaultAzureCredential()
except ImportError as ex:
raise UserErrorException(
"Failed to initialize AzureTokenProvider. "
+ f"Please try 'pip install azure.identity' to install dependency, {ex.msg}."
)
def get_token(self):
audience = COGNITIVE_AUDIENCE
return self.credential.get_token(audience).token
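# `AzureTokenProvider` uses lock-guarded lazy construction in `__new__` so every
# caller shares one credential instance. A dependency-free sketch of the same
# singleton pattern (the class name here is illustrative, not part of the SDK):

```python
import threading

class SingleCredentialProvider:
    # Illustrative stand-in for AzureTokenProvider's singleton machinery.
    _instance_lock = threading.Lock()
    _instance = None

    def __new__(cls, *args, **kwargs):
        # The lock ensures two threads racing on first construction
        # cannot both create an instance.
        with cls._instance_lock:
            if not cls._instance:
                cls._instance = super().__new__(cls)
        return cls._instance

print(SingleCredentialProvider() is SingleCredentialProvider())
# → True
```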
| promptflow/src/promptflow/promptflow/_core/token_provider.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_core/token_provider.py",
"repo_id": "promptflow",
"token_count": 629
} | 11 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from sqlalchemy import INTEGER, TEXT, Column
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base
from promptflow._sdk._constants import ORCHESTRATOR_TABLE_NAME
from promptflow._sdk._errors import ExperimentNotFoundError
from .retry import sqlite_retry
from .session import mgmt_db_session
Base = declarative_base()
class Orchestrator(Base):
__tablename__ = ORCHESTRATOR_TABLE_NAME
experiment_name = Column(TEXT, primary_key=True)
pid = Column(INTEGER, nullable=True)
status = Column(TEXT, nullable=False)
# schema version, increase the version number when you change the schema
__pf_schema_version__ = "1"
@staticmethod
@sqlite_retry
def create_or_update(orchestrator: "Orchestrator") -> None:
session = mgmt_db_session()
experiment_name = orchestrator.experiment_name
try:
session.add(orchestrator)
session.commit()
except IntegrityError:
session = mgmt_db_session()
# Remove the _sa_instance_state
update_dict = {k: v for k, v in orchestrator.__dict__.items() if not k.startswith("_")}
session.query(Orchestrator).filter(Orchestrator.experiment_name == experiment_name).update(update_dict)
session.commit()
@staticmethod
@sqlite_retry
def get(experiment_name: str, raise_error=True) -> "Orchestrator":
with mgmt_db_session() as session:
orchestrator = session.query(Orchestrator).filter(Orchestrator.experiment_name == experiment_name).first()
if orchestrator is None and raise_error:
raise ExperimentNotFoundError(f"The experiment {experiment_name!r} hasn't been started yet.")
return orchestrator
@staticmethod
@sqlite_retry
def delete(name: str) -> None:
with mgmt_db_session() as session:
session.query(Orchestrator).filter(Orchestrator.experiment_name == name).delete()
session.commit()
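# `create_or_update` above is an insert-then-update upsert: attempt the INSERT,
# and on a primary-key IntegrityError fall back to an UPDATE of the existing row.
# The same idea in plain sqlite3 (column names mirror the model, but this sketch
# bypasses SQLAlchemy and the retry decorator):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orchestrator (experiment_name TEXT PRIMARY KEY, pid INTEGER, status TEXT)"
)

def create_or_update(name: str, pid: int, status: str) -> None:
    try:
        conn.execute("INSERT INTO orchestrator VALUES (?, ?, ?)", (name, pid, status))
    except sqlite3.IntegrityError:
        # Row already exists: fall back to updating it, like the ORM above.
        conn.execute(
            "UPDATE orchestrator SET pid = ?, status = ? WHERE experiment_name = ?",
            (pid, status, name),
        )
    conn.commit()

create_or_update("exp1", 100, "RUNNING")
create_or_update("exp1", 101, "TERMINATED")
print(conn.execute(
    "SELECT pid, status FROM orchestrator WHERE experiment_name = 'exp1'"
).fetchone())
# → (101, 'TERMINATED')
```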
| promptflow/src/promptflow/promptflow/_sdk/_orm/orchestrator.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_orm/orchestrator.py",
"repo_id": "promptflow",
"token_count": 807
} | 12 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import typing
from dataclasses import asdict, dataclass
from flask import Response, render_template
from promptflow._constants import SpanAttributeFieldName, SpanFieldName
from promptflow._sdk._service import Namespace, Resource
from promptflow._sdk._service.utils.utils import get_client_from_request
api = Namespace("ui", description="UI")
# parsers for query parameters
trace_parser = api.parser()
trace_parser.add_argument("session", type=str, required=False)
# use @dataclass for strong type
@dataclass
class TraceUIParser:
session_id: typing.Optional[str] = None
@staticmethod
def from_request() -> "TraceUIParser":
args = trace_parser.parse_args()
return TraceUIParser(
session_id=args.session,
)
@api.route("/traces")
class TraceUI(Resource):
def get(self):
from promptflow import PFClient
client: PFClient = get_client_from_request()
args = TraceUIParser.from_request()
line_runs = client._traces.list_line_runs(
session_id=args.session_id,
)
spans = client._traces.list_spans(
session_id=args.session_id,
)
main_spans, eval_spans = [], []
for span in spans:
attributes = span._content[SpanFieldName.ATTRIBUTES]
if SpanAttributeFieldName.REFERENCED_LINE_RUN_ID in attributes:
eval_spans.append(span)
else:
main_spans.append(span)
summaries = [asdict(line_run) for line_run in line_runs]
trace_ui_dict = {
"summaries": summaries,
"traces": [span._content for span in main_spans],
"evaluation_traces": [span._content for span in eval_spans],
}
# concat data for rendering dummy UI
summary = [
{
"trace_id": line_run.line_run_id,
"content": json.dumps(asdict(line_run), indent=4),
}
for line_run in line_runs
]
traces = [
{
"trace_id": span.trace_id,
"span_id": span.span_id,
"content": json.dumps(span._content, indent=4),
}
for span in main_spans
]
eval_traces = [
{
"trace_id": span.trace_id,
"span_id": span.span_id,
"content": json.dumps(span._content, indent=4),
}
for span in eval_spans
]
return Response(
render_template(
"ui_traces.html",
trace_ui_dict=json.dumps(trace_ui_dict),
summary=summary,
traces=traces,
eval_traces=eval_traces,
),
mimetype="text/html",
)
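# The span partitioning in `get` keys off the presence of the referenced-line-run
# attribute. Sketched standalone below; the attribute key string is illustrative,
# while the real value comes from SpanAttributeFieldName.REFERENCED_LINE_RUN_ID:

```python
def partition_spans(spans, eval_attribute="referenced.line_run_id"):
    # Spans carrying the attribute are evaluation traces; the rest are main traces.
    main, evals = [], []
    for span in spans:
        bucket = evals if eval_attribute in span.get("attributes", {}) else main
        bucket.append(span)
    return main, evals

spans = [
    {"id": 1, "attributes": {}},
    {"id": 2, "attributes": {"referenced.line_run_id": "run-1"}},
]
main, evals = partition_spans(spans)
print([s["id"] for s in main], [s["id"] for s in evals])
# → [1] [2]
```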
| promptflow/src/promptflow/promptflow/_sdk/_service/apis/ui.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_service/apis/ui.py",
"repo_id": "promptflow",
"token_count": 1399
} | 13 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import os
import re
from typing import Any
from promptflow._sdk._serving._errors import InvalidConnectionData, MissingConnectionProvider
from promptflow._sdk._serving.extension.default_extension import AppExtension
from promptflow._sdk._serving.monitor.data_collector import FlowDataCollector
from promptflow._sdk._serving.monitor.flow_monitor import FlowMonitor
from promptflow._sdk._serving.monitor.metrics import MetricsRecorder
from promptflow._sdk._serving.utils import decode_dict, get_pf_serving_env, normalize_connection_name
from promptflow._utils.retry_utils import retry
from promptflow._version import VERSION
from promptflow.contracts.flow import Flow
USER_AGENT = f"promptflow-cloud-serving/{VERSION}"
AML_DEPLOYMENT_RESOURCE_ID_REGEX = "/subscriptions/(.*)/resourceGroups/(.*)/providers/Microsoft.MachineLearningServices/workspaces/(.*)/onlineEndpoints/(.*)/deployments/(.*)" # noqa: E501
AML_CONNECTION_PROVIDER_TEMPLATE = "azureml:/subscriptions/{}/resourceGroups/{}/providers/Microsoft.MachineLearningServices/workspaces/{}" # noqa: E501
class AzureMLExtension(AppExtension):
"""AzureMLExtension is used to create extension for azureml serving."""
def __init__(self, logger, **kwargs):
super().__init__(logger=logger, **kwargs)
self.logger = logger
# parse promptflow project path
project_path: str = get_pf_serving_env("PROMPTFLOW_PROJECT_PATH")
if not project_path:
model_dir = os.getenv("AZUREML_MODEL_DIR", ".")
model_rootdir = os.listdir(model_dir)[0]
self.model_name = model_rootdir
project_path = os.path.join(model_dir, model_rootdir)
self.model_root_path = project_path
# mlflow support in base extension
self.project_path = self._get_mlflow_project_path(project_path)
# initialize connections or connection provider
# TODO: to be deprecated, remove in next major version
self.connections = self._get_env_connections_if_exist()
self.endpoint_name: str = None
self.deployment_name: str = None
self.connection_provider = None
self.credential = _get_managed_identity_credential_with_retry()
if len(self.connections) == 0:
self._initialize_connection_provider()
# initialize metrics common dimensions if exist
self.common_dimensions = {}
if self.endpoint_name:
self.common_dimensions["endpoint"] = self.endpoint_name
if self.deployment_name:
self.common_dimensions["deployment"] = self.deployment_name
env_dimensions = self._get_common_dimensions_from_env()
self.common_dimensions.update(env_dimensions)
# initialize flow monitor
data_collector = FlowDataCollector(self.logger)
metrics_recorder = self._get_metrics_recorder()
self.flow_monitor = FlowMonitor(
self.logger, self.get_flow_name(), data_collector, metrics_recorder=metrics_recorder
)
def get_flow_project_path(self) -> str:
return self.project_path
def get_flow_name(self) -> str:
return os.path.basename(self.model_root_path)
def get_connection_provider(self) -> str:
return self.connection_provider
def get_blueprints(self):
return self._get_default_blueprints()
def get_flow_monitor(self) -> FlowMonitor:
return self.flow_monitor
def get_override_connections(self, flow: Flow) -> (dict, dict):
connection_names = flow.get_connection_names()
connections = {}
connections_name_overrides = {}
for connection_name in connection_names:
# replace " " with "_" in connection name
normalized_name = normalize_connection_name(connection_name)
if normalized_name in os.environ:
override_conn = os.environ[normalized_name]
data_override = False
# try load connection as a json
try:
# data override
conn_data = json.loads(override_conn)
data_override = True
except ValueError:
# name override
self.logger.debug(f"Connection value is not json, enable name override for {connection_name}.")
connections_name_overrides[connection_name] = override_conn
if data_override:
try:
# try best to convert to connection, this is only for azureml deployment.
from promptflow.azure.operations._arm_connection_operations import ArmConnectionOperations
conn = ArmConnectionOperations._convert_to_connection_dict(connection_name, conn_data)
connections[connection_name] = conn
except Exception as e:
self.logger.warn(f"Failed to convert connection data to connection: {e}")
raise InvalidConnectionData(connection_name)
if len(connections_name_overrides) > 0:
self.logger.info(f"Connection name overrides: {connections_name_overrides}")
if len(connections) > 0:
self.logger.info(f"Connections data overrides: {connections.keys()}")
self.connections.update(connections)
return self.connections, connections_name_overrides
def raise_ex_on_invoker_initialization_failure(self, ex: Exception):
from promptflow.azure.operations._arm_connection_operations import UserAuthenticationError
# allow lazy authentication for UserAuthenticationError
return not isinstance(ex, UserAuthenticationError)
def get_user_agent(self) -> str:
return USER_AGENT
def get_metrics_common_dimensions(self):
return self.common_dimensions
def get_credential(self):
return self.credential
def _get_env_connections_if_exist(self):
# For local test app connections will be set.
connections = {}
env_connections = get_pf_serving_env("PROMPTFLOW_ENCODED_CONNECTIONS")
if env_connections:
connections = decode_dict(env_connections)
return connections
def _get_metrics_recorder(self):
# currently only support exporting it to azure monitor(application insights)
# TODO: add support for dynamic loading thus user can customize their own exporter.
custom_dimensions = self.get_metrics_common_dimensions()
try:
from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
# check whether azure monitor instrumentation key is set
instrumentation_key = os.getenv("AML_APP_INSIGHTS_KEY") or os.getenv("APPINSIGHTS_INSTRUMENTATIONKEY")
if instrumentation_key:
self.logger.info("Initialize metrics recorder with azure monitor metrics exporter...")
exporter = AzureMonitorMetricExporter(connection_string=f"InstrumentationKey={instrumentation_key}")
reader = PeriodicExportingMetricReader(exporter=exporter, export_interval_millis=60000)
return MetricsRecorder(self.logger, reader=reader, common_dimensions=custom_dimensions)
else:
self.logger.info("Azure monitor metrics exporter is not enabled, metrics will not be collected.")
except ImportError:
self.logger.warning("No metrics exporter module found, metrics will not be collected.")
return None
def _initialize_connection_provider(self):
# parse connection provider
self.connection_provider = get_pf_serving_env("PROMPTFLOW_CONNECTION_PROVIDER")
if not self.connection_provider:
pf_override = os.getenv("PRT_CONFIG_OVERRIDE", None)
if pf_override:
env_conf = pf_override.split(",")
env_conf_list = [setting.split("=") for setting in env_conf]
settings = {setting[0]: setting[1] for setting in env_conf_list}
self.subscription_id = settings.get("deployment.subscription_id", None)
self.resource_group = settings.get("deployment.resource_group", None)
self.workspace_name = settings.get("deployment.workspace_name", None)
self.endpoint_name = settings.get("deployment.endpoint_name", None)
self.deployment_name = settings.get("deployment.deployment_name", None)
else:
deploy_resource_id = os.getenv("AML_DEPLOYMENT_RESOURCE_ID", None)
if deploy_resource_id:
match_result = re.match(AML_DEPLOYMENT_RESOURCE_ID_REGEX, deploy_resource_id)
if match_result and len(match_result.groups()) == 5:
self.subscription_id = match_result.group(1)
self.resource_group = match_result.group(2)
self.workspace_name = match_result.group(3)
self.endpoint_name = match_result.group(4)
self.deployment_name = match_result.group(5)
else:
# raise exception if not found any valid connection provider setting
raise MissingConnectionProvider(
message="Missing connection provider, please check whether 'PROMPTFLOW_CONNECTION_PROVIDER' "
"is in your environment variable list."
) # noqa: E501
self.connection_provider = AML_CONNECTION_PROVIDER_TEMPLATE.format(
self.subscription_id, self.resource_group, self.workspace_name
) # noqa: E501
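`PRT_CONFIG_OVERRIDE` above is parsed as a comma-separated list of `key=value` settings. A small sketch of that parsing (hypothetical helper name; this version uses `split("=", 1)` so values containing `=` survive, a slight hardening over the in-place code):

```python
def parse_prt_config_override(raw: str) -> dict:
    """Parse "key1=value1,key2=value2" into a settings dict."""
    return dict(pair.split("=", 1) for pair in raw.split(",") if pair)


settings = parse_prt_config_override(
    "deployment.subscription_id=sub-123,deployment.workspace_name=ws"
)
```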
def _get_managed_identity_credential_with_retry(**kwargs):
from azure.identity import ManagedIdentityCredential
from azure.identity._constants import EnvironmentVariables
class ManagedIdentityCredentialWithRetry(ManagedIdentityCredential):
def __init__(self, **kwargs: Any) -> None:
client_id = kwargs.pop("client_id", None) or os.environ.get(EnvironmentVariables.AZURE_CLIENT_ID)
super().__init__(client_id=client_id, **kwargs)
@retry(Exception)
def get_token(self, *scopes, **kwargs):
return super().get_token(*scopes, **kwargs)
return ManagedIdentityCredentialWithRetry(**kwargs)
| promptflow/src/promptflow/promptflow/_sdk/_serving/extension/azureml_extension.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/extension/azureml_extension.py",
"repo_id": "promptflow",
"token_count": 4406
} | 14 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import argparse
import copy
import json
import os
import platform
import signal
import subprocess
import sys
import tempfile
import uuid
from concurrent import futures
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from pathlib import Path
from typing import Union
import psutil
# For a process started in detach mode, stdout/stderr will be None.
# To avoid exceptions from stdout/stderr calls in dependency packages, point stdout/stderr to devnull.
if sys.stdout is None:
sys.stdout = open(os.devnull, "w")
if sys.stderr is None:
sys.stderr = sys.stdout
from promptflow._sdk._constants import (
PF_TRACE_CONTEXT,
PROMPT_FLOW_DIR_NAME,
ContextAttributeKey,
ExperimentNodeRunStatus,
ExperimentNodeType,
ExperimentStatus,
FlowRunProperties,
RunTypes,
)
from promptflow._sdk._errors import (
ExperimentCommandRunError,
ExperimentHasCycle,
ExperimentNodeRunFailedError,
ExperimentNotFoundError,
ExperimentValueError,
)
from promptflow._sdk._orm.experiment import Experiment as ORMExperiment
from promptflow._sdk._orm.experiment_node_run import ExperimentNodeRun as ORMExperimentNodeRun
from promptflow._sdk._orm.orchestrator import Orchestrator as ORMOrchestrator
from promptflow._sdk._orm.run_info import RunInfo as ORMRunInfo
from promptflow._sdk._submitter import RunSubmitter
from promptflow._sdk._submitter.utils import (
SubmitterHelper,
_calculate_snapshot,
_start_process_in_background,
_stop_orchestrator_process,
_windows_stop_handler,
)
from promptflow._sdk.entities import Run
from promptflow._sdk.entities._experiment import Experiment, ExperimentTemplate
from promptflow._sdk.entities._flow import ProtectedFlow
from promptflow._sdk.operations import RunOperations
from promptflow._sdk.operations._local_storage_operations import LocalStorageOperations
from promptflow._utils.inputs_mapping_utils import apply_inputs_mapping
from promptflow._utils.load_data import load_data
from promptflow._utils.logger_utils import LoggerFactory
from promptflow.contracts.run_info import Status
from promptflow.contracts.run_mode import RunMode
from promptflow.exceptions import ErrorTarget, UserErrorException
logger = LoggerFactory.get_logger(name=__name__)
class ExperimentOrchestrator:
"""Experiment orchestrator, responsible for experiment running and status checking."""
def __init__(self, client, experiment: Experiment = None):
self.run_operations = client.runs
self.experiment_operations = client._experiments
self._client = client
self.experiment = experiment
self._nodes = {node.name: node for node in self.experiment.nodes} if experiment else {}
# A key-value pair of node name and run info
self._node_runs = {}
def test(
self, flow: Union[str, Path], template: ExperimentTemplate, inputs=None, environment_variables=None, **kwargs
):
"""Test flow in experiment.
:param flow: Flow to test.
:type flow: Union[str, Path]
:param template: Experiment template to test.
:type template: ~promptflow.entities.ExperimentTemplate
:param inputs: Input parameters for flow.
:type inputs: dict
:param environment_variables: Environment variables for flow.
:type environment_variables: dict
"""
flow_path = Path(flow).resolve().absolute()
logger.info(f"Testing flow {flow_path.as_posix()} in experiment {template._base_path.absolute().as_posix()}.")
inputs, environment_variables = inputs or {}, environment_variables or {}
# Find start nodes, must be flow nodes
start_nodes = [
node
for node in template.nodes
if node.type == ExperimentNodeType.FLOW
and ProtectedFlow._get_flow_definition(node.path) == ProtectedFlow._get_flow_definition(flow_path)
]
if not start_nodes:
raise ExperimentValueError(f"Flow {flow_path.as_posix()} not found in experiment {template.dir_name!r}.")
logger.info(f"Found start nodes {[node.name for node in start_nodes]} for experiment.")
nodes_to_test = ExperimentHelper.resolve_nodes_to_execute(template, start_nodes)
logger.info(f"Resolved nodes to test {[node.name for node in nodes_to_test]} for experiment.")
# If inputs, use the inputs as experiment data, else read the first line in template data
test_context = ExperimentTemplateTestContext(
template,
inputs=inputs,
environment_variables=environment_variables,
output_path=kwargs.get("output_path"),
session=kwargs.get("session"),
)
for node in nodes_to_test:
logger.info(f"Testing node {node.name}...")
if node in start_nodes:
# Start nodes inputs should be updated, as original value could be a constant without data reference.
# Filter unknown key out to avoid warning (case: user input with eval key to override data).
node.inputs = {**node.inputs, **{k: v for k, v in inputs.items() if k in node.inputs}}
node_result = self._test_node(node, test_context)
test_context.add_node_result(node.name, node_result)
logger.info("Testing completed. See full logs at %s.", test_context.output_path.as_posix())
return test_context.node_results
def _test_node(self, node, test_context) -> Run:
if node.type == ExperimentNodeType.FLOW:
return self._test_flow_node(node, test_context)
elif node.type == ExperimentNodeType.COMMAND:
return self._test_command_node(node, test_context)
raise ExperimentValueError(f"Unknown experiment node {node.name!r} type {node.type!r}")
def _test_flow_node(self, node, test_context):
# Resolve experiment related inputs
inputs_mapping = ExperimentHelper.resolve_column_mapping(node.name, node.inputs, test_context.test_inputs)
data, runs = ExperimentHelper.get_referenced_data_and_run(
node.name, node.inputs, test_context.test_data, test_context.node_results
)
# Add data, run inputs/outputs to binding context for inputs mapping resolve.
binding_context = {**{f"data.{k}": v for k, v in data.items()}, **{f"{k}.outputs": v for k, v in runs.items()}}
binding_context.update(**{f"{k}.inputs": test_context.node_inputs.get(k, {}) for k in runs.keys()})
logger.debug(f"Node {node.name!r} binding context {binding_context}.")
# E.g. inputs_mapping: {'url': '${data.my_data.url}'} inputs_data: {"data.my_data": {"url": "http://abc"}}
inputs = apply_inputs_mapping(inputs=binding_context, inputs_mapping=inputs_mapping)
logger.debug(f"Resolved node {node.name!r} inputs {inputs}.")
test_context.add_node_inputs(node.name, inputs)
node_context = test_context.get_node_context(node.name, is_flow=True, test=True)
return self._client.flows.test(
flow=node.path,
environment_variables={**test_context.environment_variables, **node_context},
inputs=inputs,
output_path=test_context.output_path / node.name,
dump_test_result=True,
stream_output=False,
run_id=test_context.node_name_to_id[node.name],
session=test_context.session,
)
def _test_command_node(self, *args, **kwargs):
raise NotImplementedError
def start(self, nodes=None, from_nodes=None):
"""Start an execution of nodes.
Start an orchestrator to schedule node execution according to topological ordering.
:param nodes: Nodes to be executed.
:type nodes: list
:param from_nodes: The branches in experiment to be executed.
:type from_nodes: list
:return: Experiment info.
:rtype: ~promptflow.entities.Experiment
"""
# Start experiment
experiment = self.experiment
logger.info(f"Starting experiment {experiment.name}.")
experiment.status = ExperimentStatus.IN_PROGRESS
experiment.last_start_time = datetime.utcnow().isoformat()
experiment.last_end_time = None
context = ExperimentTemplateContext(experiment)
self.experiment_operations.create_or_update(experiment)
self._update_orchestrator_record(status=ExperimentStatus.IN_PROGRESS, pid=os.getpid())
self._start_orchestrator(nodes=nodes, from_nodes=from_nodes, context=context)
return experiment
def async_start(self, executable_path=None, nodes=None, from_nodes=None):
"""Start an asynchronous execution of an experiment.
:param executable_path: Python path when executing the experiment.
:type executable_path: str
:param nodes: Nodes to be executed.
:type nodes: list
:param from_nodes: The branches in experiment to be executed.
:type from_nodes: list
:return: Experiment info.
:rtype: ~promptflow.entities.Experiment
"""
logger.info(f"Queuing experiment {self.experiment.name}.")
self._update_orchestrator_record(status=ExperimentStatus.QUEUING)
executable_path = executable_path or sys.executable
args = [executable_path, __file__, "start", "--experiment", self.experiment.name]
if nodes:
args = args + ["--nodes"] + nodes
if from_nodes:
args = args + ["--from-nodes"] + from_nodes
# Start an orchestrator process using detach mode
logger.debug(f"Start experiment {self.experiment.name} in background.")
_start_process_in_background(args, executable_path)
return self.experiment
def _update_orchestrator_record(self, status, pid=None):
"""Update orchestrator table data"""
orm_orchestrator = ORMOrchestrator(
experiment_name=self.experiment.name,
pid=pid,
status=status,
)
ORMOrchestrator.create_or_update(orm_orchestrator)
self.experiment.status = status
last_start_time, last_end_time = None, None
if status == ExperimentStatus.IN_PROGRESS:
last_start_time = datetime.utcnow().isoformat()
elif status == ExperimentStatus.TERMINATED:
last_end_time = datetime.utcnow().isoformat()
return ORMExperiment.get(name=self.experiment.name).update(
status=self.experiment.status,
last_start_time=last_start_time,
last_end_time=last_end_time,
node_runs=json.dumps(self.experiment.node_runs),
)
def _start_orchestrator(self, context, nodes=None, from_nodes=None):
"""
Orchestrate the execution of nodes in the experiment.
Determine node execution order through topological sorting.
:param context: Experiment context.
:type context: ~promptflow._sdk._submitter.ExperimentTemplateContext
:param nodes: Nodes to be executed.
:type nodes: list
:param from_nodes: The branches in experiment to be executed.
:type from_nodes: list
"""
def prepare_edges(node):
"""Get all in-degree nodes of this node."""
node_names = set()
for input_value in node.inputs.values():
if ExperimentHelper._is_node_reference(input_value):
referenced_node_name = input_value.split(".")[0].replace("${", "")
node_names.add(referenced_node_name)
return node_names
def generate_node_mapping_by_nodes(from_nodes):
all_node_edges_mapping = {node.name: prepare_edges(node) for node in self.experiment.nodes}
node_edges_mapping, next_nodes = {node: all_node_edges_mapping[node] for node in from_nodes}, from_nodes
while next_nodes:
linked_nodes = set()
for node in next_nodes:
in_degree_nodes = {k: v for k, v in all_node_edges_mapping.items() if node in v}
linked_nodes.update(set(in_degree_nodes.keys()) - set(node_edges_mapping.keys()))
node_edges_mapping.update(in_degree_nodes)
next_nodes = linked_nodes
all_nodes = set()
for nodes in node_edges_mapping.values():
all_nodes.update(nodes)
pre_nodes = all_nodes - set(node_edges_mapping.keys())
logger.debug(f"Experiment nodes edges: {node_edges_mapping!r}, pre nodes: {pre_nodes!r}")
return node_edges_mapping, pre_nodes
def get_next_executable_nodes(completed_node=None):
"""Get the node to be executed in the experiment.
:param completed_node: The completed node is used to update node-edge mapping in experiment run.
:type completed_node: str
"""
if completed_node:
# Update node edge mapping in current experiment run.
# Remove the edge of the node that has been completed.
for referenced_nodes in node_edges_mapping.values():
referenced_nodes.discard(completed_node)
next_executable_nodes = [
self._nodes[node_name] for node_name, edges in node_edges_mapping.items() if len(edges) == 0
]
for node in next_executable_nodes:
node_edges_mapping.pop(node.name)
return next_executable_nodes
def check_in_degree_node_outputs(pre_nodes):
"""Check whether the outputs of in-degree nodes already exist; return False if not."""
node_runs = {
node_name: next(filter(lambda x: x["status"] == ExperimentNodeRunStatus.COMPLETED, node_runs), None)
for node_name, node_runs in self.experiment.node_runs.items()
}
is_in_degree_nodes_ready = True
for in_degree_node in pre_nodes:
is_in_degree_nodes_ready = is_in_degree_nodes_ready and in_degree_node in node_runs
if node_runs.get(in_degree_node, None):
node_run_info = self.run_operations.get(node_runs[in_degree_node]["name"])
self._node_runs[in_degree_node] = node_run_info
output_path = node_run_info.properties.get("output_path", None)
is_in_degree_nodes_ready = is_in_degree_nodes_ready and Path(output_path).exists()
else:
is_in_degree_nodes_ready = False
logger.warning(f"Cannot find the outputs of {in_degree_node}")
return is_in_degree_nodes_ready
def stop_process():
"""
Post process of stop experiment. It will update status of all running node to canceled.
And update status of experiment to terminated. Then terminate the orchestrator process.
"""
executor.shutdown(wait=False)
for future, node in future_to_node_run.items():
if future.running():
# Update status of running nodes to canceled.
node.update_exp_run_node(status=ExperimentNodeRunStatus.CANCELED)
self.experiment._append_node_run(node.node.name, ORMRunInfo.get(node.name))
# Update status experiment to terminated.
self._update_orchestrator_record(status=ExperimentStatus.TERMINATED)
# Terminate orchestrator process.
sys.exit(1)
if platform.system() == "Windows":
import threading
# Because signal handlers do not work well on Windows, the orchestrator starts a daemon thread
# that creates a named pipe to receive cancel signals from other processes.
# Related issue of signal handler in Windows: https://bugs.python.org/issue26350
pipe_thread = threading.Thread(
target=_windows_stop_handler,
args=(
self.experiment.name,
stop_process,
),
)
pipe_thread.daemon = True
pipe_thread.start()
else:
def stop_handler(signum, frame):
"""
Post-processing when the experiment is canceled.
Terminate all executing nodes and update node status.
"""
if signum == signal.SIGTERM:
stop_process()
signal.signal(signal.SIGTERM, stop_handler)
# TODO set max workers
executor = ThreadPoolExecutor(max_workers=None)
future_to_node_run = {}
if from_nodes:
# Executed from specified nodes
# check in-degree nodes outputs exist
node_edges_mapping, pre_nodes = generate_node_mapping_by_nodes(from_nodes)
if not check_in_degree_node_outputs(pre_nodes):
raise UserErrorException(f"The output(s) of in-degree nodes for {from_nodes} do not exist.")
next_execute_nodes = [self._nodes[name] for name in from_nodes]
elif nodes:
# Executed specified nodes
# check in-degree nodes outputs exist
pre_nodes = set()
node_mapping = {node.name: node for node in self.experiment.nodes}
for node_name in nodes:
pre_nodes.update(prepare_edges(node_mapping[node_name]))
if not check_in_degree_node_outputs(pre_nodes):
raise UserErrorException(f"The output(s) of in-degree nodes for {nodes} do not exist.")
node_edges_mapping = {}
next_execute_nodes = [self._nodes[name] for name in nodes]
else:
# Execute all nodes in experiment.
node_edges_mapping = {node.name: prepare_edges(node) for node in self.experiment.nodes}
logger.debug(f"Experiment nodes edges: {node_edges_mapping!r}")
next_execute_nodes = get_next_executable_nodes()
while len(next_execute_nodes) != 0 or len(future_to_node_run) != 0:
for node in next_execute_nodes:
# Start node execution.
logger.info(f"Running node {node.name}.")
exp_node_run = ExperimentNodeRun(
node=node,
context=context,
experiment=self.experiment,
node_runs=self._node_runs,
client=self._client,
)
future_to_node_run[executor.submit(exp_node_run.submit)] = exp_node_run
completed_futures, _ = futures.wait(future_to_node_run.keys(), return_when=futures.FIRST_COMPLETED)
next_execute_nodes = []
for future in completed_futures:
node_name = future_to_node_run[future].node.name
try:
self._node_runs[node_name] = future.result()
if not nodes:
# Get next executable nodes by completed nodes.
next_execute_nodes.extend(get_next_executable_nodes(completed_node=node_name))
self.experiment._append_node_run(node_name, self._node_runs[node_name])
del future_to_node_run[future]
except Exception as e:
executor.shutdown(wait=False)
# Handle failed execution, update orchestrator and experiment info
self.experiment._append_node_run(node_name, future_to_node_run[future])
self._update_orchestrator_record(status=ExperimentStatus.TERMINATED)
logger.warning(f"Node {future_to_node_run[future].node.name} failed to execute with error {e}.")
raise ExperimentNodeRunFailedError(
f"Node {future_to_node_run[future].node.name} failed to execute with error {e}."
)
executor.shutdown(wait=False)
self._update_orchestrator_record(status=ExperimentStatus.TERMINATED)
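The scheduling loop above is effectively Kahn's topological sort: a node is submitted once its set of in-degree edges becomes empty, and a completed node is discarded from every remaining edge set. A self-contained sketch of that ordering (illustrative only; `topological_batches` is not a promptflow function):

```python
def topological_batches(edges: dict) -> list:
    """edges maps node -> set of nodes it depends on; returns execution waves."""
    remaining = {node: set(deps) for node, deps in edges.items()}
    waves = []
    while remaining:
        # Nodes with no unresolved dependencies are ready to run.
        ready = [node for node, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("cycle detected in experiment nodes")
        waves.append(sorted(ready))
        for node in ready:
            remaining.pop(node)
        # "Complete" the ready nodes by removing them from remaining edge sets.
        for deps in remaining.values():
            deps.difference_update(ready)
    return waves


# main has no dependencies; eval depends on main's output.
waves = topological_batches({"main": set(), "eval": {"main"}})
```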
def stop(self):
"""Stop an in-progress experiment.
If the experiment is not in progress, a user error is raised.
On Linux, a terminate signal is sent to the orchestrator process; on Windows, the signal is passed via a named pipe.
When the process receives the terminate signal, it updates running nodes to canceled and terminates the process.
:return: Stopped experiment info.
:rtype: ~promptflow.entities.Experiment
"""
orchestrator = ORMOrchestrator.get(experiment_name=self.experiment.name)
if orchestrator.status in [ExperimentStatus.NOT_STARTED, ExperimentStatus.QUEUING, ExperimentStatus.TERMINATED]:
raise UserErrorException(
target=ErrorTarget.CONTROL_PLANE_SDK,
message="Experiment cannot be stopped if it is not started.",
)
_stop_orchestrator_process(orchestrator)
@staticmethod
def get_status(experiment_name):
"""Check the status of the orchestrator
The status recorded in the database and the actual process status may be inconsistent.
Need to check the orchestrator process status.
:return: Orchestrator status.
:rtype: str
"""
def set_orchestrator_terminated():
logger.info(
"The orchestrator process terminates abnormally, "
f"status of {experiment_name} is updated to terminated."
)
orm_orchestrator.status = ExperimentStatus.TERMINATED
ORMOrchestrator.create_or_update(orm_orchestrator)
ORMExperiment.get(name=experiment_name).update(status=ExperimentStatus.TERMINATED)
try:
orm_orchestrator = ORMOrchestrator.get(experiment_name=experiment_name)
if orm_orchestrator.status == ExperimentStatus.IN_PROGRESS:
try:
process = psutil.Process(orm_orchestrator.pid)
if experiment_name not in process.cmdline():
# This process is not the process used to start the orchestrator
# update experiment to terminated.
set_orchestrator_terminated()
return orm_orchestrator.status
except psutil.NoSuchProcess:
# The process is terminated abnormally, update experiment to terminated.
set_orchestrator_terminated()
return ExperimentStatus.TERMINATED
else:
return orm_orchestrator.status
except ExperimentNotFoundError:
return ExperimentStatus.NOT_STARTED
class ExperimentNodeRun(Run):
"""Experiment node run, includes experiment running context, like data, inputs and runs."""
def __init__(self, node, experiment, context, node_runs, client, **kwargs):
from promptflow._sdk._configuration import Configuration
self.node = node
self.context = context
self.experiment = experiment
self.experiment_data = {data.name: data for data in experiment.data}
self.experiment_inputs = {input.name: input for input in experiment.inputs}
self.node_runs = node_runs
self.client = client
self.run_operations = self.client.runs
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
self.node_context = self.context.get_node_context(
node.name, is_flow=node.type == ExperimentNodeType.FLOW, test=False
)
# Config run output path to experiment output folder
run_output_path = (Path(experiment._output_dir) / "runs" / node.name).resolve().absolute().as_posix()
super().__init__(
# Use node name as prefix for run name?
type=RunTypes.COMMAND if node.type == ExperimentNodeType.COMMAND else RunTypes.BATCH,
name=f"{node.name}_attempt{timestamp}",
display_name=node.display_name or node.name,
column_mapping=node.inputs,
variant=getattr(node, "variant", None),
flow=self._get_node_path(),
outputs=getattr(node, "outputs", None),
connections=getattr(node, "connections", None),
command=getattr(node, "command", None),
environment_variables=node.environment_variables,
config=Configuration(overrides={Configuration.RUN_OUTPUT_PATH: run_output_path}),
**kwargs,
)
self._resolve_column_mapping()
self._input_data = self._resolve_input_dirs()
self.snapshot_id = _calculate_snapshot(self.column_mapping, self._input_data, self.flow)
def _resolve_column_mapping(self):
"""Resolve column mapping with experiment inputs to constant values."""
logger.info(f"Start resolve node {self.node.name!r} column mapping.")
resolved_mapping = {}
for name, value in self.column_mapping.items():
if not isinstance(value, str) or not value.startswith("${inputs."):
resolved_mapping[name] = value
continue
input_name = value.split(".")[1].replace("}", "")
if input_name not in self.experiment_inputs:
raise ExperimentValueError(
f"Node {self.node.name!r} inputs {value!r} related experiment input {input_name!r} not found."
)
resolved_mapping[name] = self.experiment_inputs[input_name].default
logger.debug(f"Resolved node {self.node.name!r} column mapping {resolved_mapping}.")
self.column_mapping = resolved_mapping
def _resolve_input_dirs(self):
logger.info("Start resolve node %s input dirs.", self.node.name)
# Get the node referenced data and run
referenced_data, referenced_run = ExperimentHelper.get_referenced_data_and_run(
node_name=self.node.name,
column_mapping=self.column_mapping,
experiment_data=self.experiment_data,
experiment_runs=self.node_runs,
)
if len(referenced_data) > 1:
raise ExperimentValueError(
f"Experiment flow node {self.node.name!r} has multiple data inputs {referenced_data}, "
"only 1 is expected."
)
if len(referenced_run) > 1:
raise ExperimentValueError(
f"Experiment flow node {self.node.name!r} has multiple run inputs {referenced_run}, "
"only 1 is expected."
)
(data_name, data_obj) = next(iter(referenced_data.items())) if referenced_data else (None, None)
(run_name, run_obj) = next(iter(referenced_run.items())) if referenced_run else (None, None)
logger.debug(f"Resolve node {self.node.name} referenced data {data_name!r}, run {run_name!r}.")
# Build inputs from experiment data and run
result = {}
if data_obj:
result.update({f"data.{data_name}": data_obj.path})
if run_obj:
result.update(ExperimentHelper.resolve_binding_from_run(run_name, run_obj, self.run_operations))
result = {k: str(Path(v).resolve()) for k, v in result.items() if v is not None}
logger.debug(f"Resolved node {self.node.name} input dirs {result}.")
return result
def _get_node_path(self):
if self.node.type == ExperimentNodeType.FLOW:
return self.node.path
elif self.node.type == ExperimentNodeType.COMMAND:
return self.node.code
elif self.node.type == ExperimentNodeType.CHAT_GROUP:
raise NotImplementedError("Chat group node in experiment is not supported yet.")
raise ExperimentValueError(f"Unknown experiment node {self.node.name!r} type {self.node.type!r}")
def _run_node(self) -> Run:
if self.node.type == ExperimentNodeType.FLOW:
return self._run_flow_node()
elif self.node.type == ExperimentNodeType.COMMAND:
return self._run_command_node()
elif self.node.type == ExperimentNodeType.CHAT_GROUP:
return self._run_chat_group_node()
raise ExperimentValueError(f"Unknown experiment node {self.node.name!r} type {self.node.type!r}")
def _run_flow_node(self):
logger.debug(f"Creating flow run {self.name}")
exp_node_run_submitter = ExperimentFlowRunSubmitter(self.client)
# e.g. attributes: {"experiment": xxx, "reference_batch_run_id": xxx}
return exp_node_run_submitter.submit(self, session=self.context.session, attributes=self.node_context)
def _run_command_node(self):
logger.debug(f"Creating command run {self.name}")
exp_command_submitter = ExperimentCommandSubmitter(self.run_operations)
return exp_command_submitter.submit(self)
def _run_chat_group_node(self):
raise NotImplementedError("Chat group node in experiment is not supported yet.")
def update_exp_run_node(self, status):
node_run = ORMExperimentNodeRun(
run_id=self.name,
snapshot_id=self.snapshot_id,
node_name=self.node.name,
experiment_name=self.experiment.name,
status=status,
)
ORMExperimentNodeRun.create_or_update(node_run)
def submit(self):
# Get snapshot id from exp_node_run
node_run = ORMExperimentNodeRun.get_completed_node_by_snapshot_id(
snapshot_id=self.snapshot_id, experiment_name=self.experiment.name, raise_error=False
)
if node_run and node_run.run_id and node_run.status == ExperimentNodeRunStatus.COMPLETED:
run_info = self.run_operations.get(node_run.run_id)
output_path = run_info.properties.get("output_path", None)
if output_path and Path(output_path).exists():
# TODO Whether need to link used node output folder in the experiment run folder
logger.info(f"Reuse existing node run {run_info.name} for node {self.node.name}.")
run_info.name = self.name
return run_info
# Update exp node run record
self.update_exp_run_node(status=ExperimentNodeRunStatus.IN_PROGRESS)
node_run_result = self._run_node()
logger.info(f"Node {self.node.name} run {self.name} completed, outputs to {node_run_result._output_path}.")
return node_run_result
class ExperimentTemplateContext:
def __init__(self, template: ExperimentTemplate, environment_variables=None, session=None):
"""Context for experiment template.
:param template: Template object to get definition of experiment.
:param environment_variables: Environment variables specified for test.
:param session: The session id for the test trace.
"""
self.template = template
self.environment_variables = environment_variables or {}
self._experiment_context = self._get_experiment_context()
# Generate line run id for node
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
self.node_name_to_id = {node.name: f"{node.name}_attempt{timestamp}" for node in template.nodes}
self.node_name_to_referenced_id = self._prepare_referenced_ids()
# All run/line run in experiment should use same session
self.session = session or str(uuid.uuid4())
def _prepare_referenced_ids(self):
"""Change name: [referenced_name] to name: [referenced_id]."""
edges = ExperimentHelper.get_experiment_node_edges(self.template.nodes, flow_only=True)
# Remove non flow node
# Calculate root parent for each node
node_parent = {node.name: node.name for node in self.template.nodes}
def _find_root(node_name):
if node_parent[node_name] != node_name:
node_parent[node_name] = _find_root(node_parent[node_name])
return node_parent[node_name]
def _union(node_name1, node_name2):
root1, root2 = _find_root(node_name1), _find_root(node_name2)
if root1 != root2:
node_parent[root1] = root2
# Union node by edges, e.g. edge: eval: [main]
for node_name, referenced_names in edges.items():
for referenced_name in referenced_names:
_union(node_name, referenced_name)
result = {
name: [self.node_name_to_id[_find_root(referenced_name)] for referenced_name in edges]
for name, edges in edges.items()
}
logger.debug(f"Resolved node name to id mapping: {self.node_name_to_id}, referenced id mapping {result}.")
return result
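The `_find_root`/`_union` helpers above are a plain union-find (with path compression, no rank), used to collapse connected flow nodes onto one root. A standalone sketch of the same idea (hypothetical helper name):

```python
def group_roots(nodes: list, edges: dict) -> dict:
    """Union-find: map each node to the root of its connected component."""
    parent = {n: n for n in nodes}

    def find(n):
        if parent[n] != n:
            parent[n] = find(parent[n])  # path compression
        return parent[n]

    # Union nodes along each edge, e.g. {"eval": ["main"]}.
    for node, referenced in edges.items():
        for ref in referenced:
            root1, root2 = find(node), find(ref)
            if root1 != root2:
                parent[root1] = root2
    return {n: find(n) for n in nodes}


roots = group_roots(["main", "eval", "other"], {"eval": ["main"]})
```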
def _get_experiment_context(self):
"""Get the experiment context required for trace."""
if not self.template._source_path:
return {}
return {ContextAttributeKey.EXPERIMENT: Path(self.template._source_path).resolve().absolute().as_posix()}
def get_node_context(self, node_name, is_flow, test=False):
"""Get the context for a node."""
node_context = {**self._experiment_context}
referenced_key = (
ContextAttributeKey.REFERENCED_LINE_RUN_ID if test else ContextAttributeKey.REFERENCED_BATCH_RUN_ID
)
referenced_ids = self.node_name_to_referenced_id.get(node_name, [])
# Add reference context only for flow node
if referenced_ids and is_flow:
node_context[referenced_key] = next(iter(referenced_ids))
if not test:
# Return node context dict directly and will be set as trace attribute
return node_context
# Return the full json context for test
global_context = os.environ.get(PF_TRACE_CONTEXT)
# Expected global context: {"endpoint": "..", "attributes": {..}}
global_context = json.loads(global_context) if global_context else {"endpoint": "", "attributes": {}}
global_context["attributes"].update(node_context)
return {PF_TRACE_CONTEXT: json.dumps(global_context)}
class ExperimentTemplateTestContext(ExperimentTemplateContext):
def __init__(
self, template: ExperimentTemplate, inputs=None, environment_variables=None, output_path=None, session=None
):
"""
Test context for experiment template.
:param template: Template object to get definition of experiment.
:param inputs: User inputs when calling test command.
:param environment_variables: Environment variables specified for test.
:param output_path: The custom output path.
:param session: The session id for the test trace.
"""
super().__init__(template, environment_variables=environment_variables, session=session)
self.node_results = {} # E.g. {'main': {'category': 'xx', 'evidence': 'xx'}}
self.node_inputs = {} # E.g. {'main': {'url': 'https://abc'}}
self.test_data = ExperimentHelper.prepare_test_data(inputs, template)
self.test_inputs = {input.name: input.default for input in template.inputs}
# TODO: Update session part after test session is supported
if output_path:
self.output_path = Path(output_path)
else:
self.output_path = (
Path(tempfile.gettempdir()) / PROMPT_FLOW_DIR_NAME / "sessions/default" / template.dir_name
)
def add_node_inputs(self, name, inputs):
self.node_inputs[name] = inputs
def add_node_result(self, name, result):
self.node_results[name] = result
class ExperimentHelper:
@staticmethod
def prepare_test_data(inputs, template: ExperimentTemplate) -> dict:
"""Prepare test data.
If inputs is given, use it for all test data.
Else, read the first line of template data path for test."""
template_test_data = {}
for data in template.data:
data_line = inputs or next(iter(load_data(local_path=data.path)), None)
if not data_line:
raise ExperimentValueError(f"Experiment data {data.name!r} is empty.")
template_test_data[data.name] = data_line
return template_test_data
@staticmethod
def get_referenced_data_and_run(
node_name: str, column_mapping: dict, experiment_data: dict, experiment_runs: dict
) -> tuple:
"""Get the node referenced data and runs from dict."""
data, run = {}, {}
for value in column_mapping.values():
if not isinstance(value, str):
continue
if value.startswith("${data."):
name = value.split(".")[1].replace("}", "")
if name not in experiment_data:
raise ExperimentValueError(
f"Node {node_name!r} inputs {value!r} related experiment data {name!r} not found."
)
data[name] = experiment_data[name]
elif value.startswith("${"):
name = value.split(".")[0].replace("${", "")
if name not in experiment_runs:
raise ExperimentValueError(
f"Node {node_name!r} inputs {value!r} related experiment run {name!r} not found."
)
run[name] = experiment_runs[name]
return data, run
@staticmethod
def resolve_column_mapping(node_name: str, column_mapping: dict, experiment_inputs: dict):
"""Resolve column mapping with experiment inputs to constant values."""
logger.info(f"Start resolve node {node_name!r} column mapping.")
if not column_mapping:
return {}
resolved_mapping = {}
for name, value in column_mapping.items():
if not isinstance(value, str) or not value.startswith("${inputs."):
resolved_mapping[name] = value
continue
input_name = value.split(".")[1].replace("}", "")
if input_name not in experiment_inputs:
raise ExperimentValueError(
f"Node {node_name!r} inputs {value!r} related experiment input {input_name!r} not found."
)
resolved_mapping[name] = experiment_inputs[input_name].default
logger.debug(f"Resolved node {node_name!r} column mapping {resolved_mapping}.")
return resolved_mapping
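The binding-string parsing above can be sketched as a small pure function. Note that the real method reads `.default` from input objects; this sketch uses a plain dict of values for brevity, and the mapping below is illustrative:

```python
def resolve_input_bindings(column_mapping, experiment_inputs):
    # Replace "${inputs.<name>}" bindings with the corresponding value;
    # leave non-binding values untouched.
    resolved = {}
    for name, value in column_mapping.items():
        if not isinstance(value, str) or not value.startswith("${inputs."):
            resolved[name] = value
            continue
        input_name = value.split(".")[1].replace("}", "")
        if input_name not in experiment_inputs:
            raise ValueError(f"Experiment input {input_name!r} not found.")
        resolved[name] = experiment_inputs[input_name]
    return resolved

mapping = {"url": "${inputs.url}", "count": 3}
print(resolve_input_bindings(mapping, {"url": "https://abc"}))
# → {'url': 'https://abc', 'count': 3}
```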
@staticmethod
def _is_node_reference(value):
"""Check if value is a node reference."""
return (
isinstance(value, str)
and value.startswith("${")
and not value.startswith("${data.")
and not value.startswith("${inputs.")
)
@staticmethod
def _prepare_single_node_edges(node):
"""Prepare single node name to referenced node name edges mapping."""
node_names = set()
for input_value in node.inputs.values():
if not isinstance(input_value, str):
continue
if ExperimentHelper._is_node_reference(input_value):
referenced_node_name = input_value.split(".")[0].replace("${", "")
node_names.add(referenced_node_name)
return node_names
@staticmethod
def get_experiment_node_edges(nodes, flow_only=False):
"""Get experiment node edges mapping."""
edges = {node.name: ExperimentHelper._prepare_single_node_edges(node) for node in nodes}
if flow_only:
nodes_to_remove = [node for node in nodes if node.type != ExperimentNodeType.FLOW]
ExperimentHelper._remove_nodes_from_active_edges(nodes_to_remove, edges)
return edges
@staticmethod
def _remove_nodes_from_active_edges(nodes_to_remove, edges):
for node in nodes_to_remove:
for referenced_nodes in edges.values():
referenced_nodes.discard(node.name)
del edges[node.name]
@staticmethod
def resolve_nodes_to_execute(experiment, start_nodes=None):
"""Resolve node to execute and ensure nodes order in experiment."""
def _can_remove_node(node):
# No start nodes specified, no edge linked, then node is available.
if not start_nodes:
if node.name in active_edges and len(active_edges[node.name]) == 0:
return True
return False
# Start nodes specified: only successor nodes of already-resolved nodes are available, so edges are required.
if node.name not in active_edges or len(edges[node.name]) == 0:
return False
# All predecessor nodes are resolved, then node is available.
if all(referenced_node not in active_edges for referenced_node in edges[node.name]):
return True
return False
# Perform topological sort to ensure nodes order
nodes = experiment.nodes
resolved_nodes, start_nodes = [], start_nodes or []
edges = ExperimentHelper.get_experiment_node_edges(nodes)
active_edges = copy.deepcopy(edges)
# If start nodes specified, preprocessing them.
ExperimentHelper._remove_nodes_from_active_edges(start_nodes, active_edges)
resolved_nodes.extend(start_nodes)
logger.debug(f"Experiment start node {[node.name for node in start_nodes]}, nodes edges: {active_edges!r}")
while True:
available_nodes = [node for node in nodes if _can_remove_node(node)]
logger.debug(f"Experiment available nodes: {[node.name for node in available_nodes]!r}")
if not available_nodes:
break
ExperimentHelper._remove_nodes_from_active_edges(available_nodes, active_edges)
resolved_nodes.extend(available_nodes)
# If no start nodes specified, all nodes should be visited.
if not start_nodes and len(resolved_nodes) != len(nodes):
raise ExperimentHasCycle(f"Experiment has circular dependency {active_edges!r}")
logger.debug(f"Experiment nodes resolved: {[node.name for node in resolved_nodes]}")
return resolved_nodes
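`resolve_nodes_to_execute` above is essentially Kahn's algorithm with cycle detection; a condensed standalone sketch (node names are illustrative):

```python
import copy

def resolve_order(edges):
    # edges: node name -> set of node names it depends on.
    active = copy.deepcopy(edges)
    resolved = []
    while active:
        # Nodes whose dependencies are all resolved become available.
        available = [n for n, deps in active.items() if not deps]
        if not available:
            # No progress possible: the remaining edges form a cycle.
            raise ValueError(f"Circular dependency: {active!r}")
        for n in available:
            del active[n]
            for deps in active.values():
                deps.discard(n)
        resolved.extend(available)
    return resolved

order = resolve_order({"data": set(), "main": {"data"}, "eval": {"main"}})
print(order)  # → ['data', 'main', 'eval']
```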
@staticmethod
def resolve_binding_from_run(run_name, run, run_operations) -> dict:
"""Return the valid binding dict based on a run."""
binding_dict = {
# to align with cloud behavior, run.inputs should refer to original data
f"{run_name}.inputs": run_operations._get_data_path(run),
}
# Update command node outputs
if run._outputs:
binding_dict.update({f"{run_name}.outputs.{name}": path for name, path in run._outputs.items()})
else:
binding_dict.update({f"{run_name}.outputs": run_operations._get_outputs_path(run)})
logger.debug(f"Resolved node {run_name} binding inputs {binding_dict}.")
return binding_dict
class ExperimentFlowRunSubmitter(RunSubmitter):
"""Experiment run submitter, override some function from RunSubmitter as experiment run could be different."""
@classmethod
def _validate_inputs(cls, run: Run):
# Do not validate run/data field, as we will resolve them in _resolve_input_dirs.
return
def _resolve_input_dirs(self, run: ExperimentNodeRun):
return run._input_data
def submit(self, run: ExperimentNodeRun, stream=False, **kwargs):
try:
run.update_exp_run_node(ExperimentNodeRunStatus.IN_PROGRESS)
self._run_bulk(run=run, stream=stream, **kwargs)
run_info = self.run_operations.get(name=run.name)
run.update_exp_run_node(run_info.status)
return run_info
except Exception as e:
run.update_exp_run_node(ExperimentNodeRunStatus.FAILED)
raise e
class ExperimentCommandSubmitter:
"""Experiment command submitter, responsible for experiment command running."""
def __init__(self, run_operations: RunOperations):
self.run_operations = run_operations
def submit(self, run: ExperimentNodeRun, **kwargs):
"""Submit an experiment command run.
:param run: Experiment command to submit.
:type run: ~promptflow.entities.Run
"""
local_storage = LocalStorageOperations(run, run_mode=RunMode.SingleNode)
self._submit_command_run(run=run, local_storage=local_storage)
return self.run_operations.get(name=run.name)
def _resolve_inputs(self, run: ExperimentNodeRun):
"""Resolve binding inputs to constant values."""
# e.g. "input_path": "${data.my_data}" -> "${inputs.input_path}": "real_data_path"
logger.info("Start resolve node %s inputs.", run.node.name)
logger.debug(f"Resolved node {run.node.name} binding inputs {run._input_data}.")
# resolve inputs
resolved_inputs = {}
for name, value in run.column_mapping.items():
if not isinstance(value, str) or not value.startswith("${"):
resolved_inputs[name] = value
continue
# my_input: "${run.outputs}" -> my_input: run_outputs_path
input_key = value.lstrip("${").rstrip("}")
if input_key in run._input_data:
resolved_inputs[name] = run._input_data[input_key]
continue
logger.warning(
f"Possibly invalid partial input value binding {value!r} found for node {run.node.name!r}. "
"Only full binding is supported for command node. For example: ${data.my_data}, ${main_node.outputs}."
)
resolved_inputs[name] = value
logger.debug(f"Resolved node {run.node.name} inputs {resolved_inputs}.")
return resolved_inputs
def _resolve_outputs(self, run: ExperimentNodeRun):
"""Resolve outputs to real path."""
# e.g. "output_path": "${outputs.my_output}" -> "${outputs.output_path}": "real_output_path"
logger.info("Start resolve node %s outputs.", run.node.name)
# resolve outputs
resolved_outputs = {}
for name, value in run._outputs.items():
# Set default output path if user doesn't set it
if not value:
# Create default output path if user doesn't set it
value = run._output_path / name
value.mkdir(parents=True, exist_ok=True)
value = value.resolve().absolute().as_posix()
# Update default to run
run._outputs[name] = value
# Note: We will do nothing if user config the value, as we don't know it's a file or folder
resolved_outputs[name] = value
logger.debug(f"Resolved node {run.node.name} outputs {resolved_outputs}.")
return resolved_outputs
def _resolve_command(self, run: ExperimentNodeRun, inputs: dict, outputs: dict):
"""Resolve command to real command."""
logger.info("Start resolve node %s command.", run.node.name)
# resolve command
resolved_command = run._command
# replace inputs
for name, value in inputs.items():
resolved_command = resolved_command.replace(f"${{inputs.{name}}}", str(value))
# replace outputs
for name, value in outputs.items():
resolved_command = resolved_command.replace(f"${{outputs.{name}}}", str(value))
logger.debug(f"Resolved node {run.node.name} command {resolved_command}.")
if "${" in resolved_command:
logger.warning(
f"Possibly unresolved command value binding found for node {run.node.name!r}. "
f"Resolved command: {resolved_command}. Please check your command again."
)
return resolved_command
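The placeholder substitution in `_resolve_command` can be illustrated with a standalone sketch (the command string and values below are illustrative):

```python
def resolve_command(command, inputs, outputs):
    # Substitute ${inputs.<name>} and ${outputs.<name>} placeholders with
    # their resolved values; unresolved bindings keep the "${" marker.
    for name, value in inputs.items():
        command = command.replace(f"${{inputs.{name}}}", str(value))
    for name, value in outputs.items():
        command = command.replace(f"${{outputs.{name}}}", str(value))
    return command

cmd = "python train.py --data ${inputs.data} --out ${outputs.model}"
print(resolve_command(cmd, {"data": "/tmp/data"}, {"model": "/tmp/model"}))
# → python train.py --data /tmp/data --out /tmp/model
```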
def _submit_command_run(self, run: ExperimentNodeRun, local_storage: LocalStorageOperations) -> dict:
# resolve environment variables
SubmitterHelper.resolve_environment_variables(environment_variables=run.environment_variables)
SubmitterHelper.init_env(environment_variables=run.environment_variables)
# resolve inputs & outputs for command preparing
# e.g. input_path: ${data.my_data} -> ${inputs.input_path}: real_data_path
inputs = self._resolve_inputs(run)
outputs = self._resolve_outputs(run)
# replace to command
command = self._resolve_command(run, inputs, outputs)
# execute command
status = Status.Failed.value
# create run to db when fully prepared to run in executor, otherwise won't create it
run._dump() # pylint: disable=protected-access
try:
return_code = ExperimentCommandExecutor.run(
command=command, cwd=run.flow, log_path=local_storage.logger.file_path
)
if return_code != 0:
raise ExperimentCommandRunError(
f"Run {run.name} failed with return code {return_code}, "
f"please check out {run.properties[FlowRunProperties.OUTPUT_PATH]} for more details."
)
status = Status.Completed.value
except Exception as e:
# when run failed in executor, store the exception in result and dump to file
logger.warning(f"Run {run.name} failed when executing in executor with exception {e}.")
# for user error, swallow stack trace and return failed run since the user doesn't need the stack trace
if not isinstance(e, UserErrorException):
# for other errors, raise it to user to help debug root cause.
raise e
finally:
self.run_operations.update(
name=run.name,
status=status,
end_time=datetime.now(),
)
class ExperimentCommandExecutor:
"""Experiment command executor, responsible for experiment command running."""
@staticmethod
def run(command: str, cwd: str, log_path: Path):
"""Start a subprocess to run the command"""
logger.info(f"Start running command {command}, log path: {log_path}")
with open(log_path, "w") as log_file:
process = subprocess.Popen(command, stdout=log_file, stderr=log_file, shell=True, env=os.environ, cwd=cwd)
process.wait()
return process.returncode
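`ExperimentCommandExecutor.run` above redirects both streams to a log file. A self-contained sketch of the same pattern, using a temporary directory and the current Python interpreter as the command so it runs anywhere:

```python
import os
import subprocess
import sys
import tempfile

def run_logged(command, cwd, log_path):
    # Run the command in a shell, capturing stdout/stderr to the log file.
    with open(log_path, "w") as log_file:
        process = subprocess.Popen(
            command, stdout=log_file, stderr=log_file,
            shell=True, env=os.environ, cwd=cwd,
        )
        process.wait()
    return process.returncode

log = os.path.join(tempfile.mkdtemp(), "run.log")
code = run_logged(f'"{sys.executable}" -c "print(42)"', os.getcwd(), log)
print(code, open(log).read().strip())  # → 0 42
```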
def add_start_orchestrator_action(subparsers):
"""Add action to start orchestrator."""
start_orchestrator_parser = subparsers.add_parser(
"start",
description="Start orchestrator.",
)
start_orchestrator_parser.add_argument("--experiment", type=str, help="Experiment name")
start_orchestrator_parser.add_argument("--nodes", type=str, help="Nodes to be executed", nargs="+")
start_orchestrator_parser.add_argument("--from-nodes", type=str, help="Nodes branch to be executed", nargs="+")
start_orchestrator_parser.set_defaults(action="start")
if __name__ == "__main__":
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description="Orchestrator operations",
)
subparsers = parser.add_subparsers()
add_start_orchestrator_action(subparsers)
args = parser.parse_args(sys.argv[1:])
if args.action == "start":
from promptflow._sdk._pf_client import PFClient
client = PFClient()
experiment = client._experiments.get(args.experiment)
ExperimentOrchestrator(client, experiment=experiment).start(nodes=args.nodes, from_nodes=args.from_nodes)
| promptflow/src/promptflow/promptflow/_sdk/_submitter/experiment_orchestrator.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_submitter/experiment_orchestrator.py",
"repo_id": "promptflow",
"token_count": 21786
} | 15 |
Exported Dockerfile & its dependencies are located in the same folder. The structure is as below:
- flow: the folder contains all the flow files
- ...
- connections: the folder contains yaml files to create all related connections
- ...
- runit: the folder contains all the runit scripts
- ...
- Dockerfile: the dockerfile to build the image
- start.sh: the script used in `CMD` of `Dockerfile` to start the service
- settings.json: a json file to store the settings of the docker image
- README.md: the readme file to describe how to use the dockerfile
Please refer to [official doc](https://microsoft.github.io/promptflow/how-to-guides/deploy-and-export-a-flow.html#export-a-flow)
for more details about how to use the exported dockerfile and scripts.
| promptflow/src/promptflow/promptflow/_sdk/data/docker/README.md/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/data/docker/README.md",
"repo_id": "promptflow",
"token_count": 206
} | 16 |
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Tool",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"type": {
"$ref": "#/definitions/ToolType"
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/InputDefinition"
}
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/OutputDefinition"
}
},
"description": {
"type": "string"
},
"connection_type": {
"type": "array",
"items": {
"$ref": "#/definitions/ConnectionType"
}
},
"module": {
"type": "string"
},
"class_name": {
"type": "string"
},
"source": {
"type": "string"
},
"LkgCode": {
"type": "string"
},
"code": {
"type": "string"
},
"function": {
"type": "string"
},
"action_type": {
"type": "string"
},
"provider_config": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/InputDefinition"
}
},
"function_config": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/InputDefinition"
}
},
"icon": {},
"category": {
"type": "string"
},
"tags": {
"type": "object",
"additionalProperties": {}
},
"is_builtin": {
"type": "boolean"
},
"package": {
"type": "string"
},
"package_version": {
"type": "string"
},
"default_prompt": {
"type": "string"
},
"enable_kwargs": {
"type": "boolean"
},
"deprecated_tools": {
"type": "array",
"items": {
"type": "string"
}
},
"tool_state": {
"$ref": "#/definitions/ToolState"
}
},
"definitions": {
"ToolType": {
"type": "string",
"description": "",
"x-enumNames": [
"Llm",
"Python",
"Action",
"Prompt",
"CustomLLM",
"CSharp"
],
"enum": [
"llm",
"python",
"action",
"prompt",
"custom_llm",
"csharp"
]
},
"InputDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"type": {
"type": "array",
"items": {
"$ref": "#/definitions/ValueType"
}
},
"default": {},
"description": {
"type": "string"
},
"enum": {
"type": "array",
"items": {
"type": "string"
}
},
"enabled_by": {
"type": "string"
},
"enabled_by_type": {
"type": "array",
"items": {
"$ref": "#/definitions/ValueType"
}
},
"enabled_by_value": {
"type": "array",
"items": {}
},
"model_list": {
"type": "array",
"items": {
"type": "string"
}
},
"capabilities": {
"$ref": "#/definitions/AzureOpenAIModelCapabilities"
},
"dynamic_list": {
"$ref": "#/definitions/ToolInputDynamicList"
},
"allow_manual_entry": {
"type": "boolean"
},
"is_multi_select": {
"type": "boolean"
},
"generated_by": {
"$ref": "#/definitions/ToolInputGeneratedBy"
},
"input_type": {
"$ref": "#/definitions/InputType"
},
"advanced": {
"type": [
"boolean",
"null"
]
},
"ui_hints": {
"type": "object",
"additionalProperties": {}
}
}
},
"ValueType": {
"type": "string",
"description": "",
"x-enumNames": [
"Int",
"Double",
"Bool",
"String",
"Secret",
"PromptTemplate",
"Object",
"List",
"BingConnection",
"OpenAIConnection",
"AzureOpenAIConnection",
"AzureContentModeratorConnection",
"CustomConnection",
"AzureContentSafetyConnection",
"SerpConnection",
"CognitiveSearchConnection",
"SubstrateLLMConnection",
"PineconeConnection",
"QdrantConnection",
"WeaviateConnection",
"FunctionList",
"FunctionStr",
"FormRecognizerConnection",
"FilePath",
"Image"
],
"enum": [
"int",
"double",
"bool",
"string",
"secret",
"prompt_template",
"object",
"list",
"BingConnection",
"OpenAIConnection",
"AzureOpenAIConnection",
"AzureContentModeratorConnection",
"CustomConnection",
"AzureContentSafetyConnection",
"SerpConnection",
"CognitiveSearchConnection",
"SubstrateLLMConnection",
"PineconeConnection",
"QdrantConnection",
"WeaviateConnection",
"function_list",
"function_str",
"FormRecognizerConnection",
"file_path",
"image"
]
},
"AzureOpenAIModelCapabilities": {
"type": "object",
"properties": {
"completion": {
"type": [
"boolean",
"null"
]
},
"chat_completion": {
"type": [
"boolean",
"null"
]
},
"embeddings": {
"type": [
"boolean",
"null"
]
}
}
},
"ToolInputDynamicList": {
"type": "object",
"properties": {
"func_path": {
"type": "string"
},
"func_kwargs": {
"type": "array",
"description": "Sample value in yaml\nfunc_kwargs:\n- name: prefix # Argument name to be passed to the function\n type: \n - string\n # if optional is not specified, default to false.\n # this is for UX pre-validaton. If optional is false, but no input. UX can throw error in advanced.\n optional: true\n reference: ${inputs.index_prefix} # Dynamic reference to another input parameter\n- name: size # Another argument name to be passed to the function\n type: \n - int\n optional: true\n default: 10",
"items": {
"type": "object",
"additionalProperties": {}
}
}
}
},
"ToolInputGeneratedBy": {
"type": "object",
"properties": {
"func_path": {
"type": "string"
},
"func_kwargs": {
"type": "array",
"description": "Sample value in yaml\nfunc_kwargs:\n- name: index_type # Argument name to be passed to the function\n type: \n - string\n optional: true\n reference: ${inputs.index_type} # Dynamic reference to another input parameter\n- name: index # Another argument name to be passed to the function\n type: \n - string\n optional: true\n reference: ${inputs.index}",
"items": {
"type": "object",
"additionalProperties": {}
}
},
"reverse_func_path": {
"type": "string"
}
}
},
"InputType": {
"type": "string",
"description": "",
"x-enumNames": [
"Default",
"UIOnly_Hidden"
],
"enum": [
"default",
"uionly_hidden"
]
},
"OutputDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"type": {
"type": "array",
"items": {
"$ref": "#/definitions/ValueType"
}
},
"description": {
"type": "string"
},
"isProperty": {
"type": "boolean"
}
}
},
"ConnectionType": {
"type": "string",
"description": "",
"x-enumNames": [
"OpenAI",
"AzureOpenAI",
"Serp",
"Bing",
"AzureContentModerator",
"Custom",
"AzureContentSafety",
"CognitiveSearch",
"SubstrateLLM",
"Pinecone",
"Qdrant",
"Weaviate",
"FormRecognizer"
],
"enum": [
"OpenAI",
"AzureOpenAI",
"Serp",
"Bing",
"AzureContentModerator",
"Custom",
"AzureContentSafety",
"CognitiveSearch",
"SubstrateLLM",
"Pinecone",
"Qdrant",
"Weaviate",
"FormRecognizer"
]
},
"ToolState": {
"type": "string",
"description": "",
"x-enumNames": [
"Stable",
"Preview",
"Deprecated"
],
"enum": [
"stable",
"preview",
"deprecated"
]
}
}
}
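A minimal stdlib-only check of a tool dict against the `ToolType` and `ToolState` enums defined in the schema above (a full validation would use a library such as `jsonschema`; the field values below are illustrative):

```python
# Enum values copied from the schema definitions above.
TOOL_TYPES = {"llm", "python", "action", "prompt", "custom_llm", "csharp"}
TOOL_STATES = {"stable", "preview", "deprecated"}

def check_tool(tool):
    # Validate the two enum-constrained fields from the schema.
    errors = []
    if tool.get("type") not in TOOL_TYPES:
        errors.append(f"invalid type: {tool.get('type')!r}")
    if "tool_state" in tool and tool["tool_state"] not in TOOL_STATES:
        errors.append(f"invalid tool_state: {tool['tool_state']!r}")
    return errors

tool = {"name": "my_tool", "type": "python", "tool_state": "preview"}
print(check_tool(tool))  # → []
print(check_tool({"name": "bad", "type": "java"}))  # → ["invalid type: 'java'"]
```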
| promptflow/src/promptflow/promptflow/_sdk/data/tool.schema.json/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/data/tool.schema.json",
"repo_id": "promptflow",
"token_count": 4791
} | 17 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from pathlib import Path
from typing import List, Optional, Union
from promptflow._sdk._constants import MAX_LIST_CLI_RESULTS, ExperimentStatus, ListViewType
from promptflow._sdk._errors import ExperimentExistsError, ExperimentValueError, RunOperationError
from promptflow._sdk._orm.experiment import Experiment as ORMExperiment
from promptflow._sdk._telemetry import ActivityType, TelemetryMixin, monitor_operation
from promptflow._sdk._utils import safe_parse_object_list
from promptflow._sdk.entities._experiment import Experiment
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
class ExperimentOperations(TelemetryMixin):
"""ExperimentOperations."""
def __init__(self, client, **kwargs):
super().__init__(**kwargs)
self._client = client
@monitor_operation(activity_name="pf.experiment.list", activity_type=ActivityType.PUBLICAPI)
def list(
self,
max_results: Optional[int] = MAX_LIST_CLI_RESULTS,
*,
list_view_type: ListViewType = ListViewType.ACTIVE_ONLY,
) -> List[Experiment]:
"""List experiments.
:param max_results: Max number of results to return. Default: 50.
:type max_results: Optional[int]
:param list_view_type: View type for including/excluding (for example) archived experiments.
Default: ACTIVE_ONLY.
:type list_view_type: Optional[ListViewType]
:return: List of experiment objects.
:rtype: List[~promptflow.entities.Experiment]
"""
orm_experiments = ORMExperiment.list(max_results=max_results, list_view_type=list_view_type)
return safe_parse_object_list(
obj_list=orm_experiments,
parser=Experiment._from_orm_object,
message_generator=lambda x: f"Error parsing experiment {x.name!r}, skipped.",
)
@monitor_operation(activity_name="pf.experiment.get", activity_type=ActivityType.PUBLICAPI)
def get(self, name: str) -> Experiment:
"""Get an experiment entity.
:param name: Name of the experiment.
:type name: str
:return: experiment object retrieved from the database.
:rtype: ~promptflow.entities.Experiment
"""
from promptflow._sdk._submitter.experiment_orchestrator import ExperimentOrchestrator
ExperimentOrchestrator.get_status(name)
return Experiment._from_orm_object(ORMExperiment.get(name))
@monitor_operation(activity_name="pf.experiment.create_or_update", activity_type=ActivityType.PUBLICAPI)
def create_or_update(self, experiment: Experiment, **kwargs) -> Experiment:
"""Create or update an experiment.
:param experiment: Experiment object to create or update.
:type experiment: ~promptflow.entities.Experiment
:return: Experiment object created or updated.
:rtype: ~promptflow.entities.Experiment
"""
orm_experiment = experiment._to_orm_object()
try:
orm_experiment.dump()
return self.get(experiment.name)
except ExperimentExistsError:
logger.info(f"Experiment {experiment.name!r} already exists, updating.")
existing_experiment = orm_experiment.get(experiment.name)
existing_experiment.update(
status=orm_experiment.status,
description=orm_experiment.description,
last_start_time=orm_experiment.last_start_time,
last_end_time=orm_experiment.last_end_time,
node_runs=orm_experiment.node_runs,
)
return self.get(experiment.name)
@monitor_operation(activity_name="pf.experiment.start", activity_type=ActivityType.PUBLICAPI)
def start(self, name: str, **kwargs) -> Experiment:
"""Start an experiment.
:param name: Experiment name.
:type name: str
:return: Experiment object started.
:rtype: ~promptflow.entities.Experiment
"""
from promptflow._sdk._submitter.experiment_orchestrator import ExperimentOrchestrator
if not isinstance(name, str):
raise ExperimentValueError(f"Invalid type {type(name)} for name. Must be str.")
experiment = self.get(name)
if experiment.status in [ExperimentStatus.QUEUING, ExperimentStatus.IN_PROGRESS]:
raise RunOperationError(
f"Experiment {experiment.name} is {experiment.status}, cannot be started repeatedly."
)
return ExperimentOrchestrator(self._client, experiment).async_start(**kwargs)
@monitor_operation(activity_name="pf.experiment.stop", activity_type=ActivityType.PUBLICAPI)
def stop(self, name: str, **kwargs) -> Experiment:
"""Stop an experiment.
:param name: Experiment name.
:type name: str
:return: Experiment object stopped.
:rtype: ~promptflow.entities.Experiment
"""
from promptflow._sdk._submitter.experiment_orchestrator import ExperimentOrchestrator
if not isinstance(name, str):
raise ExperimentValueError(f"Invalid type {type(name)} for name. Must be str.")
ExperimentOrchestrator(self._client, self.get(name)).stop()
return self.get(name)
def _test(
self, flow: Union[Path, str], experiment: Union[Path, str], inputs=None, environment_variables=None, **kwargs
):
"""Test flow in experiment.
:param flow: Flow dag yaml file path.
:type flow: Union[Path, str]
:param experiment: Experiment yaml file path.
:type experiment: Union[Path, str]
:param inputs: Input parameters for flow.
:type inputs: dict
:param environment_variables: Environment variables for flow.
:type environment_variables: dict
"""
from .._load_functions import _load_experiment_template
from .._submitter.experiment_orchestrator import ExperimentOrchestrator
experiment_template = _load_experiment_template(experiment)
output_path = kwargs.get("output_path", None)
session = kwargs.get("session", None)
return ExperimentOrchestrator(client=self._client, experiment=None).test(
flow,
experiment_template,
inputs,
environment_variables,
output_path=output_path,
session=session,
)
| promptflow/src/promptflow/promptflow/_sdk/operations/_experiment_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/operations/_experiment_operations.py",
"repo_id": "promptflow",
"token_count": 2536
} | 18 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import os
import typing
import uuid
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.environment_variables import OTEL_EXPORTER_OTLP_ENDPOINT
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from promptflow._constants import (
OTEL_RESOURCE_SERVICE_NAME,
ResourceAttributeFieldName,
SpanAttributeFieldName,
TraceEnvironmentVariableName,
)
from promptflow._core.openai_injector import inject_openai_api
from promptflow._core.operation_context import OperationContext
from promptflow._sdk._constants import PF_TRACE_CONTEXT
from promptflow._sdk._service.utils.utils import is_pfs_service_healthy
from promptflow._utils.logger_utils import get_cli_sdk_logger
_logger = get_cli_sdk_logger()
def start_trace(*, session: typing.Optional[str] = None, **kwargs):
"""Start a tracing session.
This will capture OpenAI and prompt flow related calls and persist traces;
it will also provide a UI url for user to visualize traces details.
Note that this function is still under preview, and may change at any time.
"""
from promptflow._sdk._constants import ContextAttributeKey
from promptflow._sdk._service.utils.utils import get_port_from_config
pfs_port = get_port_from_config(create_if_not_exists=True)
_start_pfs(pfs_port)
_logger.debug("PFS is serving on port %s", pfs_port)
session_id = _provision_session_id(specified_session_id=session)
_logger.debug("current session id is %s", session_id)
operation_context = OperationContext.get_instance()
# honor and set attributes if user has specified
attributes: dict = kwargs.get("attributes", None)
if attributes is not None:
for attr_key, attr_value in attributes.items():
operation_context._add_otel_attributes(attr_key, attr_value)
# prompt flow related, retrieve `experiment` and `referenced.line_run_id`
env_trace_context = os.environ.get(PF_TRACE_CONTEXT, None)
_logger.debug("Read trace context from environment: %s", env_trace_context)
env_attributes = json.loads(env_trace_context).get("attributes") if env_trace_context else {}
experiment = env_attributes.get(ContextAttributeKey.EXPERIMENT, None)
ref_line_run_id = env_attributes.get(ContextAttributeKey.REFERENCED_LINE_RUN_ID, None)
if ref_line_run_id is not None:
operation_context._add_otel_attributes(SpanAttributeFieldName.REFERENCED_LINE_RUN_ID, ref_line_run_id)
# init the global tracer with endpoint
_init_otel_trace_exporter(otlp_port=pfs_port, session_id=session_id, experiment=experiment)
# openai instrumentation
inject_openai_api()
# print user the UI url
ui_url = f"http://localhost:{pfs_port}/v1.0/ui/traces?session={session_id}"
# print to be able to see it in notebook
print(f"You can view the trace from UI url: {ui_url}")
def _start_pfs(pfs_port) -> None:
from promptflow._sdk._service.entry import entry
from promptflow._sdk._service.utils.utils import is_port_in_use
command_args = ["start", "--port", str(pfs_port)]
if is_port_in_use(pfs_port):
_logger.warning(f"Service port {pfs_port} is used.")
if is_pfs_service_healthy(pfs_port) is True:
_logger.info(f"Service is already running on port {pfs_port}.")
return
else:
command_args += ["--force"]
entry(command_args)
def _is_tracer_provider_configured() -> bool:
# if tracer provider is configured, `tracer_provider` should be an instance of `TracerProvider`;
# otherwise, it should be an instance of `ProxyTracerProvider`
tracer_provider = trace.get_tracer_provider()
return isinstance(tracer_provider, TracerProvider)
def _provision_session_id(specified_session_id: typing.Optional[str]) -> str:
# check if session id is configured in tracer provider
configured_session_id = None
if _is_tracer_provider_configured():
tracer_provider: TracerProvider = trace.get_tracer_provider()
configured_session_id = tracer_provider._resource.attributes[ResourceAttributeFieldName.SESSION_ID]
if specified_session_id is None and configured_session_id is None:
# user does not specify and not configured, provision a new one
session_id = str(uuid.uuid4())
elif specified_session_id is None and configured_session_id is not None:
# user does not specify, but already configured, use the configured one
session_id = configured_session_id
elif specified_session_id is not None and configured_session_id is None:
# user specified, but not configured, use the specified one
session_id = specified_session_id
else:
# user specified while configured, log warnings and honor the configured one
session_id = configured_session_id
warning_message = (
f"Session is already configured with id: {session_id!r}, "
"we will honor it within current process; "
"if you expect another session, please specify it in another process."
)
_logger.warning(warning_message)
return session_id
def _create_resource(session_id: str, experiment: typing.Optional[str] = None) -> Resource:
resource_attributes = {
ResourceAttributeFieldName.SERVICE_NAME: OTEL_RESOURCE_SERVICE_NAME,
ResourceAttributeFieldName.SESSION_ID: session_id,
}
if experiment is not None:
resource_attributes[ResourceAttributeFieldName.EXPERIMENT_NAME] = experiment
return Resource(attributes=resource_attributes)
def setup_exporter_from_environ() -> None:
if _is_tracer_provider_configured():
_logger.debug("tracer provider is already configured, skip setting up again.")
return
# get resource values from environment variables and create resource
session_id = os.getenv(TraceEnvironmentVariableName.SESSION_ID)
experiment = os.getenv(TraceEnvironmentVariableName.EXPERIMENT, None)
resource = _create_resource(session_id=session_id, experiment=experiment)
tracer_provider = TracerProvider(resource=resource)
# get OTLP endpoint from environment variable
endpoint = os.getenv(OTEL_EXPORTER_OTLP_ENDPOINT)
otlp_span_exporter = OTLPSpanExporter(endpoint=endpoint)
tracer_provider.add_span_processor(BatchSpanProcessor(otlp_span_exporter))
trace.set_tracer_provider(tracer_provider)
def _init_otel_trace_exporter(otlp_port: str, session_id: str, experiment: typing.Optional[str] = None) -> None:
endpoint = f"http://localhost:{otlp_port}/v1/traces"
os.environ[OTEL_EXPORTER_OTLP_ENDPOINT] = endpoint
os.environ[TraceEnvironmentVariableName.SESSION_ID] = session_id
if experiment is not None:
os.environ[TraceEnvironmentVariableName.EXPERIMENT] = experiment
setup_exporter_from_environ()
| promptflow/src/promptflow/promptflow/_trace/_start_trace.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_trace/_start_trace.py",
"repo_id": "promptflow",
"token_count": 2488
} | 19 |
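The four-way decision in `_provision_session_id` above can be sketched as a standalone helper. This is a simplified illustration (the name `resolve_session_id` is hypothetical; the real function reads the configured id from the active `TracerProvider` resource and logs a warning on conflict):

```python
import uuid
from typing import Optional


def resolve_session_id(specified: Optional[str], configured: Optional[str]) -> str:
    """Mirror the decision table of _provision_session_id (sketch only)."""
    if specified is None and configured is None:
        # neither specified nor configured: provision a fresh id
        return str(uuid.uuid4())
    if configured is not None:
        # an already-configured id always wins, even over a user-specified one
        return configured
    # only the user-specified id exists
    return specified


print(resolve_session_id("user-id", "configured-id"))  # configured-id
```

The key design point is that a configured session id is honored within the current process; specifying a different one only takes effect in a fresh process where no tracer provider has been set up yet.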
import base64
import os
import re
import uuid
from functools import partial
from pathlib import Path
from typing import Any, Callable, Dict
from urllib.parse import urlparse
import requests
from promptflow._utils._errors import InvalidImageInput, LoadMultimediaDataError
from promptflow.contracts.flow import FlowInputDefinition
from promptflow.contracts.multimedia import Image, PFBytes
from promptflow.contracts.tool import ValueType
from promptflow.exceptions import ErrorTarget
MIME_PATTERN = re.compile(r"^data:image/(.*);(path|base64|url)$")
def _get_extension_from_mime_type(mime_type: str):
ext = mime_type.split("/")[-1]
if ext == "*":
return None
return ext
def is_multimedia_dict(multimedia_dict: dict):
if len(multimedia_dict) != 1:
return False
key = list(multimedia_dict.keys())[0]
if re.match(MIME_PATTERN, key):
return True
return False
def _get_multimedia_info(key: str):
match = re.match(MIME_PATTERN, key)
if match:
return match.group(1), match.group(2)
return None, None
def _is_url(value: str):
try:
result = urlparse(value)
return all([result.scheme, result.netloc])
except ValueError:
return False
def _is_base64(value: str):
base64_regex = re.compile(r"^([A-Za-z0-9+/]{4})*(([A-Za-z0-9+/]{2})*(==|[A-Za-z0-9+/]=)?)?$")
if re.match(base64_regex, value):
return True
return False
def _create_image_from_file(f: Path, mime_type: str = None):
with open(f, "rb") as fin:
return Image(fin.read(), mime_type=mime_type)
def _create_image_from_base64(base64_str: str, mime_type: str = None):
image_bytes = base64.b64decode(base64_str)
return Image(image_bytes, mime_type=mime_type)
def _create_image_from_url(url: str, mime_type: str = None):
response = requests.get(url)
if response.status_code == 200:
return Image(response.content, mime_type=mime_type, source_url=url)
else:
raise InvalidImageInput(
message_format="Failed to fetch image from URL: {url}. Error code: {error_code}. "
"Error message: {error_message}.",
target=ErrorTarget.EXECUTOR,
url=url,
error_code=response.status_code,
error_message=response.text,
)
def _create_image_from_dict(image_dict: dict):
for k, v in image_dict.items():
format, resource = _get_multimedia_info(k)
if resource == "path":
return _create_image_from_file(Path(v), mime_type=f"image/{format}")
elif resource == "base64":
if _is_base64(v):
return _create_image_from_base64(v, mime_type=f"image/{format}")
else:
raise InvalidImageInput(
message_format=f"Invalid base64 image: {v}.",
target=ErrorTarget.EXECUTOR,
)
elif resource == "url":
return _create_image_from_url(v, mime_type=f"image/{format}")
else:
raise InvalidImageInput(
message_format=f"Unsupported image resource: {resource}. "
"Supported resources are [path, base64, url].",
target=ErrorTarget.EXECUTOR,
)
def _create_image_from_string(value: str):
if _is_base64(value):
return _create_image_from_base64(value)
elif _is_url(value):
return _create_image_from_url(value)
else:
return _create_image_from_file(Path(value))
def create_image(value: Any):
if isinstance(value, PFBytes):
return value
elif isinstance(value, dict):
if is_multimedia_dict(value):
return _create_image_from_dict(value)
else:
raise InvalidImageInput(
message_format="Invalid image input format. The image input should be a dictionary like: "
"{{data:image/<image_type>;[path|base64|url]: <image_data>}}.",
target=ErrorTarget.EXECUTOR,
)
elif isinstance(value, str):
if not value:
raise InvalidImageInput(message_format="The image input should not be empty.", target=ErrorTarget.EXECUTOR)
return _create_image_from_string(value)
else:
raise InvalidImageInput(
message_format=f"Unsupported image input type: {type(value)}. "
"The image input should be a string or a dictionary.",
target=ErrorTarget.EXECUTOR,
)
def _save_image_to_file(
image: Image, file_name: str, folder_path: Path, relative_path: Path = None, use_absolute_path=False
):
ext = _get_extension_from_mime_type(image._mime_type)
file_name = f"{file_name}.{ext}" if ext else file_name
image_path = (relative_path / file_name).as_posix() if relative_path else file_name
if use_absolute_path:
image_path = Path(folder_path / image_path).resolve().as_posix()
image_reference = {f"data:{image._mime_type};path": image_path}
path = folder_path / relative_path if relative_path else folder_path
os.makedirs(path, exist_ok=True)
with open(os.path.join(path, file_name), "wb") as file:
file.write(image)
return image_reference
def get_file_reference_encoder(folder_path: Path, relative_path: Path = None, *, use_absolute_path=False) -> Callable:
def pfbytes_file_reference_encoder(obj):
    """Dumps PFBytes to a file and returns its reference."""
    # Check the type first: accessing obj.source_url on a non-PFBytes object
    # would raise AttributeError instead of the intended TypeError.
    if isinstance(obj, PFBytes):
        if obj.source_url:
            return {f"data:{obj._mime_type};url": obj.source_url}
        file_name = str(uuid.uuid4())
        # If use_absolute_path is True, the image file path in image dictionary will be absolute path.
        return _save_image_to_file(obj, file_name, folder_path, relative_path, use_absolute_path)
    raise TypeError(f"Not supported to dump type '{type(obj).__name__}'.")
return pfbytes_file_reference_encoder
def default_json_encoder(obj):
if isinstance(obj, PFBytes):
return str(obj)
else:
raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")
def persist_multimedia_data(value: Any, base_dir: Path, sub_dir: Path = None):
pfbytes_file_reference_encoder = get_file_reference_encoder(base_dir, sub_dir)
serialization_funcs = {Image: partial(Image.serialize, **{"encoder": pfbytes_file_reference_encoder})}
return _process_recursively(value, process_funcs=serialization_funcs)
def convert_multimedia_data_to_base64(value: Any, with_type=False, dict_type=False):
to_base64_funcs = {PFBytes: partial(PFBytes.to_base64, **{"with_type": with_type, "dict_type": dict_type})}
return _process_recursively(value, process_funcs=to_base64_funcs)
# TODO: Move this function to a more general place and integrate serialization to this function.
def _process_recursively(value: Any, process_funcs: Dict[type, Callable] = None, inplace: bool = False) -> dict:
if process_funcs:
for cls, f in process_funcs.items():
if isinstance(value, cls):
return f(value)
if isinstance(value, list):
if inplace:
for i in range(len(value)):
value[i] = _process_recursively(value[i], process_funcs, inplace)
else:
return [_process_recursively(v, process_funcs, inplace) for v in value]
elif isinstance(value, dict):
if inplace:
for k, v in value.items():
value[k] = _process_recursively(v, process_funcs, inplace)
else:
return {k: _process_recursively(v, process_funcs, inplace) for k, v in value.items()}
return value
def load_multimedia_data(inputs: Dict[str, FlowInputDefinition], line_inputs: dict):
updated_inputs = dict(line_inputs or {})
for key, value in inputs.items():
try:
if value.type == ValueType.IMAGE:
if isinstance(updated_inputs[key], list):
# For aggregation node, the image input is a list.
updated_inputs[key] = [create_image(item) for item in updated_inputs[key]]
else:
updated_inputs[key] = create_image(updated_inputs[key])
elif value.type == ValueType.LIST or value.type == ValueType.OBJECT:
updated_inputs[key] = load_multimedia_data_recursively(updated_inputs[key])
except Exception as ex:
error_type_and_message = f"({ex.__class__.__name__}) {ex}"
raise LoadMultimediaDataError(
message_format="Failed to load image for input '{key}': {error_type_and_message}",
key=key,
error_type_and_message=error_type_and_message,
target=ErrorTarget.EXECUTOR,
) from ex
return updated_inputs
def load_multimedia_data_recursively(value: Any):
return _process_multimedia_dict_recursively(value, _create_image_from_dict)
def resolve_multimedia_data_recursively(input_dir: Path, value: Any):
process_func = partial(resolve_image_path, **{"input_dir": input_dir})
return _process_multimedia_dict_recursively(value, process_func)
def _process_multimedia_dict_recursively(value: Any, process_func: Callable) -> dict:
if isinstance(value, list):
return [_process_multimedia_dict_recursively(item, process_func) for item in value]
elif isinstance(value, dict):
if is_multimedia_dict(value):
return process_func(**{"image_dict": value})
else:
return {k: _process_multimedia_dict_recursively(v, process_func) for k, v in value.items()}
else:
return value
def resolve_image_path(input_dir: Path, image_dict: dict):
"""Resolve image path to absolute path in image dict"""
input_dir = input_dir.parent if input_dir.is_file() else input_dir
if is_multimedia_dict(image_dict):
for key in image_dict:
_, resource = _get_multimedia_info(key)
if resource == "path":
image_dict[key] = str(input_dir / image_dict[key])
return image_dict
| promptflow/src/promptflow/promptflow/_utils/multimedia_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/multimedia_utils.py",
"repo_id": "promptflow",
"token_count": 4305
} | 20 |
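The multimedia helpers above all hinge on `MIME_PATTERN` classifying dictionary keys into an image format and a resource kind. A quick standalone sketch of that parsing (the regex is copied verbatim so the snippet runs without promptflow installed; the function name `parse_multimedia_key` is illustrative):

```python
import re

# same pattern as multimedia_utils.MIME_PATTERN
MIME_PATTERN = re.compile(r"^data:image/(.*);(path|base64|url)$")


def parse_multimedia_key(key: str):
    """Return (format, resource) for a matching key, else (None, None)."""
    match = MIME_PATTERN.match(key)
    if match:
        return match.group(1), match.group(2)
    return None, None


print(parse_multimedia_key("data:image/png;base64"))  # ('png', 'base64')
```

A dict like `{"data:image/png;path": "images/logo.png"}` is therefore recognized as a single-key multimedia dict with format `png` and resource `path`, which is how `_create_image_from_dict` dispatches to the file, base64, or URL loaders.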
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from dataclasses import asdict, dataclass
from promptflow.azure._restclient.flow.models import ConnectionConfigSpec as RestConnectionConfigSpec
from promptflow.azure._restclient.flow.models import WorkspaceConnectionSpec as RestWorkspaceConnectionSpec
@dataclass
class ConnectionConfigSpec:
name: str
display_name: str
config_value_type: str
default_value: str = None
description: str = None
enum_values: list = None
is_optional: bool = False
@classmethod
def _from_rest_object(cls, rest_obj: RestConnectionConfigSpec):
return cls(
name=rest_obj.name,
display_name=rest_obj.display_name,
config_value_type=rest_obj.config_value_type,
default_value=rest_obj.default_value,
description=rest_obj.description,
enum_values=rest_obj.enum_values,
is_optional=rest_obj.is_optional,
)
def _to_dict(self):
return asdict(self, dict_factory=lambda x: {k: v for (k, v) in x if v is not None})
@dataclass
class WorkspaceConnectionSpec:
module: str
connection_type: str # Connection type example: AzureOpenAI
flow_value_type: str # Flow value type is the input.type on node, example: AzureOpenAIConnection
config_specs: list = None
@classmethod
def _from_rest_object(cls, rest_obj: RestWorkspaceConnectionSpec):
return cls(
config_specs=[
ConnectionConfigSpec._from_rest_object(config_spec) for config_spec in (rest_obj.config_specs or [])
],
module=rest_obj.module,
connection_type=rest_obj.connection_type,
flow_value_type=rest_obj.flow_value_type,
)
def _to_dict(self):
return asdict(self, dict_factory=lambda x: {k: v for (k, v) in x if v is not None})
| promptflow/src/promptflow/promptflow/azure/_entities/_workspace_connection_spec.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_entities/_workspace_connection_spec.py",
"repo_id": "promptflow",
"token_count": 771
} | 21 |
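Both `_to_dict` methods above use `asdict`'s `dict_factory` hook to drop `None`-valued fields from the serialized output. A minimal self-contained illustration of that pattern (the `Spec` class here is hypothetical, standing in for `ConnectionConfigSpec`):

```python
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class Spec:
    name: str
    description: Optional[str] = None


def to_clean_dict(obj) -> dict:
    # dict_factory receives a list of (key, value) pairs; filtering here
    # omits unset optional fields from the resulting dict entirely.
    return asdict(obj, dict_factory=lambda pairs: {k: v for k, v in pairs if v is not None})


print(to_clean_dict(Spec(name="api_key")))  # {'name': 'api_key'}
```

This keeps serialized connection specs compact: optional fields that were never populated from the REST object simply do not appear, rather than showing up as explicit `null`s.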
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator_async import distributed_trace_async
from ... import models as _models
from ..._vendor import _convert_request
from ...operations._bulk_runs_operations import build_cancel_flow_run_request, build_get_flow_child_runs_request, build_get_flow_node_run_base_path_request, build_get_flow_node_runs_request, build_get_flow_run_info_request, build_get_flow_run_log_content_request, build_submit_bulk_run_request
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class BulkRunsOperations:
"""BulkRunsOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace_async
async def submit_bulk_run(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
body: Optional["_models.SubmitBulkRunRequest"] = None,
**kwargs: Any
) -> str:
"""submit_bulk_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param body:
:type body: ~flow.models.SubmitBulkRunRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'SubmitBulkRunRequest')
else:
_json = None
request = build_submit_bulk_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
template_url=self.submit_bulk_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('str', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('str', pipeline_response)
if response.status_code == 204:
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
submit_bulk_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/submit'} # type: ignore
@distributed_trace_async
async def cancel_flow_run(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_run_id: str,
**kwargs: Any
) -> str:
"""cancel_flow_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_run_id:
:type flow_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_cancel_flow_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_run_id=flow_run_id,
template_url=self.cancel_flow_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
cancel_flow_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/cancel'} # type: ignore
@distributed_trace_async
async def get_flow_run_info(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_run_id: str,
**kwargs: Any
) -> "_models.FlowRunInfo":
"""get_flow_run_info.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_run_id:
:type flow_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowRunInfo, or the result of cls(response)
:rtype: ~flow.models.FlowRunInfo
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowRunInfo"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_run_info_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_run_id=flow_run_id,
template_url=self.get_flow_run_info.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowRunInfo', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_run_info.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}'} # type: ignore
@distributed_trace_async
async def get_flow_child_runs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_run_id: str,
index: Optional[int] = None,
start_index: Optional[int] = None,
end_index: Optional[int] = None,
**kwargs: Any
) -> List[Any]:
"""get_flow_child_runs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_run_id:
:type flow_run_id: str
:param index:
:type index: int
:param start_index:
:type start_index: int
:param end_index:
:type end_index: int
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of any, or the result of cls(response)
:rtype: list[any]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[Any]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_child_runs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_run_id=flow_run_id,
index=index,
start_index=start_index,
end_index=end_index,
template_url=self.get_flow_child_runs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[object]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_child_runs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/childRuns'} # type: ignore
@distributed_trace_async
async def get_flow_node_runs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_run_id: str,
node_name: str,
index: Optional[int] = None,
start_index: Optional[int] = None,
end_index: Optional[int] = None,
aggregation: Optional[bool] = False,
**kwargs: Any
) -> List[Any]:
"""get_flow_node_runs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_run_id:
:type flow_run_id: str
:param node_name:
:type node_name: str
:param index:
:type index: int
:param start_index:
:type start_index: int
:param end_index:
:type end_index: int
:param aggregation:
:type aggregation: bool
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of any, or the result of cls(response)
:rtype: list[any]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[Any]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_node_runs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_run_id=flow_run_id,
node_name=node_name,
index=index,
start_index=start_index,
end_index=end_index,
aggregation=aggregation,
template_url=self.get_flow_node_runs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[object]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_node_runs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/nodeRuns/{nodeName}'} # type: ignore
@distributed_trace_async
async def get_flow_node_run_base_path(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_run_id: str,
node_name: str,
**kwargs: Any
) -> "_models.FlowRunBasePath":
"""get_flow_node_run_base_path.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_run_id:
:type flow_run_id: str
:param node_name:
:type node_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowRunBasePath, or the result of cls(response)
:rtype: ~flow.models.FlowRunBasePath
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowRunBasePath"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_node_run_base_path_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_run_id=flow_run_id,
node_name=node_name,
template_url=self.get_flow_node_run_base_path.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowRunBasePath', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_node_run_base_path.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/nodeRuns/{nodeName}/basePath'} # type: ignore
@distributed_trace_async
async def get_flow_run_log_content(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_run_id: str,
**kwargs: Any
) -> str:
"""get_flow_run_log_content.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_run_id:
:type flow_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_run_log_content_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_run_id=flow_run_id,
template_url=self.get_flow_run_log_content.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_run_log_content.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/logContent'} # type: ignore
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_bulk_runs_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_bulk_runs_operations.py",
"repo_id": "promptflow",
"token_count": 8658
} | 22 |
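Every operation in this generated client ends with the same `cls` convention: if the caller passed a `cls` callable in kwargs, it receives the pipeline response, the deserialized body, and a headers dict, and its return value replaces the default result. A reduced sketch of just that tail (the `finalize` name is hypothetical; real operations pass the actual `PipelineResponse`):

```python
from typing import Any, Callable, Optional


def finalize(pipeline_response: Any, deserialized: Any, **kwargs: Any) -> Any:
    """Mimic the tail of each generated operation: honor an optional cls override."""
    cls: Optional[Callable] = kwargs.pop("cls", None)
    if cls:
        # cls gets (pipeline_response, deserialized_body, response_headers)
        return cls(pipeline_response, deserialized, {})
    return deserialized


print(finalize(None, "run-id-123"))  # run-id-123
```

This hook lets callers intercept the raw pipeline response (for example, to read headers or status codes) without the SDK exposing a separate "raw" variant of each operation.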
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from ._bulk_runs_operations import BulkRunsOperations
from ._connection_operations import ConnectionOperations
from ._connections_operations import ConnectionsOperations
from ._flow_runtimes_operations import FlowRuntimesOperations
from ._flow_runtimes_workspace_independent_operations import FlowRuntimesWorkspaceIndependentOperations
from ._flows_operations import FlowsOperations
from ._flow_sessions_operations import FlowSessionsOperations
from ._flows_provider_operations import FlowsProviderOperations
from ._tools_operations import ToolsOperations
from ._trace_sessions_operations import TraceSessionsOperations
__all__ = [
'BulkRunsOperations',
'ConnectionOperations',
'ConnectionsOperations',
'FlowRuntimesOperations',
'FlowRuntimesWorkspaceIndependentOperations',
'FlowsOperations',
'FlowSessionsOperations',
'FlowsProviderOperations',
'ToolsOperations',
'TraceSessionsOperations',
]
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/__init__.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/__init__.py",
"repo_id": "promptflow",
"token_count": 346
} | 23 |
{
"openapi": "3.0.1",
"info": {
"title": "Azure Machine Learning Designer Service Client",
"version": "1.0.0"
},
"paths": {
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/submit": {
"post": {
"tags": [
"BulkRuns"
],
"operationId": "BulkRuns_SubmitBulkRun",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/SubmitBulkRunRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"202": {
"description": "Accepted",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"204": {
"description": "No Content",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/cancel": {
"post": {
"tags": [
"BulkRuns"
],
"operationId": "BulkRuns_CancelFlowRun",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}": {
"get": {
"tags": [
"BulkRuns"
],
"operationId": "BulkRuns_GetFlowRunInfo",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRunInfo"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/childRuns": {
"get": {
"tags": [
"BulkRuns"
],
"operationId": "BulkRuns_GetFlowChildRuns",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "index",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "startIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "endIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": { }
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/nodeRuns/{nodeName}": {
"get": {
"tags": [
"BulkRuns"
],
"operationId": "BulkRuns_GetFlowNodeRuns",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "nodeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "index",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "startIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "endIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "aggregation",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": { }
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/nodeRuns/{nodeName}/basePath": {
"get": {
"tags": [
"BulkRuns"
],
"operationId": "BulkRuns_GetFlowNodeRunBasePath",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "nodeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRunBasePath"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/BulkRuns/{flowRunId}/logContent": {
"get": {
"tags": [
"BulkRuns"
],
"operationId": "BulkRuns_GetFlowRunLogContent",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connection/{connectionName}": {
"post": {
"tags": [
"Connection"
],
"operationId": "Connection_CreateConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateOrUpdateConnectionRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionEntity"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"put": {
"tags": [
"Connection"
],
"operationId": "Connection_UpdateConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateOrUpdateConnectionRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionEntity"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"get": {
"tags": [
"Connection"
],
"operationId": "Connection_GetConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionEntity"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"delete": {
"tags": [
"Connection"
],
"operationId": "Connection_DeleteConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "connectionScope",
"in": "query",
"schema": {
"$ref": "#/components/schemas/ConnectionScope"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionEntity"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connection": {
"get": {
"tags": [
"Connection"
],
"operationId": "Connection_ListConnections",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionEntity"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connection/specs": {
"get": {
"tags": [
"Connection"
],
"operationId": "Connection_ListConnectionSpecs",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionSpec"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}": {
"post": {
"tags": [
"Connections"
],
"operationId": "Connections_CreateConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateOrUpdateConnectionRequestDto"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"put": {
"tags": [
"Connections"
],
"operationId": "Connections_UpdateConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateOrUpdateConnectionRequestDto"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"get": {
"tags": [
"Connections"
],
"operationId": "Connections_GetConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"delete": {
"tags": [
"Connections"
],
"operationId": "Connections_DeleteConnection",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}/listsecrets": {
"get": {
"tags": [
"Connections"
],
"operationId": "Connections_GetConnectionWithSecrets",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ConnectionDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections": {
"get": {
"tags": [
"Connections"
],
"operationId": "Connections_ListConnections",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionDto"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/specs": {
"get": {
"tags": [
"Connections"
],
"operationId": "Connections_ListConnectionSpecs",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/WorkspaceConnectionSpec"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}/AzureOpenAIDeployments": {
"get": {
"tags": [
"Connections"
],
"operationId": "Connections_ListAzureOpenAIDeployments",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "connectionName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AzureOpenAIDeploymentDto"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRuntimes/{runtimeName}": {
"post": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_CreateRuntime",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "runtimeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "asyncCall",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
},
{
"name": "msiToken",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
},
{
"name": "skipPortCheck",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateFlowRuntimeRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRuntimeDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"put": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_UpdateRuntime",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "runtimeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "asyncCall",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
},
{
"name": "msiToken",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
},
{
"name": "skipPortCheck",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UpdateFlowRuntimeRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRuntimeDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"get": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_GetRuntime",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "runtimeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRuntimeDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"delete": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_DeleteRuntime",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "runtimeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "asyncCall",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
},
{
"name": "msiToken",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRuntimeDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRuntimes/checkCiAvailability": {
"get": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_CheckCiAvailability",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "computeInstanceName",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "customAppName",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/AvailabilityResponse"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRuntimes/checkMirAvailability": {
"get": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_CheckMirAvailability",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "endpointName",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "deploymentName",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/AvailabilityResponse"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRuntimes/{runtimeName}/needUpgrade": {
"get": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_CheckRuntimeUpgrade",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "runtimeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "boolean"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRuntimes/{runtimeName}/capability": {
"get": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_GetRuntimeCapability",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "runtimeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRuntimeCapability"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRuntimes/latestConfig": {
"get": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_GetRuntimeLatestConfig",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/RuntimeConfiguration"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRuntimes": {
"get": {
"tags": [
"FlowRuntimes"
],
"operationId": "FlowRuntimes_ListRuntimes",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowRuntimeDto"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/runtimes/latestConfig": {
"get": {
"tags": [
"FlowRuntimesWorkspaceIndependent"
],
"operationId": "FlowRuntimesWorkspaceIndependent_GetRuntimeLatestConfig",
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/RuntimeConfiguration"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_CreateFlow",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "experimentId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateFlowRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_ListFlows",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "experimentId",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "ownedOnly",
"in": "query",
"schema": {
"type": "boolean"
}
},
{
"name": "flowType",
"in": "query",
"schema": {
"$ref": "#/components/schemas/FlowType"
}
},
{
"name": "listViewType",
"in": "query",
"schema": {
"$ref": "#/components/schemas/ListViewType"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowBaseDto"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}": {
"put": {
"tags": [
"Flows"
],
"operationId": "Flows_UpdateFlow",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UpdateFlowRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"patch": {
"tags": [
"Flows"
],
"operationId": "Flows_PatchFlow",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json-patch+json": {
"schema": {
"$ref": "#/components/schemas/PatchFlowRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlow",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/submit": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_SubmitFlow",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "endpointName",
"in": "query",
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/SubmitFlowRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRunResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/{flowRunId}/status": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowRunStatus",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRunResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowRunInfo",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRunInfo"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/childRuns": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowChildRuns",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "index",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "startIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "endIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": { }
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/nodeRuns/{nodeName}": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowNodeRuns",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "nodeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "index",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "startIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "endIndex",
"in": "query",
"schema": {
"type": "integer",
"format": "int32"
}
},
{
"name": "aggregation",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": { }
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/nodeRuns/{nodeName}/basePath": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowNodeRunBasePath",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "nodeName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowRunBasePath"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/bulkTests": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_ListBulkTests",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/BulkTestDto"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/bulkTests/{bulkTestId}": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetBulkTest",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "bulkTestId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BulkTestDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/samples": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetSamples",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "useSnapshot",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowSampleDto"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/evaluateSamples": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetEvaluateFlowSamples",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "useSnapshot",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowSampleDto"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/DeployReservedEnvironmentVariableNames": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowDeployReservedEnvironmentVariableNames",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/deploy": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_DeployFlow",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "asyncCall",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
},
{
"name": "msiToken",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/DeployFlowRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/logContent": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowRunLogContent",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/runs/{flowRunId}/cancel": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_CancelFlowRun",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/flowTests/{flowRunId}/cancel": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_CancelFlowTest",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/bulkTests/{bulkTestRunId}/cancel": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_CancelBulkTestRun",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "bulkTestRunId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/FlowSnapshot": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowSnapshot",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateFlowRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowSnapshot"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/connectionOverride": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_GetConnectionOverrideSettings",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "runtimeName",
"in": "query",
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowGraphReference"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionOverrideSetting"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/flowInputs": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowInputs",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowGraphReference"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowInputDefinition"
},
"description": "This is a dictionary"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/LoadAsComponent": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_LoadAsComponent",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/LoadFlowAsComponentRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/flowTools": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowTools",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "flowRuntimeName",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowToolsDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions": {
"post": {
"tags": [
"Flows"
],
"operationId": "Flows_SetupFlowSession",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/SetupFlowSessionRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"202": {
"description": "Accepted",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"delete": {
"tags": [
"Flows"
],
"operationId": "Flows_DeleteFlowSession",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"202": {
"description": "Accepted",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions/status": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_GetFlowSessionStatus",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/FlowSessionDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions/pipPackages": {
"get": {
"tags": [
"Flows"
],
"operationId": "Flows_ListFlowSessionPipPackages",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "experimentId",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}": {
"post": {
"tags": [
"FlowSessions"
],
"operationId": "FlowSessions_CreateFlowSession",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "sessionId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CreateFlowSessionRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"202": {
"description": "Accepted",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"get": {
"tags": [
"FlowSessions"
],
"operationId": "FlowSessions_GetFlowSession",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "sessionId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/GetTrainingSessionDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
},
"delete": {
"tags": [
"FlowSessions"
],
"operationId": "FlowSessions_DeleteFlowSession",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "sessionId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"202": {
"description": "Accepted",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/pipPackages": {
"get": {
"tags": [
"FlowSessions"
],
"operationId": "FlowSessions_ListFlowSessionPipPackages",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "sessionId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}": {
"get": {
"tags": [
"FlowSessions"
],
"operationId": "FlowSessions_PollOperationStatus",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "sessionId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "actionType",
"in": "path",
"required": true,
"schema": {
"$ref": "#/components/schemas/SetupFlowSessionAction"
}
},
{
"name": "location",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "operationId",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "api-version",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "type",
"in": "query",
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools": {
"get": {
"tags": [
"FlowSessions"
],
"operationId": "FlowSessions_GetStandbyPools",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StandbyPoolProperties"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/v1.0/flows/getIndexEntities": {
"post": {
"tags": [
"FlowsProvider"
],
"operationId": "FlowsProvider_GetIndexEntityById",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UnversionedEntityRequestDto"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UnversionedEntityResponseDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/v1.0/flows/rebuildIndex": {
"post": {
"tags": [
"FlowsProvider"
],
"operationId": "FlowsProvider_GetUpdatedEntityIdsForWorkspace",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UnversionedRebuildIndexDto"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/UnversionedRebuildResponseDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Tools/setting": {
"get": {
"tags": [
"Tools"
],
"operationId": "Tools_GetToolSetting",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ToolSetting"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Tools/samples": {
"get": {
"tags": [
"Tools"
],
"operationId": "Tools_GetSamples",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/Tool"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Tools/meta": {
"post": {
"tags": [
"Tools"
],
"operationId": "Tools_GetToolMeta",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "toolName",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "toolType",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "endpointName",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "flowRuntimeName",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "flowId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Tools/meta-v2": {
"post": {
"tags": [
"Tools"
],
"operationId": "Tools_GetToolMetaV2",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRuntimeName",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "flowId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/GenerateToolMetaRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ToolMetaDto"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Tools/packageTools": {
"get": {
"tags": [
"Tools"
],
"operationId": "Tools_GetPackageTools",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRuntimeName",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "flowId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/Tool"
}
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Tools/dynamicList": {
"post": {
"tags": [
"Tools"
],
"operationId": "Tools_GetDynamicList",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRuntimeName",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "flowId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/GetDynamicListRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": { }
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Tools/RetrieveToolFuncResult": {
"post": {
"tags": [
"Tools"
],
"operationId": "Tools_RetrieveToolFuncResult",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "flowRuntimeName",
"in": "query",
"schema": {
"type": "string"
}
},
{
"name": "flowId",
"in": "query",
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/RetrieveToolFuncResultRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ToolFuncResponse"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/TraceSessions/attachDb": {
"post": {
"tags": [
"TraceSessions"
],
"operationId": "TraceSessions_AttachCosmosAccount",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "overwrite",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/AttachCosmosRequest"
}
}
}
},
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/IActionResult"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/TraceSessions/container/{containerName}/resourceToken": {
"get": {
"tags": [
"TraceSessions"
],
"operationId": "TraceSessions_GetCosmosResourceToken",
"parameters": [
{
"$ref": "#/components/parameters/subscriptionIdParameter"
},
{
"$ref": "#/components/parameters/resourceGroupNameParameter"
},
{
"$ref": "#/components/parameters/workspaceNameParameter"
},
{
"name": "containerName",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "acquireWrite",
"in": "query",
"schema": {
"type": "boolean",
"default": false
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "string"
}
}
}
},
"default": {
"description": "Error response describing why the operation failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"ACIAdvanceSettings": {
"type": "object",
"properties": {
"containerResourceRequirements": {
"$ref": "#/components/schemas/ContainerResourceRequirements"
},
"appInsightsEnabled": {
"type": "boolean",
"nullable": true
},
"sslEnabled": {
"type": "boolean",
"nullable": true
},
"sslCertificate": {
"type": "string",
"nullable": true
},
"sslKey": {
"type": "string",
"nullable": true
},
"cName": {
"type": "string",
"nullable": true
},
"dnsNameLabel": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AEVAAssetType": {
"enum": [
"UriFile",
"UriFolder",
"MLTable",
"CustomModel",
"MLFlowModel",
"TritonModel",
"OpenAIModel"
],
"type": "string"
},
"AEVAComputeConfiguration": {
"type": "object",
"properties": {
"target": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"isLocal": {
"type": "boolean"
},
"location": {
"type": "string",
"nullable": true
},
"isClusterless": {
"type": "boolean"
},
"instanceType": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"isPreemptable": {
"type": "boolean"
}
},
"additionalProperties": false
},
"AEVADataStoreMode": {
"enum": [
"None",
"Mount",
"Download",
"Upload",
"Direct",
"Hdfs",
"Link"
],
"type": "string"
},
"AEVAIdentityType": {
"enum": [
"UserIdentity",
"Managed",
"AMLToken"
],
"type": "string"
},
"AEVAResourceConfiguration": {
"type": "object",
"properties": {
"instanceCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"instanceType": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"locations": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"instancePriority": {
"type": "string",
"nullable": true
},
"quotaEnforcementResourceId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AISuperComputerConfiguration": {
"type": "object",
"properties": {
"instanceType": {
"type": "string",
"nullable": true
},
"instanceTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"imageVersion": {
"type": "string",
"nullable": true
},
"location": {
"type": "string",
"nullable": true
},
"locations": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"aiSuperComputerStorageData": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AISuperComputerStorageReferenceConfiguration"
},
"nullable": true
},
"interactive": {
"type": "boolean"
},
"scalePolicy": {
"$ref": "#/components/schemas/AISuperComputerScalePolicy"
},
"virtualClusterArmId": {
"type": "string",
"nullable": true
},
"tensorboardLogDirectory": {
"type": "string",
"nullable": true
},
"sshPublicKey": {
"type": "string",
"nullable": true
},
"sshPublicKeys": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enableAzmlInt": {
"type": "boolean"
},
"priority": {
"type": "string",
"nullable": true
},
"slaTier": {
"type": "string",
"nullable": true
},
"suspendOnIdleTimeHours": {
"type": "integer",
"format": "int64",
"nullable": true
},
"userAlias": {
"type": "string",
"nullable": true
},
"quotaEnforcementResourceId": {
"type": "string",
"nullable": true
},
"modelComputeSpecificationId": {
"type": "string",
"nullable": true
},
"groupPolicyName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AISuperComputerScalePolicy": {
"type": "object",
"properties": {
"autoScaleInstanceTypeCountSet": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"autoScaleIntervalInSec": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxInstanceTypeCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"minInstanceTypeCount": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AISuperComputerStorageReferenceConfiguration": {
"type": "object",
"properties": {
"containerName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AKSAdvanceSettings": {
"type": "object",
"properties": {
"autoScaler": {
"$ref": "#/components/schemas/AutoScaler"
},
"containerResourceRequirements": {
"$ref": "#/components/schemas/ContainerResourceRequirements"
},
"appInsightsEnabled": {
"type": "boolean",
"nullable": true
},
"scoringTimeoutMs": {
"type": "integer",
"format": "int32",
"nullable": true
},
"numReplicas": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AKSReplicaStatus": {
"type": "object",
"properties": {
"desiredReplicas": {
"type": "integer",
"format": "int32"
},
"updatedReplicas": {
"type": "integer",
"format": "int32"
},
"availableReplicas": {
"type": "integer",
"format": "int32"
},
"error": {
"$ref": "#/components/schemas/ModelManagementErrorResponse"
}
},
"additionalProperties": false
},
"AMLComputeConfiguration": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"vmPriority": {
"$ref": "#/components/schemas/VmPriority"
},
"retainCluster": {
"type": "boolean"
},
"clusterMaxNodeCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"osType": {
"type": "string",
"nullable": true
},
"virtualMachineImage": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"APCloudConfiguration": {
"type": "object",
"properties": {
"referencedAPModuleGuid": {
"type": "string",
"nullable": true
},
"userAlias": {
"type": "string",
"nullable": true
},
"aetherModuleType": {
"type": "string",
"nullable": true
},
"allowOverwrite": {
"type": "boolean",
"nullable": true
},
"destinationExpirationDays": {
"type": "integer",
"format": "int32",
"nullable": true
},
"shouldRespectLineBoundaries": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"ActionType": {
"enum": [
"SendValidationRequest",
"GetValidationStatus",
"SubmitBulkRun",
"LogRunResult",
"LogRunTerminatedEvent",
"SubmitFlowRun"
],
"type": "string"
},
"Activate": {
"type": "object",
"properties": {
"when": {
"type": "string",
"nullable": true
},
"is": {
"nullable": true
}
},
"additionalProperties": false
},
"AdditionalErrorInfo": {
"type": "object",
"properties": {
"type": {
"type": "string",
"nullable": true
},
"info": {
"nullable": true
}
},
"additionalProperties": false
},
"AdhocTriggerScheduledCommandJobRequest": {
"type": "object",
"properties": {
"jobName": {
"type": "string",
"nullable": true
},
"jobDisplayName": {
"type": "string",
"nullable": true
},
"triggerTimeString": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AdhocTriggerScheduledSparkJobRequest": {
"type": "object",
"properties": {
"jobName": {
"type": "string",
"nullable": true
},
"jobDisplayName": {
"type": "string",
"nullable": true
},
"triggerTimeString": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAPCloudConfiguration": {
"type": "object",
"properties": {
"referencedAPModuleGuid": {
"type": "string",
"nullable": true
},
"userAlias": {
"type": "string",
"nullable": true
},
"aetherModuleType": {
"type": "string",
"nullable": true
},
"allowOverwrite": {
"type": "boolean",
"nullable": true
},
"destinationExpirationDays": {
"type": "integer",
"format": "int32",
"nullable": true
},
"shouldRespectLineBoundaries": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAmlDataset": {
"type": "object",
"properties": {
"registeredDataSetReference": {
"$ref": "#/components/schemas/AetherRegisteredDataSetReference"
},
"savedDataSetReference": {
"$ref": "#/components/schemas/AetherSavedDataSetReference"
},
"additionalTransformations": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAmlSparkCloudSetting": {
"type": "object",
"properties": {
"entry": {
"$ref": "#/components/schemas/AetherEntrySetting"
},
"files": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"archives": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"jars": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"pyFiles": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"driverMemory": {
"type": "string",
"nullable": true
},
"driverCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"executorMemory": {
"type": "string",
"nullable": true
},
"executorCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"numberExecutors": {
"type": "integer",
"format": "int32",
"nullable": true
},
"environmentAssetId": {
"type": "string",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"inlineEnvironmentDefinitionString": {
"type": "string",
"nullable": true
},
"conf": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"compute": {
"type": "string",
"nullable": true
},
"resources": {
"$ref": "#/components/schemas/AetherResourcesSetting"
},
"identity": {
"$ref": "#/components/schemas/AetherIdentitySetting"
}
},
"additionalProperties": false
},
"AetherArgumentAssignment": {
"type": "object",
"properties": {
"valueType": {
"$ref": "#/components/schemas/AetherArgumentValueType"
},
"value": {
"type": "string",
"nullable": true
},
"nestedArgumentList": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherArgumentAssignment"
},
"nullable": true
},
"stringInterpolationArgumentList": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherArgumentAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherArgumentValueType": {
"enum": [
"Literal",
"Parameter",
"Input",
"Output",
"NestedList",
"StringInterpolationList"
],
"type": "string"
},
"AetherAssetDefinition": {
"type": "object",
"properties": {
"path": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/AetherAssetType"
},
"assetId": {
"type": "string",
"nullable": true
},
"initialAssetId": {
"type": "string",
"nullable": true
},
"serializedAssetId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAssetOutputSettings": {
"type": "object",
"properties": {
"path": {
"type": "string",
"nullable": true
},
"PathParameterAssignment": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"type": {
"$ref": "#/components/schemas/AetherAssetType"
},
"options": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AetherDataStoreMode"
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAssetType": {
"enum": [
"UriFile",
"UriFolder",
"MLTable",
"CustomModel",
"MLFlowModel",
"TritonModel",
"OpenAIModel"
],
"type": "string"
},
"AetherAutoFeaturizeConfiguration": {
"type": "object",
"properties": {
"featurizationConfig": {
"$ref": "#/components/schemas/AetherFeaturizationSettings"
}
},
"additionalProperties": false
},
"AetherAutoMLComponentConfiguration": {
"type": "object",
"properties": {
"autoTrainConfig": {
"$ref": "#/components/schemas/AetherAutoTrainConfiguration"
},
"autoFeaturizeConfig": {
"$ref": "#/components/schemas/AetherAutoFeaturizeConfiguration"
}
},
"additionalProperties": false
},
"AetherAutoTrainConfiguration": {
"type": "object",
"properties": {
"generalSettings": {
"$ref": "#/components/schemas/AetherGeneralSettings"
},
"limitSettings": {
"$ref": "#/components/schemas/AetherLimitSettings"
},
"dataSettings": {
"$ref": "#/components/schemas/AetherDataSettings"
},
"forecastingSettings": {
"$ref": "#/components/schemas/AetherForecastingSettings"
},
"trainingSettings": {
"$ref": "#/components/schemas/AetherTrainingSettings"
},
"sweepSettings": {
"$ref": "#/components/schemas/AetherSweepSettings"
},
"imageModelSettings": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"computeConfiguration": {
"$ref": "#/components/schemas/AetherComputeConfiguration"
},
"resourceConfigurtion": {
"$ref": "#/components/schemas/AetherResourceConfiguration"
},
"environmentId": {
"type": "string",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherAzureBlobReference": {
"type": "object",
"properties": {
"container": {
"type": "string",
"nullable": true
},
"sasToken": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"pathType": {
"$ref": "#/components/schemas/AetherFileBasedPathType"
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAzureDataLakeGen2Reference": {
"type": "object",
"properties": {
"fileSystemName": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"pathType": {
"$ref": "#/components/schemas/AetherFileBasedPathType"
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAzureDataLakeReference": {
"type": "object",
"properties": {
"tenant": {
"type": "string",
"nullable": true
},
"subscription": {
"type": "string",
"nullable": true
},
"resourceGroup": {
"type": "string",
"nullable": true
},
"dataLakeUri": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"pathType": {
"$ref": "#/components/schemas/AetherFileBasedPathType"
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAzureDatabaseReference": {
"type": "object",
"properties": {
"serverUri": {
"type": "string",
"nullable": true
},
"databaseName": {
"type": "string",
"nullable": true
},
"tableName": {
"type": "string",
"nullable": true
},
"sqlQuery": {
"type": "string",
"nullable": true
},
"storedProcedureName": {
"type": "string",
"nullable": true
},
"storedProcedureParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherStoredProcedureParameter"
},
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherAzureFilesReference": {
"type": "object",
"properties": {
"share": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"pathType": {
"$ref": "#/components/schemas/AetherFileBasedPathType"
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherBatchAiComputeInfo": {
"type": "object",
"properties": {
"batchAiSubscriptionId": {
"type": "string",
"nullable": true
},
"batchAiResourceGroup": {
"type": "string",
"nullable": true
},
"batchAiWorkspaceName": {
"type": "string",
"nullable": true
},
"clusterName": {
"type": "string",
"nullable": true
},
"nativeSharedDirectory": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherBuildArtifactInfo": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/AetherBuildSourceType"
},
"cloudBuildDropPathInfo": {
"$ref": "#/components/schemas/AetherCloudBuildDropPathInfo"
},
"vsoBuildArtifactInfo": {
"$ref": "#/components/schemas/AetherVsoBuildArtifactInfo"
}
},
"additionalProperties": false
},
"AetherBuildSourceType": {
"enum": [
"CloudBuild",
"Vso",
"VsoGit"
],
"type": "string"
},
"AetherCloudBuildDropPathInfo": {
"type": "object",
"properties": {
"buildInfo": {
"$ref": "#/components/schemas/AetherCloudBuildInfo"
},
"root": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherCloudBuildInfo": {
"type": "object",
"properties": {
"queueInfo": {
"$ref": "#/components/schemas/AetherCloudBuildQueueInfo"
},
"buildId": {
"type": "string",
"nullable": true
},
"dropUrl": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherCloudBuildQueueInfo": {
"type": "object",
"properties": {
"buildQueue": {
"type": "string",
"nullable": true
},
"buildRole": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherCloudPrioritySetting": {
"type": "object",
"properties": {
"scopePriority": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
},
"AmlComputePriority": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
},
"ItpPriority": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
},
"SingularityPriority": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
}
},
"additionalProperties": false
},
"AetherCloudSettings": {
"type": "object",
"properties": {
"linkedSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"nullable": true
},
"priorityConfig": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
},
"hdiRunConfig": {
"$ref": "#/components/schemas/AetherHdiRunConfiguration"
},
"subGraphConfig": {
"$ref": "#/components/schemas/AetherSubGraphConfiguration"
},
"autoMLComponentConfig": {
"$ref": "#/components/schemas/AetherAutoMLComponentConfiguration"
},
"apCloudConfig": {
"$ref": "#/components/schemas/AetherAPCloudConfiguration"
},
"scopeCloudConfig": {
"$ref": "#/components/schemas/AetherScopeCloudConfiguration"
},
"esCloudConfig": {
"$ref": "#/components/schemas/AetherEsCloudConfiguration"
},
"dataTransferCloudConfig": {
"$ref": "#/components/schemas/AetherDataTransferCloudConfiguration"
},
"amlSparkCloudSetting": {
"$ref": "#/components/schemas/AetherAmlSparkCloudSetting"
},
"dataTransferV2CloudSetting": {
"$ref": "#/components/schemas/AetherDataTransferV2CloudSetting"
}
},
"additionalProperties": false
},
"AetherColumnTransformer": {
"type": "object",
"properties": {
"fields": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"parameters": {
"nullable": true
}
},
"additionalProperties": false
},
"AetherComputeConfiguration": {
"type": "object",
"properties": {
"target": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"isLocal": {
"type": "boolean"
},
"location": {
"type": "string",
"nullable": true
},
"isClusterless": {
"type": "boolean"
},
"instanceType": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"isPreemptable": {
"type": "boolean"
}
},
"additionalProperties": false
},
"AetherComputeSetting": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"computeType": {
"$ref": "#/components/schemas/AetherComputeType"
},
"batchAiComputeInfo": {
"$ref": "#/components/schemas/AetherBatchAiComputeInfo"
},
"remoteDockerComputeInfo": {
"$ref": "#/components/schemas/AetherRemoteDockerComputeInfo"
},
"hdiClusterComputeInfo": {
"$ref": "#/components/schemas/AetherHdiClusterComputeInfo"
},
"mlcComputeInfo": {
"$ref": "#/components/schemas/AetherMlcComputeInfo"
},
"databricksComputeInfo": {
"$ref": "#/components/schemas/AetherDatabricksComputeInfo"
}
},
"additionalProperties": false
},
"AetherComputeType": {
"enum": [
"BatchAi",
"MLC",
"HdiCluster",
"RemoteDocker",
"Databricks",
"Aisc"
],
"type": "string"
},
"AetherControlFlowType": {
"enum": [
"None",
"DoWhile",
"ParallelFor"
],
"type": "string"
},
"AetherControlInput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"defaultValue": {
"$ref": "#/components/schemas/AetherControlInputValue"
}
},
"additionalProperties": false
},
"AetherControlInputValue": {
"enum": [
"None",
"False",
"True",
"Skipped"
],
"type": "string"
},
"AetherControlOutput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherControlType": {
"enum": [
"IfElse"
],
"type": "string"
},
"AetherCopyDataTask": {
"type": "object",
"properties": {
"DataCopyMode": {
"$ref": "#/components/schemas/AetherDataCopyMode"
}
},
"additionalProperties": false
},
"AetherCosmosReference": {
"type": "object",
"properties": {
"cluster": {
"type": "string",
"nullable": true
},
"vc": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherCreatedBy": {
"type": "object",
"properties": {
"userObjectId": {
"type": "string",
"nullable": true
},
"userTenantId": {
"type": "string",
"nullable": true
},
"userName": {
"type": "string",
"nullable": true
},
"puid": {
"type": "string",
"nullable": true
},
"iss": {
"type": "string",
"nullable": true
},
"idp": {
"type": "string",
"nullable": true
},
"altsecId": {
"type": "string",
"nullable": true
},
"sourceIp": {
"type": "string",
"nullable": true
},
"skipRegistryPrivateLinkCheck": {
"type": "boolean"
}
},
"additionalProperties": false
},
"AetherCustomReference": {
"type": "object",
"properties": {
"amlDataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDBFSReference": {
"type": "object",
"properties": {
"relativePath": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDataCopyMode": {
"enum": [
"MergeWithOverwrite",
"FailIfConflict"
],
"type": "string"
},
"AetherDataLocation": {
"type": "object",
"properties": {
"storageType": {
"$ref": "#/components/schemas/AetherDataLocationStorageType"
},
"storageId": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataReference": {
"$ref": "#/components/schemas/AetherDataReference"
},
"amlDataset": {
"$ref": "#/components/schemas/AetherAmlDataset"
},
"assetDefinition": {
"$ref": "#/components/schemas/AetherAssetDefinition"
},
"isCompliant": {
"type": "boolean"
},
"reuseCalculationFields": {
"$ref": "#/components/schemas/AetherDataLocationReuseCalculationFields"
}
},
"additionalProperties": false
},
"AetherDataLocationReuseCalculationFields": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"dataExperimentId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDataLocationStorageType": {
"enum": [
"Cosmos",
"AzureBlob",
"Artifact",
"Snapshot",
"SavedAmlDataset",
"Asset"
],
"type": "string"
},
"AetherDataPath": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"sqlDataPath": {
"$ref": "#/components/schemas/AetherSqlDataPath"
}
},
"additionalProperties": false
},
"AetherDataReference": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/AetherDataReferenceType"
},
"azureBlobReference": {
"$ref": "#/components/schemas/AetherAzureBlobReference"
},
"azureDataLakeReference": {
"$ref": "#/components/schemas/AetherAzureDataLakeReference"
},
"azureFilesReference": {
"$ref": "#/components/schemas/AetherAzureFilesReference"
},
"cosmosReference": {
"$ref": "#/components/schemas/AetherCosmosReference"
},
"phillyHdfsReference": {
"$ref": "#/components/schemas/AetherPhillyHdfsReference"
},
"azureSqlDatabaseReference": {
"$ref": "#/components/schemas/AetherAzureDatabaseReference"
},
"azurePostgresDatabaseReference": {
"$ref": "#/components/schemas/AetherAzureDatabaseReference"
},
"azureDataLakeGen2Reference": {
"$ref": "#/components/schemas/AetherAzureDataLakeGen2Reference"
},
"dbfsReference": {
"$ref": "#/components/schemas/AetherDBFSReference"
},
"azureMySqlDatabaseReference": {
"$ref": "#/components/schemas/AetherAzureDatabaseReference"
},
"customReference": {
"$ref": "#/components/schemas/AetherCustomReference"
},
"hdfsReference": {
"$ref": "#/components/schemas/AetherHdfsReference"
}
},
"additionalProperties": false
},
"AetherDataReferenceType": {
"enum": [
"None",
"AzureBlob",
"AzureDataLake",
"AzureFiles",
"Cosmos",
"PhillyHdfs",
"AzureSqlDatabase",
"AzurePostgresDatabase",
"AzureDataLakeGen2",
"DBFS",
"AzureMySqlDatabase",
"Custom",
"Hdfs"
],
"type": "string"
},
"AetherDataSetDefinition": {
"type": "object",
"properties": {
"dataTypeShortName": {
"type": "string",
"nullable": true
},
"parameterName": {
"type": "string",
"nullable": true
},
"value": {
"$ref": "#/components/schemas/AetherDataSetDefinitionValue"
}
},
"additionalProperties": false
},
"AetherDataSetDefinitionValue": {
"type": "object",
"properties": {
"literalValue": {
"$ref": "#/components/schemas/AetherDataPath"
},
"dataSetReference": {
"$ref": "#/components/schemas/AetherRegisteredDataSetReference"
},
"savedDataSetReference": {
"$ref": "#/components/schemas/AetherSavedDataSetReference"
},
"assetDefinition": {
"$ref": "#/components/schemas/AetherAssetDefinition"
}
},
"additionalProperties": false
},
"AetherDataSettings": {
"type": "object",
"properties": {
"targetColumnName": {
"type": "string",
"nullable": true
},
"weightColumnName": {
"type": "string",
"nullable": true
},
"positiveLabel": {
"type": "string",
"nullable": true
},
"validationData": {
"$ref": "#/components/schemas/AetherValidationDataSettings"
},
"testData": {
"$ref": "#/components/schemas/AetherTestDataSettings"
}
},
"additionalProperties": false
},
"AetherDataStoreMode": {
"enum": [
"None",
"Mount",
"Download",
"Upload",
"Direct",
"Hdfs",
"Link"
],
"type": "string"
},
"AetherDataTransferCloudConfiguration": {
"type": "object",
"properties": {
"AllowOverwrite": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDataTransferSink": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/AetherDataTransferStorageType"
},
"fileSystem": {
"$ref": "#/components/schemas/AetherFileSystem"
},
"databaseSink": {
"$ref": "#/components/schemas/AetherDatabaseSink"
}
},
"additionalProperties": false
},
"AetherDataTransferSource": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/AetherDataTransferStorageType"
},
"fileSystem": {
"$ref": "#/components/schemas/AetherFileSystem"
},
"databaseSource": {
"$ref": "#/components/schemas/AetherDatabaseSource"
}
},
"additionalProperties": false
},
"AetherDataTransferStorageType": {
"enum": [
"DataBase",
"FileSystem"
],
"type": "string"
},
"AetherDataTransferTaskType": {
"enum": [
"ImportData",
"ExportData",
"CopyData"
],
"type": "string"
},
"AetherDataTransferV2CloudSetting": {
"type": "object",
"properties": {
"taskType": {
"$ref": "#/components/schemas/AetherDataTransferTaskType"
},
"ComputeName": {
"type": "string",
"nullable": true
},
"CopyDataTask": {
"$ref": "#/components/schemas/AetherCopyDataTask"
},
"ImportDataTask": {
"$ref": "#/components/schemas/AetherImportDataTask"
},
"ExportDataTask": {
"$ref": "#/components/schemas/AetherExportDataTask"
},
"DataTransferSources": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AetherDataTransferSource"
},
"description": "This is a dictionary",
"nullable": true
},
"DataTransferSinks": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AetherDataTransferSink"
},
"description": "This is a dictionary",
"nullable": true
},
"DataCopyMode": {
"$ref": "#/components/schemas/AetherDataCopyMode"
}
},
"additionalProperties": false
},
"AetherDatabaseSink": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"nullable": true
},
"table": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDatabaseSource": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"nullable": true
},
"query": {
"type": "string",
"nullable": true
},
"storedProcedureName": {
"type": "string",
"nullable": true
},
"storedProcedureParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherStoredProcedureParameter"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherDatabricksComputeInfo": {
"type": "object",
"properties": {
"existingClusterId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDatasetOutput": {
"type": "object",
"properties": {
"datasetType": {
"$ref": "#/components/schemas/AetherDatasetType"
},
"datasetRegistration": {
"$ref": "#/components/schemas/AetherDatasetRegistration"
},
"datasetOutputOptions": {
"$ref": "#/components/schemas/AetherDatasetOutputOptions"
}
},
"additionalProperties": false
},
"AetherDatasetOutputOptions": {
"type": "object",
"properties": {
"sourceGlobs": {
"$ref": "#/components/schemas/AetherGlobsOptions"
},
"pathOnDatastore": {
"type": "string",
"nullable": true
},
"PathOnDatastoreParameterAssignment": {
"$ref": "#/components/schemas/AetherParameterAssignment"
}
},
"additionalProperties": false
},
"AetherDatasetRegistration": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"createNewVersion": {
"type": "boolean"
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"additionalTransformations": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDatasetType": {
"enum": [
"File",
"Tabular"
],
"type": "string"
},
"AetherDatastoreSetting": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherDoWhileControlFlowInfo": {
"type": "object",
"properties": {
"outputPortNameToInputPortNamesMapping": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"nullable": true
},
"conditionOutputPortName": {
"type": "string",
"nullable": true
},
"runSettings": {
"$ref": "#/components/schemas/AetherDoWhileControlFlowRunSettings"
}
},
"additionalProperties": false
},
"AetherDoWhileControlFlowRunSettings": {
"type": "object",
"properties": {
"maxLoopIterationCount": {
"$ref": "#/components/schemas/AetherParameterAssignment"
}
},
"additionalProperties": false
},
"AetherDockerSettingConfiguration": {
"type": "object",
"properties": {
"useDocker": {
"type": "boolean",
"nullable": true
},
"sharedVolumes": {
"type": "boolean",
"nullable": true
},
"shmSize": {
"type": "string",
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherEarlyTerminationPolicyType": {
"enum": [
"Bandit",
"MedianStopping",
"TruncationSelection"
],
"type": "string"
},
"AetherEntityInterfaceDocumentation": {
"type": "object",
"properties": {
"inputsDocumentation": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"outputsDocumentation": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"parametersDocumentation": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherEntityStatus": {
"enum": [
"Active",
"Deprecated",
"Disabled"
],
"type": "string"
},
"AetherEntrySetting": {
"type": "object",
"properties": {
"file": {
"type": "string",
"nullable": true
},
"className": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherEnvironmentConfiguration": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"useEnvironmentDefinition": {
"type": "boolean"
},
"environmentDefinitionString": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherEsCloudConfiguration": {
"type": "object",
"properties": {
"enableOutputToFileBasedOnDataTypeId": {
"type": "boolean",
"nullable": true
},
"amlComputePriorityInternal": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
},
"itpPriorityInternal": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
},
"singularityPriorityInternal": {
"$ref": "#/components/schemas/AetherPriorityConfiguration"
},
"environment": {
"$ref": "#/components/schemas/AetherEnvironmentConfiguration"
},
"hyperDriveConfiguration": {
"$ref": "#/components/schemas/AetherHyperDriveConfiguration"
},
"k8sConfig": {
"$ref": "#/components/schemas/AetherK8sConfiguration"
},
"resourceConfig": {
"$ref": "#/components/schemas/AetherResourceConfiguration"
},
"torchDistributedConfig": {
"$ref": "#/components/schemas/AetherTorchDistributedConfiguration"
},
"targetSelectorConfig": {
"$ref": "#/components/schemas/AetherTargetSelectorConfiguration"
},
"dockerConfig": {
"$ref": "#/components/schemas/AetherDockerSettingConfiguration"
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"maxRunDurationSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/AetherIdentitySetting"
},
"applicationEndpoints": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ApplicationEndpointConfiguration"
},
"nullable": true
},
"runConfig": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherExecutionEnvironment": {
"enum": [
"ExeWorkerMachine",
"DockerContainerWithoutNetwork",
"DockerContainerWithNetwork",
"HyperVWithoutNetwork",
"HyperVWithNetwork"
],
"type": "string"
},
"AetherExecutionPhase": {
"enum": [
"Execution",
"Initialization",
"Finalization"
],
"type": "string"
},
"AetherExportDataTask": {
"type": "object",
"properties": {
"DataTransferSink": {
"$ref": "#/components/schemas/AetherDataTransferSink"
}
},
"additionalProperties": false
},
"AetherFeaturizationMode": {
"enum": [
"Auto",
"Custom",
"Off"
],
"type": "string"
},
"AetherFeaturizationSettings": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/AetherFeaturizationMode"
},
"blockedTransformers": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"columnPurposes": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"dropColumns": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"transformerParams": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherColumnTransformer"
},
"nullable": true
},
"nullable": true
},
"datasetLanguage": {
"type": "string",
"nullable": true
},
"enableDnnFeaturization": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"AetherFileBasedPathType": {
"enum": [
"Unknown",
"File",
"Folder"
],
"type": "string"
},
"AetherFileSystem": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"nullable": true
},
"path": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherForecastHorizon": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/AetherForecastHorizonMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"AetherForecastHorizonMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"AetherForecastingSettings": {
"type": "object",
"properties": {
"countryOrRegionForHolidays": {
"type": "string",
"nullable": true
},
"timeColumnName": {
"type": "string",
"nullable": true
},
"targetLags": {
"$ref": "#/components/schemas/AetherTargetLags"
},
"targetRollingWindowSize": {
"$ref": "#/components/schemas/AetherTargetRollingWindowSize"
},
"forecastHorizon": {
"$ref": "#/components/schemas/AetherForecastHorizon"
},
"timeSeriesIdColumnNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"frequency": {
"type": "string",
"nullable": true
},
"featureLags": {
"type": "string",
"nullable": true
},
"seasonality": {
"$ref": "#/components/schemas/AetherSeasonality"
},
"shortSeriesHandlingConfig": {
"$ref": "#/components/schemas/AetherShortSeriesHandlingConfiguration"
},
"useStl": {
"$ref": "#/components/schemas/AetherUseStl"
},
"targetAggregateFunction": {
"$ref": "#/components/schemas/AetherTargetAggregationFunction"
},
"cvStepSize": {
"type": "integer",
"format": "int32",
"nullable": true
},
"featuresUnknownAtForecastTime": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherGeneralSettings": {
"type": "object",
"properties": {
"primaryMetric": {
"$ref": "#/components/schemas/AetherPrimaryMetrics"
},
"taskType": {
"$ref": "#/components/schemas/AetherTaskType"
},
"logVerbosity": {
"$ref": "#/components/schemas/AetherLogVerbosity"
}
},
"additionalProperties": false
},
"AetherGlobsOptions": {
"type": "object",
"properties": {
"globPatterns": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherGraphControlNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"controlType": {
"$ref": "#/components/schemas/AetherControlType"
},
"controlParameter": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherGraphControlReferenceNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"comment": {
"type": "string",
"nullable": true
},
"controlFlowType": {
"$ref": "#/components/schemas/AetherControlFlowType"
},
"referenceNodeId": {
"type": "string",
"nullable": true
},
"doWhileControlFlowInfo": {
"$ref": "#/components/schemas/AetherDoWhileControlFlowInfo"
},
"parallelForControlFlowInfo": {
"$ref": "#/components/schemas/AetherParallelForControlFlowInfo"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherGraphDatasetNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"datasetId": {
"type": "string",
"nullable": true
},
"dataPathParameterName": {
"type": "string",
"nullable": true
},
"dataSetDefinition": {
"$ref": "#/components/schemas/AetherDataSetDefinition"
}
},
"additionalProperties": false
},
"AetherGraphEdge": {
"type": "object",
"properties": {
"sourceOutputPort": {
"$ref": "#/components/schemas/AetherPortInfo"
},
"destinationInputPort": {
"$ref": "#/components/schemas/AetherPortInfo"
}
},
"additionalProperties": false
},
"AetherGraphEntity": {
"type": "object",
"properties": {
"moduleNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherGraphModuleNode"
},
"nullable": true
},
"datasetNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherGraphDatasetNode"
},
"nullable": true
},
"subGraphNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherGraphReferenceNode"
},
"nullable": true
},
"controlReferenceNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherGraphControlReferenceNode"
},
"nullable": true
},
"controlNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherGraphControlNode"
},
"nullable": true
},
"edges": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherGraphEdge"
},
"nullable": true
},
"defaultCompute": {
"$ref": "#/components/schemas/AetherComputeSetting"
},
"defaultDatastore": {
"$ref": "#/components/schemas/AetherDatastoreSetting"
},
"defaultCloudPriority": {
"$ref": "#/components/schemas/AetherCloudPrioritySetting"
},
"parentSubGraphModuleIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"workspaceId": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
},
"entityStatus": {
"$ref": "#/components/schemas/AetherEntityStatus"
}
},
"additionalProperties": false
},
"AetherGraphModuleNode": {
"type": "object",
"properties": {
"cloudPriority": {
"type": "integer",
"format": "int32"
},
"defaultDataRetentionHint": {
"type": "integer",
"format": "int32",
"nullable": true
},
"complianceCluster": {
"type": "string",
"nullable": true
},
"euclidWorkspaceId": {
"type": "string",
"nullable": true
},
"attachedModules": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"acceptableMachineClusters": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"customDataLocationId": {
"type": "string",
"nullable": true
},
"alertTimeoutDuration": {
"type": "string",
"format": "date-span",
"nullable": true
},
"runconfig": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"moduleId": {
"type": "string",
"nullable": true
},
"comment": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"moduleParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"nullable": true
},
"moduleMetadataParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"nullable": true
},
"moduleOutputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherOutputSetting"
},
"nullable": true
},
"moduleInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherInputSetting"
},
"nullable": true
},
"useGraphDefaultCompute": {
"type": "boolean"
},
"useGraphDefaultDatastore": {
"type": "boolean"
},
"regenerateOutput": {
"type": "boolean"
},
"controlInputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherControlInput"
},
"nullable": true
},
"cloudSettings": {
"$ref": "#/components/schemas/AetherCloudSettings"
},
"executionPhase": {
"$ref": "#/components/schemas/AetherExecutionPhase"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherGraphReferenceNode": {
"type": "object",
"properties": {
"graphId": {
"type": "string",
"nullable": true
},
"defaultCompute": {
"$ref": "#/components/schemas/AetherComputeSetting"
},
"defaultDatastore": {
"$ref": "#/components/schemas/AetherDatastoreSetting"
},
"id": {
"type": "string",
"nullable": true
},
"moduleId": {
"type": "string",
"nullable": true
},
"comment": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"moduleParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"nullable": true
},
"moduleMetadataParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"nullable": true
},
"moduleOutputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherOutputSetting"
},
"nullable": true
},
"moduleInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherInputSetting"
},
"nullable": true
},
"useGraphDefaultCompute": {
"type": "boolean"
},
"useGraphDefaultDatastore": {
"type": "boolean"
},
"regenerateOutput": {
"type": "boolean"
},
"controlInputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherControlInput"
},
"nullable": true
},
"cloudSettings": {
"$ref": "#/components/schemas/AetherCloudSettings"
},
"executionPhase": {
"$ref": "#/components/schemas/AetherExecutionPhase"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherHdfsReference": {
"type": "object",
"properties": {
"amlDataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherHdiClusterComputeInfo": {
"type": "object",
"properties": {
"address": {
"type": "string",
"nullable": true
},
"username": {
"type": "string",
"nullable": true
},
"password": {
"type": "string",
"nullable": true
},
"privateKey": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherHdiRunConfiguration": {
"type": "object",
"properties": {
"file": {
"type": "string",
"nullable": true
},
"className": {
"type": "string",
"nullable": true
},
"files": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"archives": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"jars": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"pyFiles": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"computeName": {
"type": "string",
"nullable": true
},
"queue": {
"type": "string",
"nullable": true
},
"driverMemory": {
"type": "string",
"nullable": true
},
"driverCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"executorMemory": {
"type": "string",
"nullable": true
},
"executorCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"numberExecutors": {
"type": "integer",
"format": "int32",
"nullable": true
},
"conf": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"name": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherHyperDriveConfiguration": {
"type": "object",
"properties": {
"hyperDriveRunConfig": {
"type": "string",
"nullable": true
},
"primaryMetricGoal": {
"type": "string",
"nullable": true
},
"primaryMetricName": {
"type": "string",
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherArgumentAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherIdentitySetting": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/AetherIdentityType"
},
"clientId": {
"type": "string",
"format": "uuid",
"nullable": true
},
"objectId": {
"type": "string",
"format": "uuid",
"nullable": true
},
"msiResourceId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherIdentityType": {
"enum": [
"UserIdentity",
"Managed",
"AMLToken"
],
"type": "string"
},
"AetherImportDataTask": {
"type": "object",
"properties": {
"DataTransferSource": {
"$ref": "#/components/schemas/AetherDataTransferSource"
}
},
"additionalProperties": false
},
"AetherInputSetting": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AetherDataStoreMode"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"options": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"additionalTransformations": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherInteractiveConfig": {
"type": "object",
"properties": {
"isSSHEnabled": {
"type": "boolean",
"nullable": true
},
"sshPublicKey": {
"type": "string",
"nullable": true
},
"isIPythonEnabled": {
"type": "boolean",
"nullable": true
},
"isTensorBoardEnabled": {
"type": "boolean",
"nullable": true
},
"interactivePort": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AetherK8sConfiguration": {
"type": "object",
"properties": {
"maxRetryCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"resourceConfiguration": {
"$ref": "#/components/schemas/AetherResourceConfig"
},
"priorityConfiguration": {
"$ref": "#/components/schemas/AetherPriorityConfig"
},
"interactiveConfiguration": {
"$ref": "#/components/schemas/AetherInteractiveConfig"
}
},
"additionalProperties": false
},
"AetherLegacyDataPath": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AetherDataStoreMode"
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherLimitSettings": {
"type": "object",
"properties": {
"maxTrials": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeout": {
"type": "string",
"format": "date-span",
"nullable": true
},
"trialTimeout": {
"type": "string",
"format": "date-span",
"nullable": true
},
"maxConcurrentTrials": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxCoresPerTrial": {
"type": "integer",
"format": "int32",
"nullable": true
},
"exitScore": {
"type": "number",
"format": "double",
"nullable": true
},
"enableEarlyTermination": {
"type": "boolean",
"nullable": true
},
"maxNodes": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AetherLogVerbosity": {
"enum": [
"NotSet",
"Debug",
"Info",
"Warning",
"Error",
"Critical"
],
"type": "string"
},
"AetherMlcComputeInfo": {
"type": "object",
"properties": {
"mlcComputeType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherModuleDeploymentSource": {
"enum": [
"Client",
"AutoDeployment",
"Vsts"
],
"type": "string"
},
"AetherModuleEntity": {
"type": "object",
"properties": {
"lastUpdatedBy": {
"$ref": "#/components/schemas/AetherCreatedBy"
},
"displayName": {
"type": "string",
"nullable": true
},
"moduleExecutionType": {
"type": "string",
"nullable": true
},
"moduleType": {
"$ref": "#/components/schemas/AetherModuleType"
},
"moduleTypeVersion": {
"type": "string",
"nullable": true
},
"resourceRequirements": {
"$ref": "#/components/schemas/AetherResourceModel"
},
"machineCluster": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"defaultComplianceCluster": {
"type": "string",
"nullable": true
},
"repositoryType": {
"$ref": "#/components/schemas/AetherRepositoryType"
},
"relativePathToSourceCode": {
"type": "string",
"nullable": true
},
"commitId": {
"type": "string",
"nullable": true
},
"codeReviewLink": {
"type": "string",
"nullable": true
},
"unitTestsAvailable": {
"type": "boolean"
},
"isCompressed": {
"type": "boolean"
},
"executionEnvironment": {
"$ref": "#/components/schemas/AetherExecutionEnvironment"
},
"isOutputMarkupEnabled": {
"type": "boolean"
},
"dockerImageId": {
"type": "string",
"nullable": true
},
"dockerImageReference": {
"type": "string",
"nullable": true
},
"dockerImageSecurityGroups": {
"type": "string",
"nullable": true
},
"extendedProperties": {
"$ref": "#/components/schemas/AetherModuleExtendedProperties"
},
"deploymentSource": {
"$ref": "#/components/schemas/AetherModuleDeploymentSource"
},
"deploymentSourceMetadata": {
"type": "string",
"nullable": true
},
"identifierHash": {
"type": "string",
"nullable": true
},
"identifierHashV2": {
"type": "string",
"nullable": true
},
"kvTags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"createdBy": {
"$ref": "#/components/schemas/AetherCreatedBy"
},
"runconfig": {
"type": "string",
"nullable": true
},
"cloudSettings": {
"$ref": "#/components/schemas/AetherCloudSettings"
},
"category": {
"type": "string",
"nullable": true
},
"stepType": {
"type": "string",
"nullable": true
},
"stage": {
"type": "string",
"nullable": true
},
"uploadState": {
"$ref": "#/components/schemas/AetherUploadState"
},
"sourceCodeLocation": {
"type": "string",
"nullable": true
},
"sizeInBytes": {
"type": "integer",
"format": "int64"
},
"downloadLocation": {
"type": "string",
"nullable": true
},
"dataLocation": {
"$ref": "#/components/schemas/AetherDataLocation"
},
"scriptingRuntimeId": {
"type": "string",
"nullable": true
},
"interfaceDocumentation": {
"$ref": "#/components/schemas/AetherEntityInterfaceDocumentation"
},
"isEyesOn": {
"type": "boolean"
},
"complianceCluster": {
"type": "string",
"nullable": true
},
"isDeterministic": {
"type": "boolean"
},
"informationUrl": {
"type": "string",
"nullable": true
},
"isExperimentIdInParameters": {
"type": "boolean"
},
"interfaceString": {
"type": "string",
"nullable": true
},
"defaultParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"structuredInterface": {
"$ref": "#/components/schemas/AetherStructuredInterface"
},
"familyId": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"hash": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"sequenceNumberInFamily": {
"type": "integer",
"format": "int32"
},
"owner": {
"type": "string",
"nullable": true
},
"azureTenantId": {
"type": "string",
"nullable": true
},
"azureUserId": {
"type": "string",
"nullable": true
},
"collaborators": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"workspaceId": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
},
"entityStatus": {
"$ref": "#/components/schemas/AetherEntityStatus"
}
},
"additionalProperties": false
},
"AetherModuleExtendedProperties": {
"type": "object",
"properties": {
"autoDeployedArtifact": {
"$ref": "#/components/schemas/AetherBuildArtifactInfo"
},
"scriptNeedsApproval": {
"type": "boolean"
}
},
"additionalProperties": false
},
"AetherModuleHashVersion": {
"enum": [
"IdentifierHash",
"IdentifierHashV2"
],
"type": "string"
},
"AetherModuleType": {
"enum": [
"None",
"BatchInferencing"
],
"type": "string"
},
"AetherNCrossValidationMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"AetherNCrossValidations": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/AetherNCrossValidationMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"AetherOutputSetting": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"DataStoreNameParameterAssignment": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"dataStoreMode": {
"$ref": "#/components/schemas/AetherDataStoreMode"
},
"DataStoreModeParameterAssignment": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"PathOnComputeParameterAssignment": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"overwrite": {
"type": "boolean"
},
"dataReferenceName": {
"type": "string",
"nullable": true
},
"webServicePort": {
"type": "string",
"nullable": true
},
"datasetRegistration": {
"$ref": "#/components/schemas/AetherDatasetRegistration"
},
"datasetOutputOptions": {
"$ref": "#/components/schemas/AetherDatasetOutputOptions"
},
"AssetOutputSettings": {
"$ref": "#/components/schemas/AetherAssetOutputSettings"
},
"parameterName": {
"type": "string",
"nullable": true
},
"AssetOutputSettingsParameterName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherParallelForControlFlowInfo": {
"type": "object",
"properties": {
"parallelForItemsInput": {
"$ref": "#/components/schemas/AetherParameterAssignment"
}
},
"additionalProperties": false
},
"AetherParameterAssignment": {
"type": "object",
"properties": {
"valueType": {
"$ref": "#/components/schemas/AetherParameterValueType"
},
"assignmentsToConcatenate": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherParameterAssignment"
},
"nullable": true
},
"dataPathAssignment": {
"$ref": "#/components/schemas/AetherLegacyDataPath"
},
"dataSetDefinitionValueAssignment": {
"$ref": "#/components/schemas/AetherDataSetDefinitionValue"
},
"name": {
"type": "string",
"nullable": true
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherParameterType": {
"enum": [
"Int",
"Double",
"Bool",
"String",
"Undefined"
],
"type": "string"
},
"AetherParameterValueType": {
"enum": [
"Literal",
"GraphParameterName",
"Concatenate",
"Input",
"DataPath",
"DataSetDefinition"
],
"type": "string"
},
"AetherPhillyHdfsReference": {
"type": "object",
"properties": {
"cluster": {
"type": "string",
"nullable": true
},
"vc": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherPortInfo": {
"type": "object",
"properties": {
"nodeId": {
"type": "string",
"nullable": true
},
"portName": {
"type": "string",
"nullable": true
},
"graphPortName": {
"type": "string",
"nullable": true
},
"isParameter": {
"type": "boolean"
},
"webServicePort": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherPrimaryMetrics": {
"enum": [
"AUCWeighted",
"Accuracy",
"NormMacroRecall",
"AveragePrecisionScoreWeighted",
"PrecisionScoreWeighted",
"SpearmanCorrelation",
"NormalizedRootMeanSquaredError",
"R2Score",
"NormalizedMeanAbsoluteError",
"NormalizedRootMeanSquaredLogError",
"MeanAveragePrecision",
"Iou"
],
"type": "string"
},
"AetherPriorityConfig": {
"type": "object",
"properties": {
"jobPriority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"isPreemptible": {
"type": "boolean",
"nullable": true
},
"nodeCountSet": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"scaleInterval": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AetherPriorityConfiguration": {
"type": "object",
"properties": {
"cloudPriority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"stringTypePriority": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherRegisteredDataSetReference": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherRemoteDockerComputeInfo": {
"type": "object",
"properties": {
"address": {
"type": "string",
"nullable": true
},
"username": {
"type": "string",
"nullable": true
},
"password": {
"type": "string",
"nullable": true
},
"privateKey": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherRepositoryType": {
"enum": [
"None",
"Other",
"Git",
"SourceDepot",
"Cosmos"
],
"type": "string"
},
"AetherResourceAssignment": {
"type": "object",
"properties": {
"attributes": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AetherResourceAttributeAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherResourceAttributeAssignment": {
"type": "object",
"properties": {
"attribute": {
"$ref": "#/components/schemas/AetherResourceAttributeDefinition"
},
"operator": {
"$ref": "#/components/schemas/AetherResourceOperator"
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherResourceAttributeDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/AetherResourceValueType"
},
"units": {
"type": "string",
"nullable": true
},
"allowedOperators": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherResourceOperator"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherResourceConfig": {
"type": "object",
"properties": {
"gpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"cpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"memoryRequestInGB": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AetherResourceConfiguration": {
"type": "object",
"properties": {
"instanceCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"instanceType": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"locations": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"instancePriority": {
"type": "string",
"nullable": true
},
"quotaEnforcementResourceId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherResourceModel": {
"type": "object",
"properties": {
"resources": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherResourceAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherResourceOperator": {
"enum": [
"Equal",
"Contain",
"GreaterOrEqual"
],
"type": "string"
},
"AetherResourceValueType": {
"enum": [
"String",
"Double"
],
"type": "string"
},
"AetherResourcesSetting": {
"type": "object",
"properties": {
"instanceSize": {
"type": "string",
"nullable": true
},
"sparkVersion": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherSamplingAlgorithmType": {
"enum": [
"Random",
"Grid",
"Bayesian"
],
"type": "string"
},
"AetherSavedDataSetReference": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherScopeCloudConfiguration": {
"type": "object",
"properties": {
"inputPathSuffixes": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AetherArgumentAssignment"
},
"description": "This is a dictionary",
"nullable": true
},
"outputPathSuffixes": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AetherArgumentAssignment"
},
"description": "This is a dictionary",
"nullable": true
},
"userAlias": {
"type": "string",
"nullable": true
},
"tokens": {
"type": "integer",
"format": "int32",
"nullable": true
},
"autoToken": {
"type": "integer",
"format": "int32",
"nullable": true
},
"vcp": {
"type": "number",
"format": "float",
"nullable": true
}
},
"additionalProperties": false
},
"AetherSeasonality": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/AetherSeasonalityMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"AetherSeasonalityMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"AetherShortSeriesHandlingConfiguration": {
"enum": [
"Auto",
"Pad",
"Drop"
],
"type": "string"
},
"AetherSqlDataPath": {
"type": "object",
"properties": {
"sqlTableName": {
"type": "string",
"nullable": true
},
"sqlQuery": {
"type": "string",
"nullable": true
},
"sqlStoredProcedureName": {
"type": "string",
"nullable": true
},
"sqlStoredProcedureParams": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherStoredProcedureParameter"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherStackEnsembleSettings": {
"type": "object",
"properties": {
"stackMetaLearnerType": {
"$ref": "#/components/schemas/AetherStackMetaLearnerType"
},
"stackMetaLearnerTrainPercentage": {
"type": "number",
"format": "double",
"nullable": true
},
"stackMetaLearnerKWargs": {
"nullable": true
}
},
"additionalProperties": false
},
"AetherStackMetaLearnerType": {
"enum": [
"None",
"LogisticRegression",
"LogisticRegressionCV",
"LightGBMClassifier",
"ElasticNet",
"ElasticNetCV",
"LightGBMRegressor",
"LinearRegression"
],
"type": "string"
},
"AetherStoredProcedureParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"value": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/AetherStoredProcedureParameterType"
}
},
"additionalProperties": false
},
"AetherStoredProcedureParameterType": {
"enum": [
"String",
"Int",
"Decimal",
"Guid",
"Boolean",
"Date"
],
"type": "string"
},
"AetherStructuredInterface": {
"type": "object",
"properties": {
"commandLinePattern": {
"type": "string",
"nullable": true
},
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherStructuredInterfaceInput"
},
"nullable": true
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherStructuredInterfaceOutput"
},
"nullable": true
},
"controlOutputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherControlOutput"
},
"nullable": true
},
"parameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherStructuredInterfaceParameter"
},
"nullable": true
},
"metadataParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherStructuredInterfaceParameter"
},
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherArgumentAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherStructuredInterfaceInput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"dataTypeIdsList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"isOptional": {
"type": "boolean"
},
"description": {
"type": "string",
"nullable": true
},
"skipProcessing": {
"type": "boolean"
},
"isResource": {
"type": "boolean"
},
"dataStoreMode": {
"$ref": "#/components/schemas/AetherDataStoreMode"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"overwrite": {
"type": "boolean"
},
"dataReferenceName": {
"type": "string",
"nullable": true
},
"datasetTypes": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/AetherDatasetType"
},
"nullable": true
},
"additionalTransformations": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherStructuredInterfaceOutput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"dataTypeId": {
"type": "string",
"nullable": true
},
"passThroughDataTypeInputName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"skipProcessing": {
"type": "boolean"
},
"isArtifact": {
"type": "boolean"
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AetherDataStoreMode"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"overwrite": {
"type": "boolean"
},
"dataReferenceName": {
"type": "string",
"nullable": true
},
"trainingOutput": {
"$ref": "#/components/schemas/AetherTrainingOutput"
},
"datasetOutput": {
"$ref": "#/components/schemas/AetherDatasetOutput"
},
"AssetOutputSettings": {
"$ref": "#/components/schemas/AetherAssetOutputSettings"
},
"earlyAvailable": {
"type": "boolean"
}
},
"additionalProperties": false
},
"AetherStructuredInterfaceParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"parameterType": {
"$ref": "#/components/schemas/AetherParameterType"
},
"isOptional": {
"type": "boolean"
},
"defaultValue": {
"type": "string",
"nullable": true
},
"lowerBound": {
"type": "string",
"nullable": true
},
"upperBound": {
"type": "string",
"nullable": true
},
"enumValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enumValuesToArgumentStrings": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"setEnvironmentVariable": {
"type": "boolean"
},
"environmentVariableOverride": {
"type": "string",
"nullable": true
},
"enabledByParameterName": {
"type": "string",
"nullable": true
},
"enabledByParameterValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"uiHint": {
"$ref": "#/components/schemas/AetherUIParameterHint"
},
"groupNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"argumentName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherSubGraphConfiguration": {
"type": "object",
"properties": {
"graphId": {
"type": "string",
"nullable": true
},
"graphDraftId": {
"type": "string",
"nullable": true
},
"defaultComputeInternal": {
"$ref": "#/components/schemas/AetherComputeSetting"
},
"defaultDatastoreInternal": {
"$ref": "#/components/schemas/AetherDatastoreSetting"
},
"DefaultCloudPriority": {
"$ref": "#/components/schemas/AetherCloudPrioritySetting"
},
"UserAlias": {
"type": "string",
"nullable": true
},
"IsDynamic": {
"type": "boolean",
"default": false,
"nullable": true
}
},
"additionalProperties": false
},
"AetherSweepEarlyTerminationPolicy": {
"type": "object",
"properties": {
"policyType": {
"$ref": "#/components/schemas/AetherEarlyTerminationPolicyType"
},
"evaluationInterval": {
"type": "integer",
"format": "int32",
"nullable": true
},
"delayEvaluation": {
"type": "integer",
"format": "int32",
"nullable": true
},
"slackFactor": {
"type": "number",
"format": "float",
"nullable": true
},
"slackAmount": {
"type": "number",
"format": "float",
"nullable": true
},
"truncationPercentage": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AetherSweepSettings": {
"type": "object",
"properties": {
"limits": {
"$ref": "#/components/schemas/AetherSweepSettingsLimits"
},
"searchSpace": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"nullable": true
},
"samplingAlgorithm": {
"$ref": "#/components/schemas/AetherSamplingAlgorithmType"
},
"earlyTermination": {
"$ref": "#/components/schemas/AetherSweepEarlyTerminationPolicy"
}
},
"additionalProperties": false
},
"AetherSweepSettingsLimits": {
"type": "object",
"properties": {
"maxTotalTrials": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxConcurrentTrials": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AetherTabularTrainingMode": {
"enum": [
"Distributed",
"NonDistributed",
"Auto"
],
"type": "string"
},
"AetherTargetAggregationFunction": {
"enum": [
"Sum",
"Max",
"Min",
"Mean"
],
"type": "string"
},
"AetherTargetLags": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/AetherTargetLagsMode"
},
"values": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherTargetLagsMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"AetherTargetRollingWindowSize": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/AetherTargetRollingWindowSizeMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"AetherTargetRollingWindowSizeMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"AetherTargetSelectorConfiguration": {
"type": "object",
"properties": {
"lowPriorityVMTolerant": {
"type": "boolean"
},
"clusterBlockList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"computeType": {
"type": "string",
"nullable": true
},
"instanceType": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"instanceTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"myResourceOnly": {
"type": "boolean"
},
"planId": {
"type": "string",
"nullable": true
},
"planRegionId": {
"type": "string",
"nullable": true
},
"region": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"regions": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"vcBlockList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"AetherTaskType": {
"enum": [
"Classification",
"Regression",
"Forecasting",
"ImageClassification",
"ImageClassificationMultilabel",
"ImageObjectDetection",
"ImageInstanceSegmentation",
"TextClassification",
"TextMultiLabeling",
"TextNER",
"TextClassificationMultilabel"
],
"type": "string"
},
"AetherTestDataSettings": {
"type": "object",
"properties": {
"testDataSize": {
"type": "number",
"format": "double",
"nullable": true
}
},
"additionalProperties": false
},
"AetherTorchDistributedConfiguration": {
"type": "object",
"properties": {
"processCountPerNode": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AetherTrainingOutput": {
"type": "object",
"properties": {
"trainingOutputType": {
"$ref": "#/components/schemas/AetherTrainingOutputType"
},
"iteration": {
"type": "integer",
"format": "int32",
"nullable": true
},
"metric": {
"type": "string",
"nullable": true
},
"modelFile": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherTrainingOutputType": {
"enum": [
"Metrics",
"Model"
],
"type": "string"
},
"AetherTrainingSettings": {
"type": "object",
"properties": {
"blockListModels": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"allowListModels": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enableDnnTraining": {
"type": "boolean",
"nullable": true
},
"enableOnnxCompatibleModels": {
"type": "boolean",
"nullable": true
},
"stackEnsembleSettings": {
"$ref": "#/components/schemas/AetherStackEnsembleSettings"
},
"enableStackEnsemble": {
"type": "boolean",
"nullable": true
},
"enableVoteEnsemble": {
"type": "boolean",
"nullable": true
},
"ensembleModelDownloadTimeout": {
"type": "string",
"format": "date-span",
"nullable": true
},
"enableModelExplainability": {
"type": "boolean",
"nullable": true
},
"trainingMode": {
"$ref": "#/components/schemas/AetherTabularTrainingMode"
}
},
"additionalProperties": false
},
"AetherUIAzureOpenAIDeploymentNameSelector": {
"type": "object",
"properties": {
"Capabilities": {
"$ref": "#/components/schemas/AetherUIAzureOpenAIModelCapabilities"
}
},
"additionalProperties": false
},
"AetherUIAzureOpenAIModelCapabilities": {
"type": "object",
"properties": {
"Completion": {
"type": "boolean",
"nullable": true
},
"ChatCompletion": {
"type": "boolean",
"nullable": true
},
"Embeddings": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"AetherUIColumnPicker": {
"type": "object",
"properties": {
"columnPickerFor": {
"type": "string",
"nullable": true
},
"columnSelectionCategories": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"singleColumnSelection": {
"type": "boolean"
}
},
"additionalProperties": false
},
"AetherUIJsonEditor": {
"type": "object",
"properties": {
"jsonSchema": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherUIParameterHint": {
"type": "object",
"properties": {
"uiWidgetType": {
"$ref": "#/components/schemas/AetherUIWidgetTypeEnum"
},
"columnPicker": {
"$ref": "#/components/schemas/AetherUIColumnPicker"
},
"uiScriptLanguage": {
"$ref": "#/components/schemas/AetherUIScriptLanguageEnum"
},
"jsonEditor": {
"$ref": "#/components/schemas/AetherUIJsonEditor"
},
"PromptFlowConnectionSelector": {
"$ref": "#/components/schemas/AetherUIPromptFlowConnectionSelector"
},
"AzureOpenAIDeploymentNameSelector": {
"$ref": "#/components/schemas/AetherUIAzureOpenAIDeploymentNameSelector"
},
"UxIgnore": {
"type": "boolean"
},
"Anonymous": {
"type": "boolean"
}
},
"additionalProperties": false
},
"AetherUIPromptFlowConnectionSelector": {
"type": "object",
"properties": {
"PromptFlowConnectionType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherUIScriptLanguageEnum": {
"enum": [
"None",
"Python",
"R",
"Json",
"Sql"
],
"type": "string"
},
"AetherUIWidgetTypeEnum": {
"enum": [
"Default",
"Mode",
"ColumnPicker",
"Credential",
"Script",
"ComputeSelection",
"JsonEditor",
"SearchSpaceParameter",
"SectionToggle",
"YamlEditor",
"EnableRuntimeSweep",
"DataStoreSelection",
"InstanceTypeSelection",
"ConnectionSelection",
"PromptFlowConnectionSelection",
"AzureOpenAIDeploymentNameSelection"
],
"type": "string"
},
"AetherUploadState": {
"enum": [
"Uploading",
"Completed",
"Canceled",
"Failed"
],
"type": "string"
},
"AetherUseStl": {
"enum": [
"Season",
"SeasonTrend"
],
"type": "string"
},
"AetherValidationDataSettings": {
"type": "object",
"properties": {
"nCrossValidations": {
"$ref": "#/components/schemas/AetherNCrossValidations"
},
"validationDataSize": {
"type": "number",
"format": "double",
"nullable": true
},
"cvSplitColumnNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"validationType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherVsoBuildArtifactInfo": {
"type": "object",
"properties": {
"buildInfo": {
"$ref": "#/components/schemas/AetherVsoBuildInfo"
},
"downloadUrl": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AetherVsoBuildDefinitionInfo": {
"type": "object",
"properties": {
"accountName": {
"type": "string",
"nullable": true
},
"projectId": {
"type": "string",
"format": "uuid"
},
"buildDefinitionId": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"AetherVsoBuildInfo": {
"type": "object",
"properties": {
"definitionInfo": {
"$ref": "#/components/schemas/AetherVsoBuildDefinitionInfo"
},
"buildId": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"AmlDataset": {
"type": "object",
"properties": {
"registeredDataSetReference": {
"$ref": "#/components/schemas/RegisteredDataSetReference"
},
"savedDataSetReference": {
"$ref": "#/components/schemas/SavedDataSetReference"
},
"additionalTransformations": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AmlK8sConfiguration": {
"type": "object",
"properties": {
"resourceConfiguration": {
"$ref": "#/components/schemas/ResourceConfiguration"
},
"priorityConfiguration": {
"$ref": "#/components/schemas/AmlK8sPriorityConfiguration"
},
"interactiveConfiguration": {
"$ref": "#/components/schemas/InteractiveConfiguration"
}
},
"additionalProperties": false
},
"AmlK8sPriorityConfiguration": {
"type": "object",
"properties": {
"jobPriority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"isPreemptible": {
"type": "boolean",
"nullable": true
},
"nodeCountSet": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"scaleInterval": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AmlSparkCloudSetting": {
"type": "object",
"properties": {
"entry": {
"$ref": "#/components/schemas/EntrySetting"
},
"files": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"archives": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"jars": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"pyFiles": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"driverMemory": {
"type": "string",
"nullable": true
},
"driverCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"executorMemory": {
"type": "string",
"nullable": true
},
"executorCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"numberExecutors": {
"type": "integer",
"format": "int32",
"nullable": true
},
"environmentAssetId": {
"type": "string",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"inlineEnvironmentDefinitionString": {
"type": "string",
"nullable": true
},
"conf": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"compute": {
"type": "string",
"nullable": true
},
"resources": {
"$ref": "#/components/schemas/ResourcesSetting"
},
"identity": {
"$ref": "#/components/schemas/IdentitySetting"
}
},
"additionalProperties": false
},
"ApiAndParameters": {
"type": "object",
"properties": {
"api": {
"type": "string",
"nullable": true
},
"parameters": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowToolSettingParameter"
},
"description": "This is a dictionary",
"nullable": true
},
"default_prompt": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ApplicationEndpointConfiguration": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/ApplicationEndpointType"
},
"port": {
"type": "integer",
"format": "int32",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"nodes": {
"$ref": "#/components/schemas/Nodes"
}
},
"additionalProperties": false
},
"ApplicationEndpointType": {
"enum": [
"Jupyter",
"JupyterLab",
"SSH",
"TensorBoard",
"VSCode",
"Theia",
"Grafana",
"Custom",
"RayDashboard"
],
"type": "string"
},
"ArgumentAssignment": {
"type": "object",
"properties": {
"valueType": {
"$ref": "#/components/schemas/ArgumentValueType"
},
"value": {
"type": "string",
"nullable": true
},
"nestedArgumentList": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ArgumentAssignment"
},
"nullable": true
},
"stringInterpolationArgumentList": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ArgumentAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"ArgumentValueType": {
"enum": [
"Literal",
"Parameter",
"Input",
"Output",
"NestedList",
"StringInterpolationList"
],
"type": "string"
},
"Asset": {
"type": "object",
"properties": {
"assetId": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AssetDefinition": {
"type": "object",
"properties": {
"path": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/AEVAAssetType"
},
"assetId": {
"type": "string",
"nullable": true
},
"serializedAssetId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AssetNameAndVersionIdentifier": {
"type": "object",
"properties": {
"assetName": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"feedName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AssetOutputSettings": {
"type": "object",
"properties": {
"path": {
"type": "string",
"nullable": true
},
"PathParameterAssignment": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"type": {
"$ref": "#/components/schemas/AEVAAssetType"
},
"options": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AEVADataStoreMode"
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AssetOutputSettingsParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"documentation": {
"type": "string",
"nullable": true
},
"defaultValue": {
"$ref": "#/components/schemas/AssetOutputSettings"
}
},
"additionalProperties": false
},
"AssetPublishResult": {
"type": "object",
"properties": {
"feedName": {
"type": "string",
"nullable": true
},
"assetName": {
"type": "string",
"nullable": true
},
"assetVersion": {
"type": "string",
"nullable": true
},
"stepName": {
"type": "string",
"nullable": true
},
"status": {
"type": "string",
"nullable": true
},
"errorMessage": {
"type": "string",
"nullable": true
},
"createdTime": {
"type": "string",
"format": "date-time"
},
"lastUpdatedTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"regionalPublishResults": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetPublishSingleRegionResult"
},
"nullable": true
}
},
"additionalProperties": false
},
"AssetPublishSingleRegionResult": {
"type": "object",
"properties": {
"stepName": {
"type": "string",
"nullable": true
},
"status": {
"type": "string",
"nullable": true
},
"errorMessage": {
"type": "string",
"nullable": true
},
"lastUpdatedTime": {
"type": "string",
"format": "date-time"
},
"totalSteps": {
"type": "integer",
"format": "int32"
},
"finishedSteps": {
"type": "integer",
"format": "int32"
},
"remainingSteps": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"AssetScopeTypes": {
"enum": [
"Workspace",
"Global",
"Feed",
"All"
],
"type": "string"
},
"AssetSourceType": {
"enum": [
"Unknown",
"Local",
"GithubFile",
"GithubFolder",
"DevopsArtifactsZip"
],
"type": "string"
},
"AssetType": {
"enum": [
"Component",
"Model",
"Environment",
"Dataset",
"DataStore",
"SampleGraph",
"FlowTool",
"FlowToolSetting",
"FlowConnection",
"FlowSample",
"FlowRuntimeSpec"
],
"type": "string"
},
"AssetTypeMetaInfo": {
"type": "object",
"properties": {
"consumptionMode": {
"$ref": "#/components/schemas/ConsumeMode"
}
},
"additionalProperties": false
},
"AssetVersionPublishRequest": {
"type": "object",
"properties": {
"assetType": {
"$ref": "#/components/schemas/AssetType"
},
"assetSourceType": {
"$ref": "#/components/schemas/AssetSourceType"
},
"yamlFile": {
"type": "string",
"nullable": true
},
"sourceZipUrl": {
"type": "string",
"nullable": true
},
"sourceZipFile": {
"type": "string",
"format": "binary",
"nullable": true
},
"feedName": {
"type": "string",
"nullable": true
},
"setAsDefaultVersion": {
"type": "boolean"
},
"referencedAssets": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AssetNameAndVersionIdentifier"
},
"nullable": true
},
"flowFile": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AssignedUser": {
"type": "object",
"properties": {
"objectId": {
"type": "string",
"nullable": true
},
"tenantId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AttachCosmosRequest": {
"type": "object",
"properties": {
"accountEndpoint": {
"type": "string",
"nullable": true
},
"resourceArmId": {
"type": "string",
"nullable": true
},
"databaseName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AuthKeys": {
"type": "object",
"properties": {
"primaryKey": {
"type": "string",
"nullable": true
},
"secondaryKey": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AutoClusterComputeSpecification": {
"type": "object",
"properties": {
"instanceSize": {
"type": "string",
"nullable": true
},
"instancePriority": {
"type": "string",
"nullable": true
},
"osType": {
"type": "string",
"nullable": true
},
"location": {
"type": "string",
"nullable": true
},
"runtimeVersion": {
"type": "string",
"nullable": true
},
"quotaEnforcementResourceId": {
"type": "string",
"nullable": true
},
"modelComputeSpecificationId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AutoDeleteCondition": {
"enum": [
"CreatedGreaterThan",
"LastAccessedGreaterThan"
],
"type": "string"
},
"AutoDeleteSetting": {
"type": "object",
"properties": {
"condition": {
"$ref": "#/components/schemas/AutoDeleteCondition"
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AutoFeaturizeConfiguration": {
"type": "object",
"properties": {
"featurizationConfig": {
"$ref": "#/components/schemas/FeaturizationSettings"
}
},
"additionalProperties": false
},
"AutoMLComponentConfiguration": {
"type": "object",
"properties": {
"autoTrainConfig": {
"$ref": "#/components/schemas/AutoTrainConfiguration"
},
"autoFeaturizeConfig": {
"$ref": "#/components/schemas/AutoFeaturizeConfiguration"
}
},
"additionalProperties": false
},
"AutoScaler": {
"type": "object",
"properties": {
"autoscaleEnabled": {
"type": "boolean",
"nullable": true
},
"minReplicas": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxReplicas": {
"type": "integer",
"format": "int32",
"nullable": true
},
"targetUtilization": {
"type": "integer",
"format": "int32",
"nullable": true
},
"refreshPeriodInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"AutoTrainConfiguration": {
"type": "object",
"properties": {
"generalSettings": {
"$ref": "#/components/schemas/GeneralSettings"
},
"limitSettings": {
"$ref": "#/components/schemas/LimitSettings"
},
"dataSettings": {
"$ref": "#/components/schemas/DataSettings"
},
"forecastingSettings": {
"$ref": "#/components/schemas/ForecastingSettings"
},
"trainingSettings": {
"$ref": "#/components/schemas/TrainingSettings"
},
"sweepSettings": {
"$ref": "#/components/schemas/SweepSettings"
},
"imageModelSettings": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"computeConfiguration": {
"$ref": "#/components/schemas/AEVAComputeConfiguration"
},
"resourceConfigurtion": {
"$ref": "#/components/schemas/AEVAResourceConfiguration"
},
"environmentId": {
"type": "string",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"AutologgerSettings": {
"type": "object",
"properties": {
"mlFlowAutologger": {
"$ref": "#/components/schemas/MLFlowAutologgerState"
}
},
"additionalProperties": false
},
"AvailabilityResponse": {
"type": "object",
"properties": {
"isAvailable": {
"type": "boolean"
},
"error": {
"$ref": "#/components/schemas/ErrorResponse"
}
},
"additionalProperties": false
},
"AzureBlobReference": {
"type": "object",
"properties": {
"container": {
"type": "string",
"nullable": true
},
"sasToken": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AzureDataLakeGen2Reference": {
"type": "object",
"properties": {
"fileSystemName": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AzureDataLakeReference": {
"type": "object",
"properties": {
"tenant": {
"type": "string",
"nullable": true
},
"subscription": {
"type": "string",
"nullable": true
},
"resourceGroup": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AzureDatabaseReference": {
"type": "object",
"properties": {
"tableName": {
"type": "string",
"nullable": true
},
"sqlQuery": {
"type": "string",
"nullable": true
},
"storedProcedureName": {
"type": "string",
"nullable": true
},
"storedProcedureParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StoredProcedureParameter"
},
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AzureFilesReference": {
"type": "object",
"properties": {
"share": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"account": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AzureMLModuleVersionDescriptor": {
"type": "object",
"properties": {
"moduleVersionId": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"AzureOpenAIDeploymentDto": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"modelName": {
"type": "string",
"nullable": true
},
"capabilities": {
"$ref": "#/components/schemas/AzureOpenAIModelCapabilities"
}
},
"additionalProperties": false
},
"AzureOpenAIModelCapabilities": {
"type": "object",
"properties": {
"completion": {
"type": "boolean",
"nullable": true
},
"chat_completion": {
"type": "boolean",
"nullable": true
},
"embeddings": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"BatchAiComputeInfo": {
"type": "object",
"properties": {
"batchAiSubscriptionId": {
"type": "string",
"nullable": true
},
"batchAiResourceGroup": {
"type": "string",
"nullable": true
},
"batchAiWorkspaceName": {
"type": "string",
"nullable": true
},
"clusterName": {
"type": "string",
"nullable": true
},
"nativeSharedDirectory": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"BatchDataInput": {
"type": "object",
"properties": {
"dataUri": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"BatchExportComponentSpecResponse": {
"type": "object",
"properties": {
"componentSpecMetaInfos": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ComponentSpecMetaInfo"
},
"nullable": true
},
"errors": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ErrorResponse"
},
"nullable": true
}
},
"additionalProperties": false
},
"BatchExportRawComponentResponse": {
"type": "object",
"properties": {
"rawComponentDtos": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RawComponentDto"
},
"nullable": true
},
"errors": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ErrorResponse"
},
"nullable": true
}
},
"additionalProperties": false
},
"BatchGetComponentHashesRequest": {
"type": "object",
"properties": {
"moduleHashVersion": {
"$ref": "#/components/schemas/AetherModuleHashVersion"
},
"moduleEntities": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AetherModuleEntity"
},
"nullable": true
}
},
"additionalProperties": false
},
"BatchGetComponentRequest": {
"type": "object",
"properties": {
"versionIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"nameAndVersions": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ComponentNameMetaInfo"
},
"nullable": true
}
},
"additionalProperties": false
},
"Binding": {
"type": "object",
"properties": {
"bindingType": {
"$ref": "#/components/schemas/BindingType"
}
},
"additionalProperties": false
},
"BindingType": {
"enum": [
"Basic"
],
"type": "string"
},
"BuildContextLocationType": {
"enum": [
"Git",
"StorageAccount"
],
"type": "string"
},
"BulkTestDto": {
"type": "object",
"properties": {
"bulkTestId": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtime": {
"type": "string",
"nullable": true
},
"createdBy": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"createdOn": {
"type": "string",
"format": "date-time",
"nullable": true
},
"evaluationCount": {
"type": "integer",
"format": "int32"
},
"variantCount": {
"type": "integer",
"format": "int32"
},
"flowSubmitRunSettings": {
"$ref": "#/components/schemas/FlowSubmitRunSettings"
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowInputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowOutputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"batch_inputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
}
},
"additionalProperties": false
},
"CloudError": {
"type": "object",
"properties": {
"code": {
"type": "string",
"nullable": true
},
"message": {
"type": "string",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
},
"details": {
"type": "array",
"items": {
"$ref": "#/components/schemas/CloudError"
},
"nullable": true,
"readOnly": true
},
"additionalInfo": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AdditionalErrorInfo"
},
"nullable": true,
"readOnly": true
}
},
"additionalProperties": false
},
"CloudPrioritySetting": {
"type": "object",
"properties": {
"scopePriority": {
"$ref": "#/components/schemas/PriorityConfiguration"
},
"AmlComputePriority": {
"$ref": "#/components/schemas/PriorityConfiguration"
},
"ItpPriority": {
"$ref": "#/components/schemas/PriorityConfiguration"
},
"SingularityPriority": {
"$ref": "#/components/schemas/PriorityConfiguration"
}
},
"additionalProperties": false
},
"CloudSettings": {
"type": "object",
"properties": {
"linkedSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"priorityConfig": {
"$ref": "#/components/schemas/PriorityConfiguration"
},
"hdiRunConfig": {
"$ref": "#/components/schemas/HdiRunConfiguration"
},
"subGraphConfig": {
"$ref": "#/components/schemas/SubGraphConfiguration"
},
"autoMLComponentConfig": {
"$ref": "#/components/schemas/AutoMLComponentConfiguration"
},
"apCloudConfig": {
"$ref": "#/components/schemas/APCloudConfiguration"
},
"scopeCloudConfig": {
"$ref": "#/components/schemas/ScopeCloudConfiguration"
},
"esCloudConfig": {
"$ref": "#/components/schemas/EsCloudConfiguration"
},
"dataTransferCloudConfig": {
"$ref": "#/components/schemas/DataTransferCloudConfiguration"
},
"amlSparkCloudSetting": {
"$ref": "#/components/schemas/AmlSparkCloudSetting"
},
"dataTransferV2CloudSetting": {
"$ref": "#/components/schemas/DataTransferV2CloudSetting"
}
},
"additionalProperties": false
},
"ColumnTransformer": {
"type": "object",
"properties": {
"fields": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"parameters": {
"nullable": true
}
},
"additionalProperties": false
},
"CommandJob": {
"type": "object",
"properties": {
"jobType": {
"$ref": "#/components/schemas/JobType"
},
"codeId": {
"type": "string",
"nullable": true
},
"command": {
"minLength": 1,
"type": "string",
"nullable": true
},
"environmentId": {
"type": "string",
"nullable": true
},
"inputDataBindings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/InputDataBinding"
},
"nullable": true
},
"outputDataBindings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/OutputDataBinding"
},
"nullable": true
},
"distribution": {
"$ref": "#/components/schemas/DistributionConfiguration"
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"parameters": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"autologgerSettings": {
"$ref": "#/components/schemas/MfeInternalAutologgerSettings"
},
"limits": {
"$ref": "#/components/schemas/CommandJobLimits"
},
"provisioningState": {
"$ref": "#/components/schemas/JobProvisioningState"
},
"parentJobName": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/JobStatus"
},
"interactionEndpoints": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/JobEndpoint"
},
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/MfeInternalIdentityConfiguration"
},
"compute": {
"$ref": "#/components/schemas/ComputeConfiguration"
},
"priority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"output": {
"$ref": "#/components/schemas/JobOutputArtifacts"
},
"isArchived": {
"type": "boolean"
},
"schedule": {
"$ref": "#/components/schemas/ScheduleBase"
},
"componentId": {
"type": "string",
"nullable": true
},
"notificationSetting": {
"$ref": "#/components/schemas/NotificationSetting"
},
"secretsConfiguration": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/MfeInternalSecretConfiguration"
},
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"CommandJobLimits": {
"type": "object",
"properties": {
"jobLimitsType": {
"$ref": "#/components/schemas/JobLimitsType"
},
"timeout": {
"type": "string",
"format": "date-span",
"nullable": true
}
},
"additionalProperties": false
},
"CommandReturnCodeConfig": {
"type": "object",
"properties": {
"returnCode": {
"$ref": "#/components/schemas/SuccessfulCommandReturnCode"
},
"successfulReturnCodes": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
}
},
"additionalProperties": false
},
"Communicator": {
"enum": [
"None",
"ParameterServer",
"Gloo",
"Mpi",
"Nccl",
"ParallelTask"
],
"type": "string"
},
"ComponentConfiguration": {
"type": "object",
"properties": {
"componentIdentifier": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentInput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"optional": {
"type": "boolean"
},
"description": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
},
"default": {
"type": "string",
"nullable": true
},
"enum": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"min": {
"type": "string",
"nullable": true
},
"max": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentJob": {
"type": "object",
"properties": {
"compute": {
"$ref": "#/components/schemas/ComputeConfiguration"
},
"componentId": {
"type": "string",
"nullable": true
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ComponentJobInput"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ComponentJobOutput"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentJobInput": {
"type": "object",
"properties": {
"data": {
"$ref": "#/components/schemas/InputData"
},
"inputBinding": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentJobOutput": {
"type": "object",
"properties": {
"data": {
"$ref": "#/components/schemas/MfeInternalOutputData"
},
"outputBinding": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentNameAndDefaultVersion": {
"type": "object",
"properties": {
"componentName": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"feedName": {
"type": "string",
"nullable": true
},
"registryName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentNameMetaInfo": {
"type": "object",
"properties": {
"feedName": {
"type": "string",
"nullable": true
},
"componentName": {
"type": "string",
"nullable": true
},
"componentVersion": {
"type": "string",
"nullable": true
},
"registryName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentOutput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentPreflightResult": {
"type": "object",
"properties": {
"errorDetails": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RootError"
},
"nullable": true
}
},
"additionalProperties": false
},
"ComponentRegistrationTypeEnum": {
"enum": [
"Normal",
"AnonymousAmlModule",
"AnonymousAmlModuleVersion",
"ModuleEntityOnly"
],
"type": "string"
},
"ComponentSpecMetaInfo": {
"type": "object",
"properties": {
"componentSpec": {
"nullable": true
},
"componentVersion": {
"type": "string",
"nullable": true
},
"isAnonymous": {
"type": "boolean"
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"componentName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"isArchived": {
"type": "boolean"
}
},
"additionalProperties": false
},
"ComponentType": {
"enum": [
"Unknown",
"CommandComponent",
"Command"
],
"type": "string"
},
"ComponentUpdateRequest": {
"type": "object",
"properties": {
"originalModuleEntity": {
"$ref": "#/components/schemas/ModuleEntity"
},
"updateModuleEntity": {
"$ref": "#/components/schemas/ModuleEntity"
},
"moduleName": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"overwriteWithOriginalNameAndVersion": {
"type": "boolean",
"nullable": true
},
"snapshotId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComponentValidationRequest": {
"type": "object",
"properties": {
"componentIdentifier": {
"type": "string",
"nullable": true
},
"computeIdentity": {
"$ref": "#/components/schemas/ComputeIdentityDto"
},
"executionContextDto": {
"$ref": "#/components/schemas/ExecutionContextDto"
},
"environmentDefinition": {
"$ref": "#/components/schemas/EnvironmentDefinitionDto"
},
"dataPortDtos": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DataPortDto"
},
"nullable": true
}
},
"additionalProperties": false
},
"ComponentValidationResponse": {
"type": "object",
"properties": {
"status": {
"$ref": "#/components/schemas/ValidationStatus"
},
"error": {
"$ref": "#/components/schemas/ErrorResponse"
}
},
"additionalProperties": false
},
"Compute": {
"type": "object",
"properties": {
"target": {
"type": "string",
"nullable": true
},
"targetType": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"instanceType": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"gpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"priority": {
"type": "string",
"nullable": true
},
"region": {
"type": "string",
"nullable": true
},
"armId": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"ComputeConfiguration": {
"type": "object",
"properties": {
"target": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxInstanceCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"isLocal": {
"type": "boolean"
},
"location": {
"type": "string",
"nullable": true
},
"isClusterless": {
"type": "boolean"
},
"instanceType": {
"type": "string",
"nullable": true
},
"instancePriority": {
"type": "string",
"nullable": true
},
"jobPriority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"shmSize": {
"type": "string",
"nullable": true
},
"dockerArgs": {
"type": "string",
"nullable": true
},
"locations": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"ComputeContract": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true,
"readOnly": true
},
"location": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/ComputeIdentityContract"
},
"properties": {
"$ref": "#/components/schemas/ComputeProperties"
}
},
"additionalProperties": false
},
"ComputeDetails": {
"type": "object"
},
"ComputeEnvironmentType": {
"enum": [
"ACI",
"AKS",
"AMLCOMPUTE",
"IOT",
"AKSENDPOINT",
"MIRSINGLEMODEL",
"MIRAMLCOMPUTE",
"MIRGA",
"AMLARC",
"BATCHAMLCOMPUTE",
"UNKNOWN"
],
"type": "string"
},
"ComputeIdentityContract": {
"type": "object",
"properties": {
"type": {
"type": "string",
"nullable": true
},
"systemIdentityUrl": {
"type": "string",
"nullable": true
},
"principalId": {
"type": "string",
"nullable": true
},
"tenantId": {
"type": "string",
"nullable": true
},
"clientId": {
"type": "string",
"nullable": true
},
"clientSecretUrl": {
"type": "string",
"nullable": true
},
"userAssignedIdentities": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ComputeRPUserAssignedIdentity"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"ComputeIdentityDto": {
"type": "object",
"properties": {
"computeName": {
"type": "string",
"nullable": true
},
"computeTargetType": {
"$ref": "#/components/schemas/ComputeTargetType"
},
"intellectualPropertyPublisher": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComputeInfo": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"computeType": {
"$ref": "#/components/schemas/ComputeEnvironmentType"
},
"isSslEnabled": {
"type": "boolean"
},
"isGpuType": {
"type": "boolean"
},
"clusterPurpose": {
"type": "string",
"nullable": true
},
"publicIpAddress": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComputeProperties": {
"required": [
"computeType"
],
"type": "object",
"properties": {
"createdOn": {
"type": "string",
"format": "date-time"
},
"modifiedOn": {
"type": "string",
"format": "date-time"
},
"disableLocalAuth": {
"type": "boolean"
},
"description": {
"type": "string",
"nullable": true
},
"resourceId": {
"type": "string",
"nullable": true
},
"computeType": {
"minLength": 1,
"type": "string"
},
"computeLocation": {
"type": "string",
"nullable": true
},
"provisioningState": {
"$ref": "#/components/schemas/ProvisioningState"
},
"provisioningErrors": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ODataErrorResponse"
},
"nullable": true
},
"provisioningWarnings": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"isAttachedCompute": {
"type": "boolean"
},
"properties": {
"$ref": "#/components/schemas/ComputeDetails"
},
"status": {
"$ref": "#/components/schemas/ComputeStatus"
},
"warnings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ComputeWarning"
},
"nullable": true
}
},
"additionalProperties": false
},
"ComputeRPUserAssignedIdentity": {
"type": "object",
"properties": {
"principalId": {
"type": "string",
"nullable": true
},
"tenantId": {
"type": "string",
"nullable": true
},
"clientId": {
"type": "string",
"nullable": true
},
"clientSecretUrl": {
"type": "string",
"nullable": true
},
"resourceId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComputeRequest": {
"type": "object",
"properties": {
"nodeCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"gpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"ComputeSetting": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"computeType": {
"$ref": "#/components/schemas/ComputeType"
},
"batchAiComputeInfo": {
"$ref": "#/components/schemas/BatchAiComputeInfo"
},
"remoteDockerComputeInfo": {
"$ref": "#/components/schemas/RemoteDockerComputeInfo"
},
"hdiClusterComputeInfo": {
"$ref": "#/components/schemas/HdiClusterComputeInfo"
},
"mlcComputeInfo": {
"$ref": "#/components/schemas/MlcComputeInfo"
},
"databricksComputeInfo": {
"$ref": "#/components/schemas/DatabricksComputeInfo"
}
},
"additionalProperties": false
},
"ComputeStatus": {
"type": "object",
"properties": {
"isStatusAvailable": {
"type": "boolean",
"readOnly": true
},
"detailedStatus": {
"nullable": true
},
"error": {
"$ref": "#/components/schemas/ODataError"
}
},
"additionalProperties": false
},
"ComputeStatusDetail": {
"type": "object",
"properties": {
"provisioningState": {
"type": "string",
"nullable": true
},
"provisioningErrorMessage": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ComputeTargetType": {
"enum": [
"Local",
"Remote",
"HdiCluster",
"ContainerInstance",
"AmlCompute",
"ComputeInstance",
"Cmk8s",
"SynapseSpark",
"Kubernetes",
"Aisc",
"GlobalJobDispatcher",
"Databricks",
"MockedCompute"
],
"type": "string"
},
"ComputeType": {
"enum": [
"BatchAi",
"MLC",
"HdiCluster",
"RemoteDocker",
"Databricks",
"Aisc"
],
"type": "string"
},
"ComputeWarning": {
"type": "object",
"properties": {
"title": {
"type": "string",
"nullable": true
},
"message": {
"type": "string",
"nullable": true
},
"code": {
"type": "string",
"nullable": true
},
"severity": {
"$ref": "#/components/schemas/SeverityLevel"
}
},
"additionalProperties": false
},
"ConfigValueType": {
"enum": [
"String",
"Secret"
],
"type": "string"
},
"ConnectionCategory": {
"enum": [
"PythonFeed",
"ACR",
"Git",
"S3",
"Snowflake",
"AzureSqlDb",
"AzureSynapseAnalytics",
"AzureMySqlDb",
"AzurePostgresDb",
"AzureDataLakeGen2",
"Redis",
"ApiKey",
"AzureOpenAI",
"CognitiveSearch",
"CognitiveService",
"CustomKeys",
"AzureBlob",
"AzureOneLake",
"CosmosDb",
"CosmosDbMongoDbApi",
"AzureDataExplorer",
"AzureMariaDb",
"AzureDatabricksDeltaLake",
"AzureSqlMi",
"AzureTableStorage",
"AmazonRdsForOracle",
"AmazonRdsForSqlServer",
"AmazonRedshift",
"Db2",
"Drill",
"GoogleBigQuery",
"Greenplum",
"Hbase",
"Hive",
"Impala",
"Informix",
"MariaDb",
"MicrosoftAccess",
"MySql",
"Netezza",
"Oracle",
"Phoenix",
"PostgreSql",
"Presto",
"SapOpenHub",
"SapBw",
"SapHana",
"SapTable",
"Spark",
"SqlServer",
"Sybase",
"Teradata",
"Vertica",
"Cassandra",
"Couchbase",
"MongoDbV2",
"MongoDbAtlas",
"AmazonS3Compatible",
"FileServer",
"FtpServer",
"GoogleCloudStorage",
"Hdfs",
"OracleCloudStorage",
"Sftp",
"GenericHttp",
"ODataRest",
"Odbc",
"GenericRest",
"AmazonMws",
"Concur",
"Dynamics",
"DynamicsAx",
"DynamicsCrm",
"GoogleAdWords",
"Hubspot",
"Jira",
"Magento",
"Marketo",
"Office365",
"Eloqua",
"Responsys",
"OracleServiceCloud",
"PayPal",
"QuickBooks",
"Salesforce",
"SalesforceServiceCloud",
"SalesforceMarketingCloud",
"SapCloudForCustomer",
"SapEcc",
"ServiceNow",
"SharePointOnlineList",
"Shopify",
"Square",
"WebTable",
"Xero",
"Zoho",
"GenericContainerRegistry"
],
"type": "string"
},
"ConnectionConfigSpec": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"configValueType": {
"$ref": "#/components/schemas/ConfigValueType"
},
"description": {
"type": "string",
"nullable": true
},
"defaultValue": {
"type": "string",
"nullable": true
},
"enumValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"isOptional": {
"type": "boolean"
}
},
"additionalProperties": false
},
"ConnectionDto": {
"type": "object",
"properties": {
"connectionName": {
"type": "string",
"nullable": true
},
"connectionType": {
"$ref": "#/components/schemas/ConnectionType"
},
"configs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"customConfigs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/CustomConnectionConfig"
},
"description": "This is a dictionary",
"nullable": true
},
"expiryTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"owner": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"createdDate": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastModifiedDate": {
"type": "string",
"format": "date-time",
"nullable": true
}
},
"additionalProperties": false
},
"ConnectionEntity": {
"type": "object",
"properties": {
"connectionId": {
"type": "string",
"nullable": true
},
"connectionName": {
"type": "string",
"nullable": true
},
"connectionType": {
"$ref": "#/components/schemas/ConnectionType"
},
"connectionScope": {
"$ref": "#/components/schemas/ConnectionScope"
},
"configs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"customConfigs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/CustomConnectionConfig"
},
"description": "This is a dictionary",
"nullable": true
},
"expiryTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"secretName": {
"type": "string",
"nullable": true
},
"owner": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"ConnectionOverrideSetting": {
"type": "object",
"properties": {
"connectionSourceType": {
"$ref": "#/components/schemas/ConnectionSourceType"
},
"nodeName": {
"type": "string",
"nullable": true
},
"nodeInputName": {
"type": "string",
"nullable": true
},
"nodeDeploymentNameInput": {
"type": "string",
"nullable": true
},
"nodeModelInput": {
"type": "string",
"nullable": true
},
"connectionName": {
"type": "string",
"nullable": true
},
"deploymentName": {
"type": "string",
"nullable": true
},
"model": {
"type": "string",
"nullable": true
},
"connectionTypes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionType"
},
"nullable": true
},
"capabilities": {
"$ref": "#/components/schemas/AzureOpenAIModelCapabilities"
},
"modelEnum": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"ConnectionScope": {
"enum": [
"User",
"WorkspaceShared"
],
"type": "string"
},
"ConnectionSourceType": {
"enum": [
"Node",
"NodeInput"
],
"type": "string"
},
"ConnectionSpec": {
"type": "object",
"properties": {
"connectionType": {
"$ref": "#/components/schemas/ConnectionType"
},
"configSpecs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionConfigSpec"
},
"nullable": true
}
},
"additionalProperties": false
},
"ConnectionType": {
"enum": [
"OpenAI",
"AzureOpenAI",
"Serp",
"Bing",
"AzureContentModerator",
"Custom",
"AzureContentSafety",
"CognitiveSearch",
"SubstrateLLM",
"Pinecone",
"Qdrant",
"Weaviate",
"FormRecognizer"
],
"type": "string"
},
"ConsumeMode": {
"enum": [
"Reference",
"Copy",
"CopyAndAutoUpgrade"
],
"type": "string"
},
"ContainerInstanceConfiguration": {
"type": "object",
"properties": {
"region": {
"type": "string",
"nullable": true
},
"cpuCores": {
"type": "number",
"format": "double"
},
"memoryGb": {
"type": "number",
"format": "double"
}
},
"additionalProperties": false
},
"ContainerRegistry": {
"type": "object",
"properties": {
"address": {
"type": "string",
"nullable": true
},
"username": {
"type": "string",
"nullable": true
},
"password": {
"type": "string",
"nullable": true
},
"credentialType": {
"type": "string",
"nullable": true
},
"registryIdentity": {
"$ref": "#/components/schemas/RegistryIdentity"
}
},
"additionalProperties": false
},
"ContainerResourceRequirements": {
"type": "object",
"properties": {
"cpu": {
"type": "number",
"format": "double",
"nullable": true
},
"cpuLimit": {
"type": "number",
"format": "double",
"nullable": true
},
"memoryInGB": {
"type": "number",
"format": "double",
"nullable": true
},
"memoryInGBLimit": {
"type": "number",
"format": "double",
"nullable": true
},
"gpuEnabled": {
"type": "boolean",
"nullable": true
},
"gpu": {
"type": "integer",
"format": "int32",
"nullable": true
},
"fpga": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"ControlFlowType": {
"enum": [
"None",
"DoWhile",
"ParallelFor"
],
"type": "string"
},
"ControlInput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"defaultValue": {
"$ref": "#/components/schemas/ControlInputValue"
}
},
"additionalProperties": false
},
"ControlInputValue": {
"enum": [
"None",
"False",
"True",
"Skipped"
],
"type": "string"
},
"ControlOutput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ControlType": {
"enum": [
"IfElse"
],
"type": "string"
},
"CopyDataTask": {
"type": "object",
"properties": {
"DataCopyMode": {
"$ref": "#/components/schemas/DataCopyMode"
}
},
"additionalProperties": false
},
"CreateFlowRequest": {
"type": "object",
"properties": {
"flowName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"details": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"flow": {
"$ref": "#/components/schemas/Flow"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"flowRunSettings": {
"$ref": "#/components/schemas/FlowRunSettings"
},
"isArchived": {
"type": "boolean"
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"CreateFlowRuntimeRequest": {
"type": "object",
"properties": {
"runtimeType": {
"$ref": "#/components/schemas/RuntimeType"
},
"identity": {
"$ref": "#/components/schemas/ManagedServiceIdentity"
},
"instanceType": {
"type": "string",
"nullable": true
},
"fromExistingEndpoint": {
"type": "boolean"
},
"fromExistingDeployment": {
"type": "boolean"
},
"endpointName": {
"type": "string",
"nullable": true
},
"deploymentName": {
"type": "string",
"nullable": true
},
"computeInstanceName": {
"type": "string",
"nullable": true
},
"fromExistingCustomApp": {
"type": "boolean"
},
"customAppName": {
"type": "string",
"nullable": true
},
"runtimeDescription": {
"type": "string",
"nullable": true
},
"environment": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"CreateFlowSessionRequest": {
"type": "object",
"properties": {
"pythonPipRequirements": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"baseImage": {
"type": "string",
"nullable": true
},
"action": {
"$ref": "#/components/schemas/SetupFlowSessionAction"
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
},
"computeName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"CreateInferencePipelineRequest": {
"type": "object",
"properties": {
"moduleNodeId": {
"type": "string",
"nullable": true
},
"portName": {
"type": "string",
"nullable": true
},
"trainingPipelineDraftName": {
"type": "string",
"nullable": true
},
"trainingPipelineRunDisplayName": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"pipelineType": {
"$ref": "#/components/schemas/PipelineType"
},
"pipelineDraftMode": {
"$ref": "#/components/schemas/PipelineDraftMode"
},
"graphComponentsMode": {
"$ref": "#/components/schemas/GraphComponentsMode"
},
"subPipelinesInfo": {
"$ref": "#/components/schemas/SubPipelinesInfo"
},
"flattenedSubGraphs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineSubDraft"
},
"nullable": true
},
"pipelineParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"graph": {
"$ref": "#/components/schemas/GraphDraftEntity"
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"moduleNodeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeRunSetting"
},
"nullable": true
},
"moduleNodeUIInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeUIInputSetting"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"datasetAccessModes": {
"$ref": "#/components/schemas/DatasetAccessModes"
}
},
"additionalProperties": false
},
"CreateOrUpdateConnectionRequest": {
"type": "object",
"properties": {
"connectionType": {
"$ref": "#/components/schemas/ConnectionType"
},
"connectionScope": {
"$ref": "#/components/schemas/ConnectionScope"
},
"configs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"customConfigs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/CustomConnectionConfig"
},
"description": "This is a dictionary",
"nullable": true
},
"expiryTime": {
"type": "string",
"format": "date-time",
"nullable": true
}
},
"additionalProperties": false
},
"CreateOrUpdateConnectionRequestDto": {
"type": "object",
"properties": {
"connectionType": {
"$ref": "#/components/schemas/ConnectionType"
},
"configs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"customConfigs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/CustomConnectionConfig"
},
"description": "This is a dictionary",
"nullable": true
},
"expiryTime": {
"type": "string",
"format": "date-time",
"nullable": true
}
},
"additionalProperties": false
},
"CreatePipelineDraftRequest": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"pipelineType": {
"$ref": "#/components/schemas/PipelineType"
},
"pipelineDraftMode": {
"$ref": "#/components/schemas/PipelineDraftMode"
},
"graphComponentsMode": {
"$ref": "#/components/schemas/GraphComponentsMode"
},
"subPipelinesInfo": {
"$ref": "#/components/schemas/SubPipelinesInfo"
},
"flattenedSubGraphs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineSubDraft"
},
"nullable": true
},
"pipelineParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"graph": {
"$ref": "#/components/schemas/GraphDraftEntity"
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"moduleNodeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeRunSetting"
},
"nullable": true
},
"moduleNodeUIInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeUIInputSetting"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"datasetAccessModes": {
"$ref": "#/components/schemas/DatasetAccessModes"
}
},
"additionalProperties": false
},
"CreatePipelineJobScheduleDto": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"pipelineJobName": {
"type": "string",
"nullable": true
},
"pipelineJobRuntimeSettings": {
"$ref": "#/components/schemas/PipelineJobRuntimeBasicSettings"
},
"displayName": {
"type": "string",
"nullable": true
},
"triggerType": {
"$ref": "#/components/schemas/TriggerType"
},
"recurrence": {
"$ref": "#/components/schemas/Recurrence"
},
"cron": {
"$ref": "#/components/schemas/Cron"
},
"status": {
"$ref": "#/components/schemas/ScheduleStatus"
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"CreatePublishedPipelineRequest": {
"type": "object",
"properties": {
"usePipelineEndpoint": {
"type": "boolean"
},
"pipelineName": {
"type": "string",
"nullable": true
},
"pipelineDescription": {
"type": "string",
"nullable": true
},
"useExistingPipelineEndpoint": {
"type": "boolean"
},
"pipelineEndpointName": {
"type": "string",
"nullable": true
},
"pipelineEndpointDescription": {
"type": "string",
"nullable": true
},
"setAsDefaultPipelineForEndpoint": {
"type": "boolean"
},
"stepTags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"pipelineParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"enableNotification": {
"type": "boolean",
"nullable": true
},
"subPipelinesInfo": {
"$ref": "#/components/schemas/SubPipelinesInfo"
},
"displayName": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"graph": {
"$ref": "#/components/schemas/GraphDraftEntity"
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"moduleNodeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeRunSetting"
},
"nullable": true
},
"moduleNodeUIInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeUIInputSetting"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"datasetAccessModes": {
"$ref": "#/components/schemas/DatasetAccessModes"
}
},
"additionalProperties": false
},
"CreateRealTimeEndpointRequest": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"computeInfo": {
"$ref": "#/components/schemas/ComputeInfo"
},
"description": {
"type": "string",
"nullable": true
},
"linkedPipelineDraftId": {
"type": "string",
"nullable": true
},
"linkedPipelineRunId": {
"type": "string",
"nullable": true
},
"aksAdvanceSettings": {
"$ref": "#/components/schemas/AKSAdvanceSettings"
},
"aciAdvanceSettings": {
"$ref": "#/components/schemas/ACIAdvanceSettings"
},
"linkedTrainingPipelineRunId": {
"type": "string",
"nullable": true
},
"linkedExperimentName": {
"type": "string",
"nullable": true
},
"graphNodesRunIdMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"workflow": {
"$ref": "#/components/schemas/PipelineGraph"
},
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/InputOutputPortMetadata"
},
"nullable": true
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/InputOutputPortMetadata"
},
"nullable": true
},
"exampleRequest": {
"$ref": "#/components/schemas/ExampleRequest"
},
"userStorageConnectionString": {
"type": "string",
"nullable": true
},
"userStorageEndpointUri": {
"type": "string",
"format": "uri",
"nullable": true
},
"userStorageWorkspaceSaiToken": {
"type": "string",
"nullable": true
},
"userStorageContainerName": {
"type": "string",
"nullable": true
},
"pipelineRunId": {
"type": "string",
"nullable": true
},
"rootPipelineRunId": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"CreatedBy": {
"type": "object",
"properties": {
"userObjectId": {
"type": "string",
"nullable": true
},
"userTenantId": {
"type": "string",
"nullable": true
},
"userName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"CreatedFromDto": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/CreatedFromType"
},
"locationType": {
"$ref": "#/components/schemas/CreatedFromLocationType"
},
"location": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"CreatedFromLocationType": {
"enum": [
"ArtifactId"
],
"type": "string"
},
"CreatedFromType": {
"enum": [
"Notebook"
],
"type": "string"
},
"CreationContext": {
"type": "object",
"properties": {
"createdTime": {
"type": "string",
"format": "date-time"
},
"createdBy": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"creationSource": {
"type": "string",
"nullable": true
}
}
},
"Cron": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"nullable": true
},
"endTime": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"nullable": true
},
"timeZone": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"CustomConnectionConfig": {
"type": "object",
"properties": {
"configValueType": {
"$ref": "#/components/schemas/ConfigValueType"
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"CustomReference": {
"type": "object",
"properties": {
"amlDataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DBFSReference": {
"type": "object",
"properties": {
"relativePath": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Data": {
"type": "object",
"properties": {
"dataLocation": {
"$ref": "#/components/schemas/ExecutionDataLocation"
},
"mechanism": {
"$ref": "#/components/schemas/DeliveryMechanism"
},
"environmentVariableName": {
"type": "string",
"nullable": true
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"overwrite": {
"type": "boolean"
},
"options": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"DataBindingMode": {
"enum": [
"Mount",
"Download",
"Upload",
"ReadOnlyMount",
"ReadWriteMount",
"Direct",
"EvalMount",
"EvalDownload"
],
"type": "string"
},
"DataCategory": {
"enum": [
"All",
"Dataset",
"Model"
],
"type": "string"
},
"DataCopyMode": {
"enum": [
"MergeWithOverwrite",
"FailIfConflict"
],
"type": "string"
},
"DataInfo": {
"type": "object",
"properties": {
"feedName": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"dataSourceType": {
"$ref": "#/components/schemas/DataSourceType"
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"dataTypeId": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time",
"nullable": true
},
"modifiedDate": {
"type": "string",
"format": "date-time",
"nullable": true
},
"registeredBy": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"createdByStudio": {
"type": "boolean",
"nullable": true
},
"dataReferenceType": {
"$ref": "#/components/schemas/DataReferenceType"
},
"datasetType": {
"type": "string",
"nullable": true
},
"savedDatasetId": {
"type": "string",
"nullable": true
},
"datasetVersionId": {
"type": "string",
"nullable": true
},
"isVisible": {
"type": "boolean"
},
"isRegistered": {
"type": "boolean"
},
"properties": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
},
"connectionString": {
"type": "string",
"nullable": true
},
"containerName": {
"type": "string",
"nullable": true
},
"dataStorageEndpointUri": {
"type": "string",
"format": "uri",
"nullable": true
},
"workspaceSaiToken": {
"type": "string",
"nullable": true
},
"amlDatasetDataFlow": {
"type": "string",
"nullable": true
},
"systemData": {
"$ref": "#/components/schemas/SystemData"
},
"armId": {
"type": "string",
"nullable": true
},
"assetId": {
"type": "string",
"nullable": true
},
"assetUri": {
"type": "string",
"nullable": true
},
"assetType": {
"type": "string",
"nullable": true
},
"isDataV2": {
"type": "boolean",
"nullable": true
},
"assetScopeType": {
"$ref": "#/components/schemas/AssetScopeTypes"
},
"pipelineRunId": {
"type": "string",
"nullable": true
},
"moduleNodeId": {
"type": "string",
"nullable": true
},
"outputPortName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DataLocation": {
"type": "object",
"properties": {
"storageType": {
"$ref": "#/components/schemas/DataLocationStorageType"
},
"storageId": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataReference": {
"$ref": "#/components/schemas/DataReference"
},
"amlDataset": {
"$ref": "#/components/schemas/AmlDataset"
},
"assetDefinition": {
"$ref": "#/components/schemas/AssetDefinition"
}
},
"additionalProperties": false
},
"DataLocationStorageType": {
"enum": [
"None",
"AzureBlob",
"Artifact",
"Snapshot",
"SavedAmlDataset",
"Asset"
],
"type": "string"
},
"DataPath": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"sqlDataPath": {
"$ref": "#/components/schemas/SqlDataPath"
}
},
"additionalProperties": false
},
"DataPathParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"documentation": {
"type": "string",
"nullable": true
},
"defaultValue": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"isOptional": {
"type": "boolean"
},
"dataTypeId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DataPortDto": {
"type": "object",
"properties": {
"dataPortType": {
"$ref": "#/components/schemas/DataPortType"
},
"dataPortName": {
"type": "string",
"nullable": true
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataStoreIntellectualPropertyAccessMode": {
"$ref": "#/components/schemas/IntellectualPropertyAccessMode"
},
"dataStoreIntellectualPropertyPublisher": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DataPortType": {
"enum": [
"Input",
"Output"
],
"type": "string"
},
"DataReference": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/DataReferenceType"
},
"azureBlobReference": {
"$ref": "#/components/schemas/AzureBlobReference"
},
"azureDataLakeReference": {
"$ref": "#/components/schemas/AzureDataLakeReference"
},
"azureFilesReference": {
"$ref": "#/components/schemas/AzureFilesReference"
},
"azureSqlDatabaseReference": {
"$ref": "#/components/schemas/AzureDatabaseReference"
},
"azurePostgresDatabaseReference": {
"$ref": "#/components/schemas/AzureDatabaseReference"
},
"azureDataLakeGen2Reference": {
"$ref": "#/components/schemas/AzureDataLakeGen2Reference"
},
"dbfsReference": {
"$ref": "#/components/schemas/DBFSReference"
},
"azureMySqlDatabaseReference": {
"$ref": "#/components/schemas/AzureDatabaseReference"
},
"customReference": {
"$ref": "#/components/schemas/CustomReference"
},
"hdfsReference": {
"$ref": "#/components/schemas/HdfsReference"
}
},
"additionalProperties": false
},
"DataReferenceConfiguration": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
},
"mode": {
"$ref": "#/components/schemas/DataStoreMode"
},
"pathOnDataStore": {
"type": "string",
"nullable": true
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"overwrite": {
"type": "boolean"
}
},
"additionalProperties": false
},
"DataReferenceType": {
"enum": [
"None",
"AzureBlob",
"AzureDataLake",
"AzureFiles",
"AzureSqlDatabase",
"AzurePostgresDatabase",
"AzureDataLakeGen2",
"DBFS",
"AzureMySqlDatabase",
"Custom",
"Hdfs"
],
"type": "string"
},
"DataSetDefinition": {
"type": "object",
"properties": {
"dataTypeShortName": {
"type": "string",
"nullable": true
},
"parameterName": {
"type": "string",
"nullable": true
},
"value": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
}
},
"additionalProperties": false
},
"DataSetDefinitionValue": {
"type": "object",
"properties": {
"literalValue": {
"$ref": "#/components/schemas/DataPath"
},
"dataSetReference": {
"$ref": "#/components/schemas/RegisteredDataSetReference"
},
"savedDataSetReference": {
"$ref": "#/components/schemas/SavedDataSetReference"
},
"assetDefinition": {
"$ref": "#/components/schemas/AssetDefinition"
}
},
"additionalProperties": false
},
"DataSetPathParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"documentation": {
"type": "string",
"nullable": true
},
"defaultValue": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"isOptional": {
"type": "boolean"
}
},
"additionalProperties": false
},
"DataSettings": {
"type": "object",
"properties": {
"targetColumnName": {
"type": "string",
"nullable": true
},
"weightColumnName": {
"type": "string",
"nullable": true
},
"positiveLabel": {
"type": "string",
"nullable": true
},
"validationData": {
"$ref": "#/components/schemas/ValidationDataSettings"
},
"testData": {
"$ref": "#/components/schemas/TestDataSettings"
}
},
"additionalProperties": false
},
"DataSourceType": {
"enum": [
"None",
"PipelineDataSource",
"AmlDataset",
"GlobalDataset",
"FeedModel",
"FeedDataset",
"AmlDataVersion",
"AMLModelVersion"
],
"type": "string"
},
"DataStoreMode": {
"enum": [
"Mount",
"Download",
"Upload"
],
"type": "string"
},
"DataTransferCloudConfiguration": {
"type": "object",
"properties": {
"AllowOverwrite": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"DataTransferSink": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/DataTransferStorageType"
},
"fileSystem": {
"$ref": "#/components/schemas/FileSystem"
},
"databaseSink": {
"$ref": "#/components/schemas/DatabaseSink"
}
},
"additionalProperties": false
},
"DataTransferSource": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/DataTransferStorageType"
},
"fileSystem": {
"$ref": "#/components/schemas/FileSystem"
},
"databaseSource": {
"$ref": "#/components/schemas/DatabaseSource"
}
},
"additionalProperties": false
},
"DataTransferStorageType": {
"enum": [
"DataBase",
"FileSystem"
],
"type": "string"
},
"DataTransferTaskType": {
"enum": [
"ImportData",
"ExportData",
"CopyData"
],
"type": "string"
},
"DataTransferV2CloudSetting": {
"type": "object",
"properties": {
"taskType": {
"$ref": "#/components/schemas/DataTransferTaskType"
},
"ComputeName": {
"type": "string",
"nullable": true
},
"CopyDataTask": {
"$ref": "#/components/schemas/CopyDataTask"
},
"ImportDataTask": {
"$ref": "#/components/schemas/ImportDataTask"
},
"ExportDataTask": {
"$ref": "#/components/schemas/ExportDataTask"
},
"DataTransferSources": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataTransferSource"
},
"description": "This is a dictionary",
"nullable": true
},
"DataTransferSinks": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataTransferSink"
},
"description": "This is a dictionary",
"nullable": true
},
"DataCopyMode": {
"$ref": "#/components/schemas/DataCopyMode"
}
},
"additionalProperties": false
},
"DataTypeCreationInfo": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"isDirectory": {
"type": "boolean"
},
"fileExtension": {
"type": "string",
"nullable": true
},
"parentDataTypeIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"DataTypeMechanism": {
"enum": [
"ErrorWhenNotExisting",
"RegisterWhenNotExisting",
"RegisterBuildinDataTypeOnly"
],
"type": "string"
},
"DatabaseSink": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"nullable": true
},
"table": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatabaseSource": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"nullable": true
},
"query": {
"type": "string",
"nullable": true
},
"storedProcedureName": {
"type": "string",
"nullable": true
},
"storedProcedureParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StoredProcedureParameter"
},
"nullable": true
}
},
"additionalProperties": false
},
"DatabricksComputeInfo": {
"type": "object",
"properties": {
"existingClusterId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatabricksConfiguration": {
"type": "object",
"properties": {
"workers": {
"type": "integer",
"format": "int32"
},
"minimumWorkerCount": {
"type": "integer",
"format": "int32"
},
"maxMumWorkerCount": {
"type": "integer",
"format": "int32"
},
"sparkVersion": {
"type": "string",
"nullable": true
},
"nodeTypeId": {
"type": "string",
"nullable": true
},
"sparkConf": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"sparkEnvVars": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"clusterLogConfDbfsPath": {
"type": "string",
"nullable": true
},
"dbfsInitScripts": {
"type": "array",
"items": {
"$ref": "#/components/schemas/InitScriptInfoDto"
},
"nullable": true
},
"instancePoolId": {
"type": "string",
"nullable": true
},
"timeoutSeconds": {
"type": "integer",
"format": "int32"
},
"notebookTask": {
"$ref": "#/components/schemas/NoteBookTaskDto"
},
"sparkPythonTask": {
"$ref": "#/components/schemas/SparkPythonTaskDto"
},
"sparkJarTask": {
"$ref": "#/components/schemas/SparkJarTaskDto"
},
"sparkSubmitTask": {
"$ref": "#/components/schemas/SparkSubmitTaskDto"
},
"jarLibraries": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"eggLibraries": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"whlLibraries": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"pypiLibraries": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PythonPyPiOrRCranLibraryDto"
},
"nullable": true
},
"rCranLibraries": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PythonPyPiOrRCranLibraryDto"
},
"nullable": true
},
"mavenLibraries": {
"type": "array",
"items": {
"$ref": "#/components/schemas/MavenLibraryDto"
},
"nullable": true
},
"libraries": {
"type": "array",
"items": { },
"nullable": true
},
"linkedADBWorkspaceMetadata": {
"$ref": "#/components/schemas/LinkedADBWorkspaceMetadata"
},
"databrickResourceId": {
"type": "string",
"nullable": true
},
"autoScale": {
"type": "boolean"
}
},
"additionalProperties": false
},
"DatacacheConfiguration": {
"type": "object",
"properties": {
"datacacheId": {
"type": "string",
"format": "uuid"
},
"datacacheStore": {
"type": "string",
"nullable": true
},
"datasetId": {
"type": "string",
"format": "uuid"
},
"mode": {
"$ref": "#/components/schemas/DatacacheMode"
},
"replica": {
"type": "integer",
"format": "int32",
"nullable": true
},
"failureFallback": {
"type": "boolean"
},
"pathOnCompute": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatacacheMode": {
"enum": [
"Mount"
],
"type": "string"
},
"DatasetAccessModes": {
"enum": [
"Default",
"DatasetInDpv2",
"AssetInDpv2",
"DatasetInDesignerUI",
"AssetInDesignerUI",
"DatasetInDpv2WithDatasetInDesignerUI",
"AssetInDpv2WithDatasetInDesignerUI",
"AssetInDpv2WithAssetInDesignerUI",
"DatasetAndAssetInDpv2WithDatasetInDesignerUI",
"Dataset",
"Asset"
],
"type": "string"
},
"DatasetConsumptionType": {
"enum": [
"RunInput",
"Reference"
],
"type": "string"
},
"DatasetDeliveryMechanism": {
"enum": [
"Direct",
"Mount",
"Download",
"Hdfs"
],
"type": "string"
},
"DatasetIdentifier": {
"type": "object",
"properties": {
"savedId": {
"type": "string",
"nullable": true
},
"registeredId": {
"type": "string",
"nullable": true
},
"registeredVersion": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatasetInputDetails": {
"type": "object",
"properties": {
"inputName": {
"type": "string",
"nullable": true
},
"mechanism": {
"$ref": "#/components/schemas/DatasetDeliveryMechanism"
},
"pathOnCompute": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatasetLineage": {
"type": "object",
"properties": {
"identifier": {
"$ref": "#/components/schemas/DatasetIdentifier"
},
"consumptionType": {
"$ref": "#/components/schemas/DatasetConsumptionType"
},
"inputDetails": {
"$ref": "#/components/schemas/DatasetInputDetails"
}
},
"additionalProperties": false
},
"DatasetOutput": {
"type": "object",
"properties": {
"datasetType": {
"$ref": "#/components/schemas/DatasetType"
},
"datasetRegistration": {
"$ref": "#/components/schemas/DatasetRegistration"
},
"datasetOutputOptions": {
"$ref": "#/components/schemas/DatasetOutputOptions"
}
},
"additionalProperties": false
},
"DatasetOutputDetails": {
"type": "object",
"properties": {
"outputName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatasetOutputOptions": {
"type": "object",
"properties": {
"sourceGlobs": {
"$ref": "#/components/schemas/GlobsOptions"
},
"pathOnDatastore": {
"type": "string",
"nullable": true
},
"PathOnDatastoreParameterAssignment": {
"$ref": "#/components/schemas/ParameterAssignment"
}
},
"additionalProperties": false
},
"DatasetOutputType": {
"enum": [
"RunOutput",
"Reference"
],
"type": "string"
},
"DatasetRegistration": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"createNewVersion": {
"type": "boolean"
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"additionalTransformations": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatasetRegistrationOptions": {
"type": "object",
"properties": {
"additionalTransformation": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DatasetType": {
"enum": [
"File",
"Tabular"
],
"type": "string"
},
"DatastoreSetting": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DbfsStorageInfoDto": {
"type": "object",
"properties": {
"destination": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DebugInfoResponse": {
"type": "object",
"properties": {
"type": {
"type": "string",
"description": "The type.",
"nullable": true
},
"message": {
"type": "string",
"description": "The message.",
"nullable": true
},
"stackTrace": {
"type": "string",
"description": "The stack trace.",
"nullable": true
},
"innerException": {
"$ref": "#/components/schemas/DebugInfoResponse"
},
"data": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
},
"errorResponse": {
"$ref": "#/components/schemas/ErrorResponse"
}
},
"additionalProperties": false,
"description": "Internal debugging information not intended for external clients."
},
"DeliveryMechanism": {
"enum": [
"Direct",
"Mount",
"Download",
"Hdfs"
],
"type": "string"
},
"DeployFlowRequest": {
"type": "object",
"properties": {
"sourceResourceId": {
"type": "string",
"nullable": true
},
"sourceFlowRunId": {
"type": "string",
"nullable": true
},
"sourceFlowId": {
"type": "string",
"nullable": true
},
"flow": {
"$ref": "#/components/schemas/Flow"
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"flowSubmitRunSettings": {
"$ref": "#/components/schemas/FlowSubmitRunSettings"
},
"outputNamesIncludedInEndpointResponse": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"endpointName": {
"type": "string",
"nullable": true
},
"endpointDescription": {
"type": "string",
"nullable": true
},
"authMode": {
"$ref": "#/components/schemas/EndpointAuthMode"
},
"identity": {
"$ref": "#/components/schemas/ManagedServiceIdentity"
},
"endpointTags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enablePublicNetworkAccess": {
"type": "boolean",
"nullable": true
},
"connectionOverrides": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionOverrideSetting"
},
"nullable": true
},
"useWorkspaceConnection": {
"type": "boolean"
},
"deploymentName": {
"type": "string",
"nullable": true
},
"environment": {
"type": "string",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"deploymentTags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"appInsightsEnabled": {
"type": "boolean"
},
"enableModelDataCollector": {
"type": "boolean"
},
"skipUpdateTrafficToFull": {
"type": "boolean"
},
"enableStreamingResponse": {
"type": "boolean"
},
"instanceType": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32"
},
"autoGrantConnectionPermission": {
"type": "boolean"
}
},
"additionalProperties": false
},
"DeploymentInfo": {
"type": "object",
"properties": {
"operationId": {
"type": "string",
"nullable": true
},
"serviceId": {
"type": "string",
"nullable": true
},
"serviceName": {
"type": "string",
"nullable": true
},
"statusDetail": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DistributionConfiguration": {
"type": "object",
"properties": {
"distributionType": {
"$ref": "#/components/schemas/DistributionType"
}
},
"additionalProperties": false
},
"DistributionParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"inputType": {
"$ref": "#/components/schemas/DistributionParameterEnum"
}
},
"additionalProperties": false
},
"DistributionParameterEnum": {
"enum": [
"Text",
"Number"
],
"type": "string"
},
"DistributionType": {
"enum": [
"PyTorch",
"TensorFlow",
"Mpi",
"Ray"
],
"type": "string"
},
"DoWhileControlFlowInfo": {
"type": "object",
"properties": {
"outputPortNameToInputPortNamesMapping": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"nullable": true
},
"conditionOutputPortName": {
"type": "string",
"nullable": true
},
"runSettings": {
"$ref": "#/components/schemas/DoWhileControlFlowRunSettings"
}
},
"additionalProperties": false
},
"DoWhileControlFlowRunSettings": {
"type": "object",
"properties": {
"maxLoopIterationCount": {
"$ref": "#/components/schemas/ParameterAssignment"
}
},
"additionalProperties": false
},
"DockerBuildContext": {
"type": "object",
"properties": {
"locationType": {
"$ref": "#/components/schemas/BuildContextLocationType"
},
"location": {
"type": "string",
"nullable": true
},
"dockerfilePath": {
"type": "string",
"default": "Dockerfile",
"nullable": true
}
},
"additionalProperties": false
},
"DockerConfiguration": {
"type": "object",
"properties": {
"useDocker": {
"type": "boolean",
"nullable": true
},
"sharedVolumes": {
"type": "boolean",
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"DockerImagePlatform": {
"type": "object",
"properties": {
"os": {
"type": "string",
"nullable": true
},
"architecture": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"DockerSection": {
"type": "object",
"properties": {
"baseImage": {
"type": "string",
"nullable": true
},
"platform": {
"$ref": "#/components/schemas/DockerImagePlatform"
},
"baseDockerfile": {
"type": "string",
"nullable": true
},
"buildContext": {
"$ref": "#/components/schemas/DockerBuildContext"
},
"baseImageRegistry": {
"$ref": "#/components/schemas/ContainerRegistry"
}
},
"additionalProperties": false
},
"DockerSettingConfiguration": {
"type": "object",
"properties": {
"useDocker": {
"type": "boolean",
"nullable": true
},
"sharedVolumes": {
"type": "boolean",
"nullable": true
},
"shmSize": {
"type": "string",
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"DownloadResourceInfo": {
"type": "object",
"properties": {
"downloadUrl": {
"type": "string",
"nullable": true
},
"size": {
"type": "integer",
"format": "int64"
}
},
"additionalProperties": false
},
"EPRPipelineRunErrorClassificationRequest": {
"type": "object",
"properties": {
"rootRunId": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"taskResult": {
"type": "string",
"nullable": true
},
"failureType": {
"type": "string",
"nullable": true
},
"failureName": {
"type": "string",
"nullable": true
},
"responsibleTeam": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ETag": {
"type": "object",
"additionalProperties": false
},
"EarlyTerminationPolicyType": {
"enum": [
"Bandit",
"MedianStopping",
"TruncationSelection"
],
"type": "string"
},
"EmailNotificationEnableType": {
"enum": [
"JobCompleted",
"JobFailed",
"JobCancelled"
],
"type": "string"
},
"EndpointAuthMode": {
"enum": [
"AMLToken",
"Key",
"AADToken"
],
"type": "string"
},
"EndpointSetting": {
"type": "object",
"properties": {
"type": {
"type": "string",
"nullable": true
},
"port": {
"type": "integer",
"format": "int32",
"nullable": true
},
"sslThumbprint": {
"type": "string",
"nullable": true
},
"endpoint": {
"type": "string",
"nullable": true
},
"proxyEndpoint": {
"type": "string",
"nullable": true
},
"status": {
"type": "string",
"nullable": true
},
"errorMessage": {
"type": "string",
"nullable": true
},
"enabled": {
"type": "boolean",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"nodes": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"EntityInterface": {
"type": "object",
"properties": {
"parameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Parameter"
},
"nullable": true
},
"ports": {
"$ref": "#/components/schemas/NodePortInterface"
},
"metadataParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Parameter"
},
"nullable": true
},
"dataPathParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DataPathParameter"
},
"nullable": true
},
"dataPathParameterList": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DataSetPathParameter"
},
"nullable": true
},
"AssetOutputSettingsParameterList": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AssetOutputSettingsParameter"
},
"nullable": true
}
},
"additionalProperties": false
},
"EntityKind": {
"enum": [
"Invalid",
"LineageRoot",
"Versioned",
"Unversioned"
],
"type": "string"
},
"EntityStatus": {
"enum": [
"Active",
"Deprecated",
"Disabled"
],
"type": "string"
},
"EntrySetting": {
"type": "object",
"properties": {
"file": {
"type": "string",
"nullable": true
},
"className": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"EnumParameterRule": {
"type": "object",
"properties": {
"validValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"EnvironmentConfiguration": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"useEnvironmentDefinition": {
"type": "boolean"
},
"environmentDefinitionString": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"EnvironmentDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"assetId": {
"type": "string",
"nullable": true
},
"autoRebuild": {
"type": "boolean",
"nullable": true
},
"python": {
"$ref": "#/components/schemas/PythonSection"
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"docker": {
"$ref": "#/components/schemas/DockerSection"
},
"spark": {
"$ref": "#/components/schemas/SparkSection"
},
"r": {
"$ref": "#/components/schemas/RSection"
},
"inferencingStackVersion": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"EnvironmentDefinitionDto": {
"type": "object",
"properties": {
"environmentName": {
"type": "string",
"nullable": true
},
"environmentVersion": {
"type": "string",
"nullable": true
},
"intellectualPropertyPublisher": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ErrorAdditionalInfo": {
"type": "object",
"properties": {
"type": {
"type": "string",
"description": "The additional info type.",
"nullable": true
},
"info": {
"description": "The additional info.",
"nullable": true
}
},
"additionalProperties": false,
"description": "The resource management error additional info."
},
"ErrorHandlingMode": {
"enum": [
"DefaultInterpolation",
"CustomerFacingInterpolation"
],
"type": "string"
},
"ErrorResponse": {
"type": "object",
"properties": {
"error": {
"$ref": "#/components/schemas/RootError"
},
"correlation": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"description": "Dictionary containing correlation details for the error.",
"nullable": true
},
"environment": {
"type": "string",
"description": "The hosting environment.",
"nullable": true
},
"location": {
"type": "string",
"description": "The Azure region.",
"nullable": true
},
"time": {
"type": "string",
"description": "The time in UTC.",
"format": "date-time"
},
"componentName": {
"type": "string",
"description": "Component name where error originated/encountered.",
"nullable": true
}
},
"description": "The error response."
},
"EsCloudConfiguration": {
"type": "object",
"properties": {
"enableOutputToFileBasedOnDataTypeId": {
"type": "boolean",
"nullable": true
},
"environment": {
"$ref": "#/components/schemas/EnvironmentConfiguration"
},
"hyperDriveConfiguration": {
"$ref": "#/components/schemas/HyperDriveConfiguration"
},
"k8sConfig": {
"$ref": "#/components/schemas/K8sConfiguration"
},
"resourceConfig": {
"$ref": "#/components/schemas/AEVAResourceConfiguration"
},
"torchDistributedConfig": {
"$ref": "#/components/schemas/TorchDistributedConfiguration"
},
"targetSelectorConfig": {
"$ref": "#/components/schemas/TargetSelectorConfiguration"
},
"dockerConfig": {
"$ref": "#/components/schemas/DockerSettingConfiguration"
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"maxRunDurationSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/IdentitySetting"
},
"applicationEndpoints": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ApplicationEndpointConfiguration"
},
"nullable": true
},
"runConfig": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"EvaluationFlowRunSettings": {
"type": "object",
"properties": {
"flowRunId": {
"type": "string",
"nullable": true
},
"variantRunVariants": {
"type": "array",
"items": {
"$ref": "#/components/schemas/VariantIdentifier"
},
"nullable": true
},
"batch_inputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"inputUniversalLink": {
"type": "string",
"nullable": true
},
"dataInputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"flowRunOutputDirectory": {
"type": "string",
"nullable": true
},
"connectionOverrides": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionOverrideSetting"
},
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"connections": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary"
},
"description": "This is a dictionary",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputDataStore": {
"type": "string",
"nullable": true
},
"runDisplayNameGenerationType": {
"$ref": "#/components/schemas/RunDisplayNameGenerationType"
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"workerCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeoutInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"promptflowEngineType": {
"$ref": "#/components/schemas/PromptflowEngineType"
}
},
"additionalProperties": false
},
"ExampleRequest": {
"type": "object",
"properties": {
"inputs": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "array",
"items": { }
}
},
"description": "This is a dictionary",
"nullable": true
},
"globalParameters": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"ExecutionContextDto": {
"type": "object",
"properties": {
"executable": {
"type": "string",
"nullable": true
},
"userCode": {
"type": "string",
"nullable": true
},
"arguments": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ExecutionDataLocation": {
"type": "object",
"properties": {
"dataset": {
"$ref": "#/components/schemas/RunDatasetReference"
},
"dataPath": {
"$ref": "#/components/schemas/ExecutionDataPath"
},
"uri": {
"$ref": "#/components/schemas/UriReference"
},
"type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ExecutionDataPath": {
"type": "object",
"properties": {
"datastoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ExecutionGlobsOptions": {
"type": "object",
"properties": {
"globPatterns": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"ExecutionPhase": {
"enum": [
"Execution",
"Initialization",
"Finalization"
],
"type": "string"
},
"ExperimentComputeMetaInfo": {
"type": "object",
"properties": {
"currentNodeCount": {
"type": "integer",
"format": "int32"
},
"targetNodeCount": {
"type": "integer",
"format": "int32"
},
"maxNodeCount": {
"type": "integer",
"format": "int32"
},
"minNodeCount": {
"type": "integer",
"format": "int32"
},
"idleNodeCount": {
"type": "integer",
"format": "int32"
},
"runningNodeCount": {
"type": "integer",
"format": "int32"
},
"preparingNodeCount": {
"type": "integer",
"format": "int32"
},
"unusableNodeCount": {
"type": "integer",
"format": "int32"
},
"leavingNodeCount": {
"type": "integer",
"format": "int32"
},
"preemptedNodeCount": {
"type": "integer",
"format": "int32"
},
"vmSize": {
"type": "string",
"nullable": true
},
"location": {
"type": "string",
"nullable": true
},
"provisioningState": {
"type": "string",
"nullable": true
},
"state": {
"type": "string",
"nullable": true
},
"osType": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"createdByStudio": {
"type": "boolean"
},
"isGpuType": {
"type": "boolean"
},
"resourceId": {
"type": "string",
"nullable": true
},
"computeType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ExperimentInfo": {
"type": "object",
"properties": {
"experimentName": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ExportComponentMetaInfo": {
"type": "object",
"properties": {
"moduleEntity": {
"$ref": "#/components/schemas/ModuleEntity"
},
"moduleVersion": {
"type": "string",
"nullable": true
},
"isAnonymous": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"ExportDataTask": {
"type": "object",
"properties": {
"DataTransferSink": {
"$ref": "#/components/schemas/DataTransferSink"
}
},
"additionalProperties": false
},
"ExtensibleObject": {
"type": "object"
},
"FeaturizationMode": {
"enum": [
"Auto",
"Custom",
"Off"
],
"type": "string"
},
"FeaturizationSettings": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/FeaturizationMode"
},
"blockedTransformers": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"columnPurposes": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"dropColumns": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"transformerParams": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ColumnTransformer"
},
"nullable": true
},
"nullable": true
},
"datasetLanguage": {
"type": "string",
"nullable": true
},
"enableDnnFeaturization": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"FeedDto": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"sharingScopes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SharingScope"
},
"nullable": true
},
"supportedAssetTypes": {
"type": "object",
"properties": {
"Component": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"Model": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"Environment": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"Dataset": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"DataStore": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"SampleGraph": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"FlowTool": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"FlowToolSetting": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"FlowConnection": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"FlowSample": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
},
"FlowRuntimeSpec": {
"$ref": "#/components/schemas/AssetTypeMetaInfo"
}
},
"additionalProperties": false,
"nullable": true
},
"regionalWorkspaceStorage": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
},
"description": "This is a dictionary",
"nullable": true
},
"intellectualPropertyPublisher": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FileSystem": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"nullable": true
},
"path": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Flow": {
"type": "object",
"properties": {
"sourceResourceId": {
"type": "string",
"nullable": true
},
"flowGraph": {
"$ref": "#/components/schemas/FlowGraph"
},
"nodeVariants": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/NodeVariant"
},
"description": "This is a dictionary",
"nullable": true
},
"flowGraphLayout": {
"$ref": "#/components/schemas/FlowGraphLayout"
},
"bulkTestData": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"evaluationFlows": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowGraphReference"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"FlowAnnotations": {
"type": "object",
"properties": {
"flowName": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
},
"owner": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"isArchived": {
"type": "boolean"
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"archived": {
"type": "boolean"
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
}
},
"FlowBaseDto": {
"type": "object",
"properties": {
"flowId": {
"type": "string",
"nullable": true
},
"flowName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"experimentId": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
},
"owner": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"flowResourceId": {
"type": "string",
"nullable": true
},
"isArchived": {
"type": "boolean"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowDto": {
"type": "object",
"properties": {
"timestamp": {
"type": "string",
"format": "date-time",
"nullable": true
},
"eTag": {
"$ref": "#/components/schemas/ETag"
},
"flow": {
"$ref": "#/components/schemas/Flow"
},
"flowRunSettings": {
"$ref": "#/components/schemas/FlowRunSettings"
},
"flowRunResult": {
"$ref": "#/components/schemas/FlowRunResult"
},
"flowTestMode": {
"$ref": "#/components/schemas/FlowTestMode"
},
"flowTestInfos": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowTestInfo"
},
"nullable": true
},
"studioPortalEndpoint": {
"type": "string",
"nullable": true
},
"flowId": {
"type": "string",
"nullable": true
},
"flowName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"experimentId": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
},
"owner": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"flowResourceId": {
"type": "string",
"nullable": true
},
"isArchived": {
"type": "boolean"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowEnvironment": {
"type": "object",
"properties": {
"image": {
"type": "string",
"nullable": true
},
"python_requirements_txt": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowFeature": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"state": {
"type": "object",
"properties": {
"Runtime": {
"$ref": "#/components/schemas/FlowFeatureStateEnum"
},
"Executor": {
"$ref": "#/components/schemas/FlowFeatureStateEnum"
},
"PFS": {
"$ref": "#/components/schemas/FlowFeatureStateEnum"
}
},
"additionalProperties": false,
"nullable": true
}
},
"additionalProperties": false
},
"FlowFeatureStateEnum": {
"enum": [
"Ready",
"E2ETest"
],
"type": "string"
},
"FlowGraph": {
"type": "object",
"properties": {
"nodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Node"
},
"nullable": true
},
"tools": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Tool"
},
"nullable": true
},
"codes": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowInputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowOutputDefinition"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"FlowGraphAnnotationNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"content": {
"type": "string",
"nullable": true
},
"mentionedNodeNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"structuredContent": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowGraphLayout": {
"type": "object",
"properties": {
"nodeLayouts": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowNodeLayout"
},
"description": "This is a dictionary",
"nullable": true
},
"extendedData": {
"type": "string",
"nullable": true
},
"annotationNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowGraphAnnotationNode"
},
"nullable": true
},
"orientation": {
"$ref": "#/components/schemas/Orientation"
}
},
"additionalProperties": false
},
"FlowGraphReference": {
"type": "object",
"properties": {
"flowGraph": {
"$ref": "#/components/schemas/FlowGraph"
},
"referenceResourceId": {
"type": "string",
"nullable": true
},
"variant": {
"$ref": "#/components/schemas/VariantIdentifier"
}
},
"additionalProperties": false
},
"FlowIndexEntity": {
"type": "object",
"properties": {
"schemaId": {
"type": "string",
"nullable": true
},
"entityId": {
"type": "string",
"nullable": true
},
"kind": {
"$ref": "#/components/schemas/EntityKind"
},
"annotations": {
"$ref": "#/components/schemas/FlowAnnotations"
},
"properties": {
"$ref": "#/components/schemas/FlowProperties"
},
"internal": {
"$ref": "#/components/schemas/ExtensibleObject"
},
"updateSequence": {
"type": "integer",
"format": "int64"
},
"type": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true,
"readOnly": true
},
"entityContainerId": {
"type": "string",
"nullable": true,
"readOnly": true
},
"entityObjectId": {
"type": "string",
"nullable": true,
"readOnly": true
},
"resourceType": {
"type": "string",
"nullable": true,
"readOnly": true
},
"relationships": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Relationship"
},
"nullable": true
},
"assetId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowInputDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/ValueType"
},
"default": {
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"is_chat_input": {
"type": "boolean"
},
"is_chat_history": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"FlowLanguage": {
"enum": [
"Python",
"CSharp",
"TypeScript",
"JavaScript"
],
"type": "string"
},
"FlowNode": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/ToolType"
},
"source": {
"$ref": "#/components/schemas/NodeSource"
},
"inputs": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"activate": {
"$ref": "#/components/schemas/Activate"
},
"use_variants": {
"type": "boolean"
},
"comment": {
"type": "string",
"nullable": true
},
"api": {
"type": "string",
"nullable": true
},
"provider": {
"type": "string",
"nullable": true
},
"connection": {
"type": "string",
"nullable": true
},
"module": {
"type": "string",
"nullable": true
},
"aggregation": {
"type": "boolean"
}
},
"additionalProperties": false
},
"FlowNodeLayout": {
"type": "object",
"properties": {
"x": {
"type": "number",
"format": "float"
},
"y": {
"type": "number",
"format": "float"
},
"width": {
"type": "number",
"format": "float"
},
"height": {
"type": "number",
"format": "float"
},
"index": {
"type": "integer",
"format": "int32"
},
"extendedData": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowNodeVariant": {
"type": "object",
"properties": {
"default_variant_id": {
"type": "string",
"nullable": true
},
"variants": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowVariantNode"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"FlowOutputDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/ValueType"
},
"description": {
"type": "string",
"nullable": true
},
"reference": {
"type": "string",
"nullable": true
},
"evaluation_only": {
"type": "boolean"
},
"is_chat_output": {
"type": "boolean"
}
},
"additionalProperties": false
},
"FlowPatchOperationType": {
"enum": [
"ArchiveFlow",
"RestoreFlow",
"ExportFlowToFile"
],
"type": "string"
},
"FlowProperties": {
"type": "object",
"properties": {
"flowId": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"creationContext": {
"$ref": "#/components/schemas/CreationContext"
}
}
},
"FlowRunBasePath": {
"type": "object",
"properties": {
"outputDatastoreName": {
"type": "string",
"nullable": true
},
"basePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowRunInfo": {
"type": "object",
"properties": {
"flowGraph": {
"$ref": "#/components/schemas/FlowGraph"
},
"flowGraphLayout": {
"$ref": "#/components/schemas/FlowGraphLayout"
},
"flowName": {
"type": "string",
"nullable": true
},
"flowRunResourceId": {
"type": "string",
"nullable": true
},
"flowRunId": {
"type": "string",
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"batchInputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"flowRunType": {
"$ref": "#/components/schemas/FlowRunTypeEnum"
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"runtimeName": {
"type": "string",
"nullable": true
},
"bulkTestId": {
"type": "string",
"nullable": true
},
"createdBy": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"createdOn": {
"type": "string",
"format": "date-time",
"nullable": true
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputDatastoreName": {
"type": "string",
"nullable": true
},
"childRunBasePath": {
"type": "string",
"nullable": true
},
"workingDirectory": {
"type": "string",
"nullable": true
},
"flowDagFileRelativePath": {
"type": "string",
"nullable": true
},
"flowSnapshotId": {
"type": "string",
"nullable": true
},
"studioPortalEndpoint": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowRunMode": {
"enum": [
"Flow",
"SingleNode",
"FromNode",
"BulkTest",
"Eval",
"PairwiseEval",
"ExperimentTest",
"ExperimentEval"
],
"type": "string"
},
"FlowRunResult": {
"type": "object",
"properties": {
"flow_runs": {
"type": "array",
"items": { },
"nullable": true
},
"node_runs": {
"type": "array",
"items": { },
"nullable": true
},
"errorResponse": {
"$ref": "#/components/schemas/ErrorResponse"
},
"flowName": {
"type": "string",
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"flowRunId": {
"type": "string",
"nullable": true
},
"flowGraph": {
"$ref": "#/components/schemas/FlowGraph"
},
"flowGraphLayout": {
"$ref": "#/components/schemas/FlowGraphLayout"
},
"flowRunResourceId": {
"type": "string",
"nullable": true
},
"bulkTestId": {
"type": "string",
"nullable": true
},
"batchInputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"createdBy": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"createdOn": {
"type": "string",
"format": "date-time",
"nullable": true
},
"flowRunType": {
"$ref": "#/components/schemas/FlowRunTypeEnum"
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"runtimeName": {
"type": "string",
"nullable": true
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"flowRunLogs": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"flowTestMode": {
"$ref": "#/components/schemas/FlowTestMode"
},
"flowTestInfos": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowTestInfo"
},
"nullable": true
},
"workingDirectory": {
"type": "string",
"nullable": true
},
"flowDagFileRelativePath": {
"type": "string",
"nullable": true
},
"flowSnapshotId": {
"type": "string",
"nullable": true
},
"variantRunToEvaluationRunsIdMapping": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"FlowRunSettings": {
"type": "object",
"properties": {
"runMode": {
"$ref": "#/components/schemas/FlowRunMode"
},
"tuningNodeNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"tuningNodeSettings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/TuningNodeSetting"
},
"description": "This is a dictionary",
"nullable": true
},
"baselineVariantId": {
"type": "string",
"nullable": true
},
"defaultVariantId": {
"type": "string",
"nullable": true
},
"variants": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Node"
}
},
"description": "This is a dictionary",
"nullable": true
},
"nodeName": {
"type": "string",
"nullable": true
},
"isDefaultVariant": {
"type": "boolean"
},
"nodeVariantId": {
"type": "string",
"nullable": true
},
"nodeOutputPaths": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"baseFlowRunId": {
"type": "string",
"nullable": true
},
"flowTestInfos": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowTestInfo"
},
"nullable": true
},
"bulkTestId": {
"type": "string",
"nullable": true
},
"evaluationFlowRunSettings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/EvaluationFlowRunSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"bulkTestFlowId": {
"type": "string",
"nullable": true
},
"bulkTestFlowRunIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"batch_inputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"inputUniversalLink": {
"type": "string",
"nullable": true
},
"dataInputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"flowRunOutputDirectory": {
"type": "string",
"nullable": true
},
"connectionOverrides": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionOverrideSetting"
},
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"connections": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary"
},
"description": "This is a dictionary",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputDataStore": {
"type": "string",
"nullable": true
},
"runDisplayNameGenerationType": {
"$ref": "#/components/schemas/RunDisplayNameGenerationType"
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"workerCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeoutInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"promptflowEngineType": {
"$ref": "#/components/schemas/PromptflowEngineType"
}
},
"additionalProperties": false
},
"FlowRunSettingsBase": {
"type": "object",
"properties": {
"batch_inputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"inputUniversalLink": {
"type": "string",
"nullable": true
},
"dataInputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"flowRunOutputDirectory": {
"type": "string",
"nullable": true
},
"connectionOverrides": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionOverrideSetting"
},
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"connections": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary"
},
"description": "This is a dictionary",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputDataStore": {
"type": "string",
"nullable": true
},
"runDisplayNameGenerationType": {
"$ref": "#/components/schemas/RunDisplayNameGenerationType"
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"workerCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeoutInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"promptflowEngineType": {
"$ref": "#/components/schemas/PromptflowEngineType"
}
},
"additionalProperties": false
},
"FlowRunStatusEnum": {
"enum": [
"Started",
"Completed",
"Failed",
"Cancelled",
"NotStarted",
"Running",
"Queued",
"Paused",
"Unapproved",
"Starting",
"Preparing",
"CancelRequested",
"Pausing",
"Finalizing",
"Canceled",
"Bypassed"
],
"type": "string"
},
"FlowRunStatusResponse": {
"type": "object",
"properties": {
"flowRunStatus": {
"$ref": "#/components/schemas/FlowRunStatusEnum"
},
"lastCheckedTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"flowRunCreatedTime": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"FlowRunTypeEnum": {
"enum": [
"FlowRun",
"EvaluationRun",
"PairwiseEvaluationRun",
"SingleNodeRun",
"FromNodeRun"
],
"type": "string"
},
"FlowRuntimeCapability": {
"type": "object",
"properties": {
"flowFeatures": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowFeature"
},
"nullable": true
}
},
"additionalProperties": false
},
"FlowRuntimeDto": {
"type": "object",
"properties": {
"runtimeName": {
"type": "string",
"nullable": true
},
"runtimeDescription": {
"type": "string",
"nullable": true
},
"runtimeType": {
"$ref": "#/components/schemas/RuntimeType"
},
"environment": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/RuntimeStatusEnum"
},
"statusMessage": {
"type": "string",
"nullable": true
},
"error": {
"$ref": "#/components/schemas/ErrorResponse"
},
"fromExistingEndpoint": {
"type": "boolean"
},
"endpointName": {
"type": "string",
"nullable": true
},
"fromExistingDeployment": {
"type": "boolean"
},
"deploymentName": {
"type": "string",
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/ManagedServiceIdentity"
},
"instanceType": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32"
},
"computeInstanceName": {
"type": "string",
"nullable": true
},
"dockerImage": {
"type": "string",
"nullable": true
},
"publishedPort": {
"type": "integer",
"format": "int32"
},
"targetPort": {
"type": "integer",
"format": "int32"
},
"fromExistingCustomApp": {
"type": "boolean"
},
"customAppName": {
"type": "string",
"nullable": true
},
"assignedTo": {
"$ref": "#/components/schemas/AssignedUser"
},
"endpointUrl": {
"type": "string",
"nullable": true
},
"createdOn": {
"type": "string",
"format": "date-time"
},
"modifiedOn": {
"type": "string",
"format": "date-time"
},
"owner": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
}
},
"additionalProperties": false
},
"FlowSampleDto": {
"type": "object",
"properties": {
"sampleResourceId": {
"type": "string",
"nullable": true
},
"section": {
"$ref": "#/components/schemas/Section"
},
"indexNumber": {
"type": "integer",
"format": "int32",
"nullable": true
},
"flowName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"details": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"flow": {
"$ref": "#/components/schemas/Flow"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"flowRunSettings": {
"$ref": "#/components/schemas/FlowRunSettings"
},
"isArchived": {
"type": "boolean"
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowSessionDto": {
"type": "object",
"properties": {
"sessionId": {
"type": "string",
"nullable": true
},
"baseImage": {
"type": "string",
"nullable": true
},
"packages": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"computeName": {
"type": "string",
"nullable": true
},
"flowFeatures": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowFeature"
},
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"runtimeDescription": {
"type": "string",
"nullable": true
},
"runtimeType": {
"$ref": "#/components/schemas/RuntimeType"
},
"environment": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/RuntimeStatusEnum"
},
"statusMessage": {
"type": "string",
"nullable": true
},
"error": {
"$ref": "#/components/schemas/ErrorResponse"
},
"fromExistingEndpoint": {
"type": "boolean"
},
"endpointName": {
"type": "string",
"nullable": true
},
"fromExistingDeployment": {
"type": "boolean"
},
"deploymentName": {
"type": "string",
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/ManagedServiceIdentity"
},
"instanceType": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32"
},
"computeInstanceName": {
"type": "string",
"nullable": true
},
"dockerImage": {
"type": "string",
"nullable": true
},
"publishedPort": {
"type": "integer",
"format": "int32"
},
"targetPort": {
"type": "integer",
"format": "int32"
},
"fromExistingCustomApp": {
"type": "boolean"
},
"customAppName": {
"type": "string",
"nullable": true
},
"assignedTo": {
"$ref": "#/components/schemas/AssignedUser"
},
"endpointUrl": {
"type": "string",
"nullable": true
},
"createdOn": {
"type": "string",
"format": "date-time"
},
"modifiedOn": {
"type": "string",
"format": "date-time"
},
"owner": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
}
},
"additionalProperties": false
},
"FlowSnapshot": {
"type": "object",
"properties": {
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowInputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowOutputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"nodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowNode"
},
"nullable": true
},
"node_variants": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowNodeVariant"
},
"description": "This is a dictionary",
"nullable": true
},
"environment": {
"$ref": "#/components/schemas/FlowEnvironment"
},
"environment_variables": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
},
"language": {
"$ref": "#/components/schemas/FlowLanguage"
},
"path": {
"type": "string",
"nullable": true
},
"entry": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowSubmitRunSettings": {
"type": "object",
"properties": {
"nodeInputs": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
},
"runMode": {
"$ref": "#/components/schemas/FlowRunMode"
},
"tuningNodeNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"tuningNodeSettings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/TuningNodeSetting"
},
"description": "This is a dictionary",
"nullable": true
},
"baselineVariantId": {
"type": "string",
"nullable": true
},
"defaultVariantId": {
"type": "string",
"nullable": true
},
"variants": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Node"
}
},
"description": "This is a dictionary",
"nullable": true
},
"nodeName": {
"type": "string",
"nullable": true
},
"isDefaultVariant": {
"type": "boolean"
},
"nodeVariantId": {
"type": "string",
"nullable": true
},
"nodeOutputPaths": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"baseFlowRunId": {
"type": "string",
"nullable": true
},
"flowTestInfos": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowTestInfo"
},
"nullable": true
},
"bulkTestId": {
"type": "string",
"nullable": true
},
"evaluationFlowRunSettings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/EvaluationFlowRunSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"bulkTestFlowId": {
"type": "string",
"nullable": true
},
"bulkTestFlowRunIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"batch_inputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"inputUniversalLink": {
"type": "string",
"nullable": true
},
"dataInputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"flowRunOutputDirectory": {
"type": "string",
"nullable": true
},
"connectionOverrides": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionOverrideSetting"
},
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"connections": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary"
},
"description": "This is a dictionary",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputDataStore": {
"type": "string",
"nullable": true
},
"runDisplayNameGenerationType": {
"$ref": "#/components/schemas/RunDisplayNameGenerationType"
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"workerCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeoutInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"promptflowEngineType": {
"$ref": "#/components/schemas/PromptflowEngineType"
}
},
"additionalProperties": false
},
"FlowTestInfo": {
"type": "object",
"properties": {
"variantId": {
"type": "string",
"nullable": true
},
"tuningNodeName": {
"type": "string",
"nullable": true
},
"flowRunId": {
"type": "string",
"nullable": true
},
"flowTestStorageSetting": {
"$ref": "#/components/schemas/FlowTestStorageSetting"
},
"flowRunType": {
"$ref": "#/components/schemas/FlowRunTypeEnum"
},
"variantRunId": {
"type": "string",
"nullable": true
},
"evaluationName": {
"type": "string",
"nullable": true
},
"outputUniversalLink": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowTestMode": {
"enum": [
"Sync",
"Async"
],
"type": "string"
},
"FlowTestStorageSetting": {
"type": "object",
"properties": {
"storageAccountName": {
"type": "string",
"nullable": true
},
"blobContainerName": {
"type": "string",
"nullable": true
},
"flowArtifactsRootPath": {
"type": "string",
"nullable": true
},
"outputDatastoreName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"FlowToolSettingParameter": {
"type": "object",
"properties": {
"type": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ValueType"
},
"nullable": true
},
"default": {
"type": "string",
"nullable": true
},
"advanced": {
"type": "boolean",
"nullable": true
},
"enum": {
"type": "array",
"items": { },
"nullable": true
},
"model_list": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"text_box_size": {
"type": "integer",
"format": "int32",
"nullable": true
},
"capabilities": {
"$ref": "#/components/schemas/AzureOpenAIModelCapabilities"
},
"allow_manual_entry": {
"type": "boolean",
"nullable": true
},
"ui_hints": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"FlowToolsDto": {
"type": "object",
"properties": {
"package": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/Tool"
},
"description": "This is a dictionary",
"nullable": true
},
"code": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/Tool"
},
"description": "This is a dictionary",
"nullable": true
},
"errors": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ErrorResponse"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"FlowType": {
"enum": [
"Default",
"Evaluation",
"Chat",
"Rag"
],
"type": "string"
},
"FlowVariantNode": {
"type": "object",
"properties": {
"node": {
"$ref": "#/components/schemas/FlowNode"
},
"description": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ForecastHorizon": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/ForecastHorizonMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"ForecastHorizonMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"ForecastingSettings": {
"type": "object",
"properties": {
"countryOrRegionForHolidays": {
"type": "string",
"nullable": true
},
"timeColumnName": {
"type": "string",
"nullable": true
},
"targetLags": {
"$ref": "#/components/schemas/TargetLags"
},
"targetRollingWindowSize": {
"$ref": "#/components/schemas/TargetRollingWindowSize"
},
"forecastHorizon": {
"$ref": "#/components/schemas/ForecastHorizon"
},
"timeSeriesIdColumnNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"frequency": {
"type": "string",
"nullable": true
},
"featureLags": {
"type": "string",
"nullable": true
},
"seasonality": {
"$ref": "#/components/schemas/Seasonality"
},
"shortSeriesHandlingConfig": {
"$ref": "#/components/schemas/ShortSeriesHandlingConfiguration"
},
"useStl": {
"$ref": "#/components/schemas/UseStl"
},
"targetAggregateFunction": {
"$ref": "#/components/schemas/TargetAggregationFunction"
},
"cvStepSize": {
"type": "integer",
"format": "int32",
"nullable": true
},
"featuresUnknownAtForecastTime": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"Framework": {
"enum": [
"Python",
"PySpark",
"Cntk",
"TensorFlow",
"PyTorch",
"PySparkInteractive",
"R"
],
"type": "string"
},
"Frequency": {
"enum": [
"Month",
"Week",
"Day",
"Hour",
"Minute"
],
"type": "string"
},
"GeneralSettings": {
"type": "object",
"properties": {
"primaryMetric": {
"$ref": "#/components/schemas/PrimaryMetrics"
},
"taskType": {
"$ref": "#/components/schemas/TaskType"
},
"logVerbosity": {
"$ref": "#/components/schemas/LogVerbosity"
}
},
"additionalProperties": false
},
"GeneratePipelineComponentRequest": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"moduleScope": {
"$ref": "#/components/schemas/ModuleScope"
},
"isDeterministic": {
"type": "boolean",
"nullable": true
},
"category": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"setAsDefaultVersion": {
"type": "boolean"
},
"registryName": {
"type": "string",
"nullable": true
},
"graph": {
"$ref": "#/components/schemas/GraphDraftEntity"
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"moduleNodeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeRunSetting"
},
"nullable": true
},
"moduleNodeUIInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeUIInputSetting"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"datasetAccessModes": {
"$ref": "#/components/schemas/DatasetAccessModes"
}
},
"additionalProperties": false
},
"GenerateToolMetaRequest": {
"type": "object",
"properties": {
"tools": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ToolSourceMeta"
},
"description": "This is a dictionary",
"nullable": true
},
"working_dir": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"GetDynamicListRequest": {
"type": "object",
"properties": {
"func_path": {
"type": "string",
"nullable": true
},
"func_kwargs": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"GetRunDataResultDto": {
"type": "object",
"properties": {
"runMetadata": {
"$ref": "#/components/schemas/RunDto"
},
"runDefinition": {
"nullable": true
},
"jobSpecification": {
"nullable": true
},
"systemSettings": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"GetTrainingSessionDto": {
"type": "object",
"properties": {
"properties": {
"$ref": "#/components/schemas/SessionProperties"
},
"compute": {
"$ref": "#/components/schemas/ComputeContract"
}
},
"additionalProperties": false
},
"GlobalJobDispatcherConfiguration": {
"type": "object",
"properties": {
"vmSize": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"computeType": {
"$ref": "#/components/schemas/GlobalJobDispatcherSupportedComputeType"
},
"region": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"myResourceOnly": {
"type": "boolean"
},
"redispatchAllowed": {
"type": "boolean",
"nullable": true
},
"lowPriorityVMTolerant": {
"type": "boolean"
},
"vcList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"planId": {
"type": "string",
"nullable": true
},
"planRegionId": {
"type": "string",
"nullable": true
},
"vcBlockList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"clusterBlockList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"GlobalJobDispatcherSupportedComputeType": {
"enum": [
"AmlCompute",
"AmlK8s"
],
"type": "string"
},
"GlobsOptions": {
"type": "object",
"properties": {
"globPatterns": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"GraphAnnotationNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"content": {
"type": "string",
"nullable": true
},
"mentionedNodeNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"structuredContent": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"GraphComponentsMode": {
"enum": [
"Normal",
"AllDesignerBuildin",
"ContainsDesignerBuildin"
],
"type": "string"
},
"GraphControlNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"controlType": {
"$ref": "#/components/schemas/ControlType"
},
"controlParameter": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"GraphControlReferenceNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"comment": {
"type": "string",
"nullable": true
},
"controlFlowType": {
"$ref": "#/components/schemas/ControlFlowType"
},
"referenceNodeId": {
"type": "string",
"nullable": true
},
"doWhileControlFlowInfo": {
"$ref": "#/components/schemas/DoWhileControlFlowInfo"
},
"parallelForControlFlowInfo": {
"$ref": "#/components/schemas/ParallelForControlFlowInfo"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"GraphDatasetNode": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"datasetId": {
"type": "string",
"nullable": true
},
"dataPathParameterName": {
"type": "string",
"nullable": true
},
"dataSetDefinition": {
"$ref": "#/components/schemas/DataSetDefinition"
}
},
"additionalProperties": false
},
"GraphDatasetsLoadModes": {
"enum": [
"SkipDatasetsLoad",
"V1RegisteredDataset",
"V1SavedDataset",
"PersistDatasetsInfo",
"SubmissionNeededUpstreamDatasetOnly",
"SubmissionNeededInCompleteDatasetOnly",
"V2Asset",
"Submission",
"AllRegisteredData",
"AllData"
],
"type": "string"
},
"GraphDraftEntity": {
"type": "object",
"properties": {
"moduleNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNode"
},
"nullable": true
},
"datasetNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphDatasetNode"
},
"nullable": true
},
"subGraphNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphReferenceNode"
},
"nullable": true
},
"controlReferenceNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphControlReferenceNode"
},
"nullable": true
},
"controlNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphControlNode"
},
"nullable": true
},
"edges": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphEdge"
},
"nullable": true
},
"entityInterface": {
"$ref": "#/components/schemas/EntityInterface"
},
"graphLayout": {
"$ref": "#/components/schemas/GraphLayout"
},
"createdBy": {
"$ref": "#/components/schemas/CreatedBy"
},
"lastUpdatedBy": {
"$ref": "#/components/schemas/CreatedBy"
},
"defaultCompute": {
"$ref": "#/components/schemas/ComputeSetting"
},
"defaultDatastore": {
"$ref": "#/components/schemas/DatastoreSetting"
},
"defaultCloudPriority": {
"$ref": "#/components/schemas/CloudPrioritySetting"
},
"extendedProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"parentSubGraphModuleIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"GraphEdge": {
"type": "object",
"properties": {
"sourceOutputPort": {
"$ref": "#/components/schemas/PortInfo"
},
"destinationInputPort": {
"$ref": "#/components/schemas/PortInfo"
}
},
"additionalProperties": false
},
"GraphLayout": {
"type": "object",
"properties": {
"nodeLayouts": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/NodeLayout"
},
"description": "This is a dictionary",
"nullable": true
},
"extendedData": {
"type": "string",
"nullable": true
},
"annotationNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphAnnotationNode"
},
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"GraphLayoutCreationInfo": {
"type": "object",
"properties": {
"nodeLayouts": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/NodeLayout"
},
"description": "This is a dictionary",
"nullable": true
},
"extendedData": {
"type": "string",
"nullable": true
},
"annotationNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphAnnotationNode"
},
"nullable": true
}
},
"additionalProperties": false
},
"GraphModuleNode": {
"type": "object",
"properties": {
"moduleType": {
"$ref": "#/components/schemas/ModuleType"
},
"runconfig": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"moduleId": {
"type": "string",
"nullable": true
},
"comment": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"moduleParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"moduleMetadataParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"moduleOutputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/OutputSetting"
},
"nullable": true
},
"moduleInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/InputSetting"
},
"nullable": true
},
"useGraphDefaultCompute": {
"type": "boolean"
},
"useGraphDefaultDatastore": {
"type": "boolean"
},
"regenerateOutput": {
"type": "boolean"
},
"controlInputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ControlInput"
},
"nullable": true
},
"cloudSettings": {
"$ref": "#/components/schemas/CloudSettings"
},
"executionPhase": {
"$ref": "#/components/schemas/ExecutionPhase"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"GraphModuleNodeRunSetting": {
"type": "object",
"properties": {
"nodeId": {
"type": "string",
"nullable": true
},
"moduleId": {
"type": "string",
"nullable": true
},
"stepType": {
"type": "string",
"nullable": true
},
"runSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"GraphModuleNodeUIInputSetting": {
"type": "object",
"properties": {
"nodeId": {
"type": "string",
"nullable": true
},
"moduleId": {
"type": "string",
"nullable": true
},
"moduleInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/UIInputSetting"
},
"nullable": true
}
},
"additionalProperties": false
},
"GraphNodeStatusInfo": {
"type": "object",
"properties": {
"status": {
"$ref": "#/components/schemas/TaskStatusCode"
},
"runStatus": {
"$ref": "#/components/schemas/RunStatus"
},
"isBypassed": {
"type": "boolean"
},
"hasFailedChildRun": {
"type": "boolean"
},
"partiallyExecuted": {
"type": "boolean"
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"aetherStartTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"aetherEndTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"aetherCreationTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runHistoryStartTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runHistoryEndTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runHistoryCreationTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"reuseInfo": {
"$ref": "#/components/schemas/TaskReuseInfo"
},
"controlFlowInfo": {
"$ref": "#/components/schemas/TaskControlFlowInfo"
},
"statusCode": {
"$ref": "#/components/schemas/TaskStatusCode"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"creationTime": {
"type": "string",
"format": "date-time"
},
"scheduleTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"requestId": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"dataContainerId": {
"type": "string",
"nullable": true
},
"realTimeLogPath": {
"type": "string",
"nullable": true
},
"hasWarnings": {
"type": "boolean"
},
"compositeNodeId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"GraphReferenceNode": {
"type": "object",
"properties": {
"graphId": {
"type": "string",
"nullable": true
},
"defaultCompute": {
"$ref": "#/components/schemas/ComputeSetting"
},
"defaultDatastore": {
"$ref": "#/components/schemas/DatastoreSetting"
},
"id": {
"type": "string",
"nullable": true
},
"moduleId": {
"type": "string",
"nullable": true
},
"comment": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"moduleParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"moduleMetadataParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"moduleOutputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/OutputSetting"
},
"nullable": true
},
"moduleInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/InputSetting"
},
"nullable": true
},
"useGraphDefaultCompute": {
"type": "boolean"
},
"useGraphDefaultDatastore": {
"type": "boolean"
},
"regenerateOutput": {
"type": "boolean"
},
"controlInputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ControlInput"
},
"nullable": true
},
"cloudSettings": {
"$ref": "#/components/schemas/CloudSettings"
},
"executionPhase": {
"$ref": "#/components/schemas/ExecutionPhase"
},
"runAttribution": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"GraphSdkCodeType": {
"enum": [
"Python",
"JupyterNotebook",
"Unknown"
],
"type": "string"
},
"HdfsReference": {
"type": "object",
"properties": {
"amlDataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"HdiClusterComputeInfo": {
"type": "object",
"properties": {
"address": {
"type": "string",
"nullable": true
},
"username": {
"type": "string",
"nullable": true
},
"password": {
"type": "string",
"nullable": true
},
"privateKey": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"HdiConfiguration": {
"type": "object",
"properties": {
"yarnDeployMode": {
"$ref": "#/components/schemas/YarnDeployMode"
}
},
"additionalProperties": false
},
"HdiRunConfiguration": {
"type": "object",
"properties": {
"file": {
"type": "string",
"nullable": true
},
"className": {
"type": "string",
"nullable": true
},
"files": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"archives": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"jars": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"pyFiles": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"computeName": {
"type": "string",
"nullable": true
},
"queue": {
"type": "string",
"nullable": true
},
"driverMemory": {
"type": "string",
"nullable": true
},
"driverCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"executorMemory": {
"type": "string",
"nullable": true
},
"executorCores": {
"type": "integer",
"format": "int32",
"nullable": true
},
"numberExecutors": {
"type": "integer",
"format": "int32",
"nullable": true
},
"conf": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"name": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"HistoryConfiguration": {
"type": "object",
"properties": {
"outputCollection": {
"type": "boolean",
"default": true
},
"directoriesToWatch": {
"type": "array",
"items": {
"type": "string"
},
"default": [
"logs"
],
"nullable": true
},
"enableMLflowTracking": {
"type": "boolean",
"default": true
}
}
},
"HttpStatusCode": {
"enum": [
"Continue",
"SwitchingProtocols",
"Processing",
"EarlyHints",
"OK",
"Created",
"Accepted",
"NonAuthoritativeInformation",
"NoContent",
"ResetContent",
"PartialContent",
"MultiStatus",
"AlreadyReported",
"IMUsed",
"MultipleChoices",
"Ambiguous",
"MovedPermanently",
"Moved",
"Found",
"Redirect",
"SeeOther",
"RedirectMethod",
"NotModified",
"UseProxy",
"Unused",
"TemporaryRedirect",
"RedirectKeepVerb",
"PermanentRedirect",
"BadRequest",
"Unauthorized",
"PaymentRequired",
"Forbidden",
"NotFound",
"MethodNotAllowed",
"NotAcceptable",
"ProxyAuthenticationRequired",
"RequestTimeout",
"Conflict",
"Gone",
"LengthRequired",
"PreconditionFailed",
"RequestEntityTooLarge",
"RequestUriTooLong",
"UnsupportedMediaType",
"RequestedRangeNotSatisfiable",
"ExpectationFailed",
"MisdirectedRequest",
"UnprocessableEntity",
"Locked",
"FailedDependency",
"UpgradeRequired",
"PreconditionRequired",
"TooManyRequests",
"RequestHeaderFieldsTooLarge",
"UnavailableForLegalReasons",
"InternalServerError",
"NotImplemented",
"BadGateway",
"ServiceUnavailable",
"GatewayTimeout",
"HttpVersionNotSupported",
"VariantAlsoNegotiates",
"InsufficientStorage",
"LoopDetected",
"NotExtended",
"NetworkAuthenticationRequired"
],
"type": "string"
},
"HyperDriveConfiguration": {
"type": "object",
"properties": {
"hyperDriveRunConfig": {
"type": "string",
"nullable": true
},
"primaryMetricGoal": {
"type": "string",
"nullable": true
},
"primaryMetricName": {
"type": "string",
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ArgumentAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"IActionResult": {
"type": "object",
"additionalProperties": false
},
"ICheckableLongRunningOperationResponse": {
"type": "object",
"properties": {
"completionResult": {
"$ref": "#/components/schemas/LongRunningNullResponse"
},
"location": {
"type": "string",
"format": "uri",
"nullable": true
},
"operationResult": {
"type": "string",
"format": "uri",
"nullable": true
}
},
"additionalProperties": false
},
"IdentityConfiguration": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/IdentityType"
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"secret": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"IdentitySetting": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/AEVAIdentityType"
},
"clientId": {
"type": "string",
"format": "uuid",
"nullable": true
},
"objectId": {
"type": "string",
"format": "uuid",
"nullable": true
},
"msiResourceId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"IdentityType": {
"enum": [
"Managed",
"ServicePrincipal",
"AMLToken"
],
"type": "string"
},
"ImportDataTask": {
"type": "object",
"properties": {
"DataTransferSource": {
"$ref": "#/components/schemas/DataTransferSource"
}
},
"additionalProperties": false
},
"IndexedErrorResponse": {
"type": "object",
"properties": {
"code": {
"type": "string",
"nullable": true
},
"errorCodeHierarchy": {
"type": "string",
"nullable": true
},
"message": {
"type": "string",
"nullable": true
},
"time": {
"type": "string",
"format": "date-time",
"nullable": true
},
"componentName": {
"type": "string",
"nullable": true
},
"severity": {
"type": "integer",
"format": "int32",
"nullable": true
},
"detailsUri": {
"type": "string",
"format": "uri",
"nullable": true
},
"referenceCode": {
"type": "string",
"nullable": true
}
}
},
"InitScriptInfoDto": {
"type": "object",
"properties": {
"dbfs": {
"$ref": "#/components/schemas/DbfsStorageInfoDto"
}
},
"additionalProperties": false
},
"InnerErrorDetails": {
"type": "object",
"properties": {
"code": {
"type": "string",
"nullable": true
},
"message": {
"type": "string",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"InnerErrorResponse": {
"type": "object",
"properties": {
"code": {
"type": "string",
"description": "The error code.",
"nullable": true
},
"innerError": {
"$ref": "#/components/schemas/InnerErrorResponse"
}
},
"additionalProperties": false,
"description": "A nested structure of errors."
},
"InputAsset": {
"type": "object",
"properties": {
"asset": {
"$ref": "#/components/schemas/Asset"
},
"mechanism": {
"$ref": "#/components/schemas/DeliveryMechanism"
},
"environmentVariableName": {
"type": "string",
"nullable": true
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"overwrite": {
"type": "boolean"
},
"options": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"InputData": {
"type": "object",
"properties": {
"datasetId": {
"type": "string",
"nullable": true
},
"mode": {
"$ref": "#/components/schemas/DataBindingMode"
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"InputDataBinding": {
"type": "object",
"properties": {
"dataId": {
"type": "string",
"nullable": true
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"mode": {
"$ref": "#/components/schemas/DataBindingMode"
},
"description": {
"type": "string",
"nullable": true
},
"uri": {
"$ref": "#/components/schemas/MfeInternalUriReference"
},
"value": {
"type": "string",
"nullable": true
},
"assetUri": {
"type": "string",
"nullable": true
},
"jobInputType": {
"$ref": "#/components/schemas/JobInputType"
}
},
"additionalProperties": false
},
"InputDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ValueType"
},
"nullable": true
},
"default": {
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"enum": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enabled_by": {
"type": "string",
"nullable": true
},
"enabled_by_type": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ValueType"
},
"nullable": true
},
"enabled_by_value": {
"type": "array",
"items": { },
"nullable": true
},
"model_list": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"capabilities": {
"$ref": "#/components/schemas/AzureOpenAIModelCapabilities"
},
"dynamic_list": {
"$ref": "#/components/schemas/ToolInputDynamicList"
},
"allow_manual_entry": {
"type": "boolean"
},
"is_multi_select": {
"type": "boolean"
},
"generated_by": {
"$ref": "#/components/schemas/ToolInputGeneratedBy"
},
"input_type": {
"$ref": "#/components/schemas/InputType"
},
"advanced": {
"type": "boolean",
"nullable": true
},
"ui_hints": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"InputOutputPortMetadata": {
"type": "object",
"properties": {
"graphModuleNodeId": {
"type": "string",
"nullable": true
},
"portName": {
"type": "string",
"nullable": true
},
"schema": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true,
"readOnly": true
}
},
"additionalProperties": false
},
"InputSetting": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AEVADataStoreMode"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"options": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"additionalTransformations": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"InputType": {
"enum": [
"default",
"uionly_hidden"
],
"type": "string"
},
"IntellectualPropertyAccessMode": {
"enum": [
"ReadOnly",
"ReadWrite"
],
"type": "string"
},
"IntellectualPropertyPublisherInformation": {
"type": "object",
"properties": {
"intellectualPropertyPublisher": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"InteractiveConfig": {
"type": "object",
"properties": {
"isSSHEnabled": {
"type": "boolean",
"nullable": true
},
"sshPublicKey": {
"type": "string",
"nullable": true
},
"isIPythonEnabled": {
"type": "boolean",
"nullable": true
},
"isTensorBoardEnabled": {
"type": "boolean",
"nullable": true
},
"interactivePort": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"InteractiveConfiguration": {
"type": "object",
"properties": {
"isSSHEnabled": {
"type": "boolean",
"nullable": true
},
"sshPublicKey": {
"type": "string",
"nullable": true
},
"isIPythonEnabled": {
"type": "boolean",
"nullable": true
},
"isTensorBoardEnabled": {
"type": "boolean",
"nullable": true
},
"interactivePort": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"JobCost": {
"type": "object",
"properties": {
"chargedCpuCoreSeconds": {
"type": "number",
"format": "double",
"nullable": true
},
"chargedCpuMemoryMegabyteSeconds": {
"type": "number",
"format": "double",
"nullable": true
},
"chargedGpuSeconds": {
"type": "number",
"format": "double",
"nullable": true
},
"chargedNodeUtilizationSeconds": {
"type": "number",
"format": "double",
"nullable": true
}
}
},
"JobEndpoint": {
"type": "object",
"properties": {
"type": {
"type": "string",
"nullable": true
},
"port": {
"type": "integer",
"format": "int32",
"nullable": true
},
"endpoint": {
"type": "string",
"nullable": true
},
"status": {
"type": "string",
"nullable": true
},
"errorMessage": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"nodes": {
"$ref": "#/components/schemas/MfeInternalNodes"
}
},
"additionalProperties": false
},
"JobInput": {
"required": [
"jobInputType"
],
"type": "object",
"properties": {
"jobInputType": {
"$ref": "#/components/schemas/JobInputType"
},
"description": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"JobInputType": {
"enum": [
"Dataset",
"Uri",
"Literal",
"UriFile",
"UriFolder",
"MLTable",
"CustomModel",
"MLFlowModel",
"TritonModel"
],
"type": "string"
},
"JobLimitsType": {
"enum": [
"Command",
"Sweep"
],
"type": "string"
},
"JobOutput": {
"required": [
"jobOutputType"
],
"type": "object",
"properties": {
"jobOutputType": {
"$ref": "#/components/schemas/JobOutputType"
},
"description": {
"type": "string",
"nullable": true
},
"autoDeleteSetting": {
"$ref": "#/components/schemas/AutoDeleteSetting"
}
},
"additionalProperties": false
},
"JobOutputArtifacts": {
"type": "object",
"properties": {
"datastoreId": {
"type": "string",
"nullable": true
},
"path": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"JobOutputType": {
"enum": [
"Uri",
"Dataset",
"UriFile",
"UriFolder",
"MLTable",
"CustomModel",
"MLFlowModel",
"TritonModel"
],
"type": "string"
},
"JobProvisioningState": {
"enum": [
"Succeeded",
"Failed",
"Canceled",
"InProgress"
],
"type": "string"
},
"JobScheduleDto": {
"type": "object",
"properties": {
"jobType": {
"$ref": "#/components/schemas/JobType"
},
"systemData": {
"$ref": "#/components/schemas/SystemData"
},
"name": {
"type": "string",
"nullable": true
},
"jobDefinitionId": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"triggerType": {
"$ref": "#/components/schemas/TriggerType"
},
"recurrence": {
"$ref": "#/components/schemas/Recurrence"
},
"cron": {
"$ref": "#/components/schemas/Cron"
},
"status": {
"$ref": "#/components/schemas/ScheduleStatus"
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"JobStatus": {
"enum": [
"NotStarted",
"Starting",
"Provisioning",
"Preparing",
"Queued",
"Running",
"Finalizing",
"CancelRequested",
"Completed",
"Failed",
"Canceled",
"NotResponding",
"Paused",
"Unknown",
"Scheduled"
],
"type": "string"
},
"JobType": {
"enum": [
"Command",
"Sweep",
"Labeling",
"Pipeline",
"Data",
"AutoML",
"Spark",
"Base"
],
"type": "string"
},
"K8sConfiguration": {
"type": "object",
"properties": {
"maxRetryCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"resourceConfiguration": {
"$ref": "#/components/schemas/ResourceConfig"
},
"priorityConfiguration": {
"$ref": "#/components/schemas/PriorityConfig"
},
"interactiveConfiguration": {
"$ref": "#/components/schemas/InteractiveConfig"
}
},
"additionalProperties": false
},
"KeyType": {
"enum": [
"Primary",
"Secondary"
],
"type": "string"
},
"KeyValuePairComponentNameMetaInfoErrorResponse": {
"type": "object",
"properties": {
"key": {
"$ref": "#/components/schemas/ComponentNameMetaInfo"
},
"value": {
"$ref": "#/components/schemas/ErrorResponse"
}
},
"additionalProperties": false
},
"KeyValuePairComponentNameMetaInfoModuleDto": {
"type": "object",
"properties": {
"key": {
"$ref": "#/components/schemas/ComponentNameMetaInfo"
},
"value": {
"$ref": "#/components/schemas/ModuleDto"
}
},
"additionalProperties": false
},
"KeyValuePairStringObject": {
"type": "object",
"properties": {
"key": {
"type": "string",
"nullable": true
},
"value": {
"nullable": true
}
},
"additionalProperties": false
},
"KubernetesConfiguration": {
"type": "object",
"properties": {
"instanceType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Kwarg": {
"type": "object",
"properties": {
"key": {
"type": "string",
"nullable": true
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"LegacyDataPath": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AEVADataStoreMode"
},
"relativePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"LimitSettings": {
"type": "object",
"properties": {
"maxTrials": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeout": {
"type": "string",
"format": "date-span",
"nullable": true
},
"trialTimeout": {
"type": "string",
"format": "date-span",
"nullable": true
},
"maxConcurrentTrials": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxCoresPerTrial": {
"type": "integer",
"format": "int32",
"nullable": true
},
"exitScore": {
"type": "number",
"format": "double",
"nullable": true
},
"enableEarlyTermination": {
"type": "boolean",
"nullable": true
},
"maxNodes": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"LinkedADBWorkspaceMetadata": {
"type": "object",
"properties": {
"workspaceId": {
"type": "string",
"nullable": true
},
"region": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"LinkedPipelineInfo": {
"type": "object",
"properties": {
"pipelineType": {
"$ref": "#/components/schemas/PipelineType"
},
"moduleNodeId": {
"type": "string",
"nullable": true
},
"portName": {
"type": "string",
"nullable": true
},
"linkedPipelineDraftId": {
"type": "string",
"nullable": true
},
"linkedPipelineRunId": {
"type": "string",
"nullable": true
},
"isDirectLink": {
"type": "boolean"
}
},
"additionalProperties": false
},
"ListViewType": {
"enum": [
"ActiveOnly",
"ArchivedOnly",
"All"
],
"type": "string"
},
"LoadFlowAsComponentRequest": {
"type": "object",
"properties": {
"componentName": {
"type": "string",
"nullable": true
},
"componentVersion": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"isDeterministic": {
"type": "boolean"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"flowDefinitionResourceId": {
"type": "string",
"nullable": true
},
"flowDefinitionDataStoreName": {
"type": "string",
"nullable": true
},
"flowDefinitionBlobPath": {
"type": "string",
"nullable": true
},
"flowDefinitionDataUri": {
"type": "string",
"nullable": true
},
"nodeVariant": {
"type": "string",
"nullable": true
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"connections": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary"
},
"description": "This is a dictionary",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"sessionId": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
}
},
"additionalProperties": false
},
"LogLevel": {
"enum": [
"Trace",
"Debug",
"Information",
"Warning",
"Error",
"Critical",
"None"
],
"type": "string"
},
"LogRunTerminatedEventDto": {
"type": "object",
"properties": {
"nextActionIntervalInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"actionType": {
"$ref": "#/components/schemas/ActionType"
},
"lastCheckedTime": {
"type": "string",
"format": "date-time",
"nullable": true
}
},
"additionalProperties": false
},
"LogVerbosity": {
"enum": [
"NotSet",
"Debug",
"Info",
"Warning",
"Error",
"Critical"
],
"type": "string"
},
"LongRunningNullResponse": {
"type": "object",
"additionalProperties": false
},
"LongRunningOperationUriResponse": {
"type": "object",
"properties": {
"location": {
"type": "string",
"format": "uri",
"nullable": true
},
"operationResult": {
"type": "string",
"format": "uri",
"nullable": true
}
},
"additionalProperties": false
},
"LongRunningUpdateRegistryComponentRequest": {
"type": "object",
"properties": {
"displayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"registryName": {
"type": "string",
"nullable": true
},
"componentName": {
"type": "string",
"nullable": true
},
"componentVersion": {
"type": "string",
"nullable": true
},
"updateType": {
"$ref": "#/components/schemas/LongRunningUpdateType"
}
},
"additionalProperties": false
},
"LongRunningUpdateType": {
"enum": [
"EnableModule",
"DisableModule",
"UpdateDisplayName",
"UpdateDescription",
"UpdateTags"
],
"type": "string"
},
"MLFlowAutologgerState": {
"enum": [
"Enabled",
"Disabled"
],
"type": "string"
},
"ManagedServiceIdentity": {
"required": [
"type"
],
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/ManagedServiceIdentityType"
},
"principalId": {
"type": "string",
"format": "uuid"
},
"tenantId": {
"type": "string",
"format": "uuid"
},
"userAssignedIdentities": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/UserAssignedIdentity"
},
"nullable": true
}
},
"additionalProperties": false
},
"ManagedServiceIdentityType": {
"enum": [
"SystemAssigned",
"UserAssigned",
"SystemAssignedUserAssigned",
"None"
],
"type": "string"
},
"MavenLibraryDto": {
"type": "object",
"properties": {
"coordinates": {
"type": "string",
"nullable": true
},
"repo": {
"type": "string",
"nullable": true
},
"exclusions": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"MetricProperties": {
"type": "object",
"properties": {
"uxMetricType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"MetricSchemaDto": {
"type": "object",
"properties": {
"numProperties": {
"type": "integer",
"format": "int32"
},
"properties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/MetricSchemaPropertyDto"
},
"nullable": true
}
},
"additionalProperties": false
},
"MetricSchemaPropertyDto": {
"type": "object",
"properties": {
"propertyId": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"MetricV2Dto": {
"type": "object",
"properties": {
"dataContainerId": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"columns": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/MetricValueType"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"$ref": "#/components/schemas/MetricProperties"
},
"namespace": {
"type": "string",
"nullable": true
},
"standardSchemaId": {
"type": "string",
"format": "uuid",
"nullable": true
},
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/MetricV2Value"
},
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false
},
"MetricV2Value": {
"type": "object",
"properties": {
"metricId": {
"type": "string",
"nullable": true
},
"createdUtc": {
"type": "string",
"format": "date-time"
},
"step": {
"type": "integer",
"format": "int64",
"nullable": true
},
"data": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"sasUri": {
"type": "string",
"format": "uri",
"nullable": true
}
},
"additionalProperties": false
},
"MetricValueType": {
"enum": [
"Int",
"Double",
"String",
"Bool",
"Artifact",
"Histogram",
"Malformed"
],
"type": "string"
},
"MfeInternalAutologgerSettings": {
"type": "object",
"properties": {
"mlflowAutologger": {
"$ref": "#/components/schemas/MfeInternalMLFlowAutologgerState"
}
},
"additionalProperties": false
},
"MfeInternalIdentityConfiguration": {
"type": "object",
"properties": {
"identityType": {
"$ref": "#/components/schemas/MfeInternalIdentityType"
}
},
"additionalProperties": false
},
"MfeInternalIdentityType": {
"enum": [
"Managed",
"AMLToken",
"UserIdentity"
],
"type": "string"
},
"MfeInternalMLFlowAutologgerState": {
"enum": [
"Enabled",
"Disabled"
],
"type": "string"
},
"MfeInternalNodes": {
"type": "object",
"properties": {
"nodesValueType": {
"$ref": "#/components/schemas/MfeInternalNodesValueType"
}
},
"additionalProperties": false
},
"MfeInternalNodesValueType": {
"enum": [
"All"
],
"type": "string"
},
"MfeInternalOutputData": {
"type": "object",
"properties": {
"datasetName": {
"type": "string",
"nullable": true
},
"datastore": {
"type": "string",
"nullable": true
},
"datapath": {
"type": "string",
"nullable": true
},
"mode": {
"$ref": "#/components/schemas/DataBindingMode"
}
},
"additionalProperties": false
},
"MfeInternalPipelineType": {
"enum": [
"AzureML"
],
"type": "string"
},
"MfeInternalScheduleStatus": {
"enum": [
"Enabled",
"Disabled"
],
"type": "string"
},
"MfeInternalSecretConfiguration": {
"type": "object",
"properties": {
"workspaceSecretName": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"MfeInternalUriReference": {
"type": "object",
"properties": {
"file": {
"type": "string",
"nullable": true
},
"folder": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"MfeInternalV20211001ComponentJob": {
"type": "object",
"properties": {
"computeId": {
"type": "string",
"nullable": true
},
"componentId": {
"type": "string",
"nullable": true
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/JobInput"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/JobOutput"
},
"description": "This is a dictionary",
"nullable": true
},
"overrides": {
"nullable": true
}
},
"additionalProperties": false
},
"MinMaxParameterRule": {
"type": "object",
"properties": {
"min": {
"type": "number",
"format": "double",
"nullable": true
},
"max": {
"type": "number",
"format": "double",
"nullable": true
}
},
"additionalProperties": false
},
"MlcComputeInfo": {
"type": "object",
"properties": {
"mlcComputeType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ModelDto": {
"type": "object",
"properties": {
"feedName": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"amlDataStoreName": {
"type": "string",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"systemData": {
"$ref": "#/components/schemas/SystemData"
},
"armId": {
"type": "string",
"nullable": true
},
"onlineEndpointYamlStr": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ModelManagementErrorResponse": {
"type": "object",
"properties": {
"code": {
"type": "string",
"nullable": true
},
"statusCode": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
},
"details": {
"type": "array",
"items": {
"$ref": "#/components/schemas/InnerErrorDetails"
},
"nullable": true
},
"correlation": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"ModifyPipelineJobScheduleDto": {
"type": "object",
"properties": {
"pipelineJobName": {
"type": "string",
"nullable": true
},
"pipelineJobRuntimeSettings": {
"$ref": "#/components/schemas/PipelineJobRuntimeBasicSettings"
},
"displayName": {
"type": "string",
"nullable": true
},
"triggerType": {
"$ref": "#/components/schemas/TriggerType"
},
"recurrence": {
"$ref": "#/components/schemas/Recurrence"
},
"cron": {
"$ref": "#/components/schemas/Cron"
},
"status": {
"$ref": "#/components/schemas/ScheduleStatus"
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"ModuleDto": {
"type": "object",
"properties": {
"namespace": {
"type": "string",
"nullable": true
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"dictTags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"moduleVersionId": {
"type": "string",
"nullable": true
},
"feedName": {
"type": "string",
"nullable": true
},
"registryName": {
"type": "string",
"nullable": true
},
"moduleName": {
"type": "string",
"nullable": true
},
"moduleVersion": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"owner": {
"type": "string",
"nullable": true
},
"jobType": {
"type": "string",
"nullable": true
},
"defaultVersion": {
"type": "string",
"nullable": true
},
"familyId": {
"type": "string",
"nullable": true
},
"helpDocument": {
"type": "string",
"nullable": true
},
"codegenBy": {
"type": "string",
"nullable": true
},
"armId": {
"type": "string",
"nullable": true
},
"moduleScope": {
"$ref": "#/components/schemas/ModuleScope"
},
"moduleEntity": {
"$ref": "#/components/schemas/ModuleEntity"
},
"inputTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"outputTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
},
"yamlLink": {
"type": "string",
"nullable": true
},
"yamlLinkWithCommitSha": {
"type": "string",
"nullable": true
},
"moduleSourceType": {
"$ref": "#/components/schemas/ModuleSourceType"
},
"registeredBy": {
"type": "string",
"nullable": true
},
"versions": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AzureMLModuleVersionDescriptor"
},
"nullable": true
},
"isDefaultModuleVersion": {
"type": "boolean",
"nullable": true
},
"systemData": {
"$ref": "#/components/schemas/SystemData"
},
"systemMeta": {
"$ref": "#/components/schemas/SystemMeta"
},
"snapshotId": {
"type": "string",
"nullable": true
},
"entry": {
"type": "string",
"nullable": true
},
"osType": {
"type": "string",
"nullable": true
},
"requireGpu": {
"type": "boolean",
"nullable": true
},
"modulePythonInterface": {
"$ref": "#/components/schemas/ModulePythonInterface"
},
"environmentAssetId": {
"type": "string",
"nullable": true
},
"runSettingParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameter"
},
"nullable": true
},
"supportedUIInputDataDeliveryModes": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/UIInputDataDeliveryMode"
},
"nullable": true
},
"nullable": true
},
"outputSettingSpecs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/OutputSettingSpec"
},
"nullable": true
},
"yamlStr": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ModuleDtoFields": {
"enum": [
"Definition",
"YamlStr",
"RegistrationContext",
"RunSettingParameters",
"RunDefinition",
"All",
"Default",
"Basic",
"Minimal"
],
"type": "string"
},
"ModuleDtoWithErrors": {
"type": "object",
"properties": {
"versionIdToModuleDto": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ModuleDto"
},
"description": "This is a dictionary",
"nullable": true
},
"nameAndVersionToModuleDto": {
"type": "array",
"items": {
"$ref": "#/components/schemas/KeyValuePairComponentNameMetaInfoModuleDto"
},
"nullable": true
},
"versionIdToError": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ErrorResponse"
},
"description": "This is a dictionary",
"nullable": true
},
"nameAndVersionToError": {
"type": "array",
"items": {
"$ref": "#/components/schemas/KeyValuePairComponentNameMetaInfoErrorResponse"
},
"nullable": true
}
},
"additionalProperties": false
},
"ModuleDtoWithValidateStatus": {
"type": "object",
"properties": {
"existingModuleEntity": {
"$ref": "#/components/schemas/ModuleEntity"
},
"status": {
"$ref": "#/components/schemas/ModuleInfoFromYamlStatusEnum"
},
"statusDetails": {
"type": "string",
"nullable": true
},
"errorDetails": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"serializedModuleInfo": {
"type": "string",
"nullable": true
},
"namespace": {
"type": "string",
"nullable": true
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"dictTags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"moduleVersionId": {
"type": "string",
"nullable": true
},
"feedName": {
"type": "string",
"nullable": true
},
"registryName": {
"type": "string",
"nullable": true
},
"moduleName": {
"type": "string",
"nullable": true
},
"moduleVersion": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"owner": {
"type": "string",
"nullable": true
},
"jobType": {
"type": "string",
"nullable": true
},
"defaultVersion": {
"type": "string",
"nullable": true
},
"familyId": {
"type": "string",
"nullable": true
},
"helpDocument": {
"type": "string",
"nullable": true
},
"codegenBy": {
"type": "string",
"nullable": true
},
"armId": {
"type": "string",
"nullable": true
},
"moduleScope": {
"$ref": "#/components/schemas/ModuleScope"
},
"moduleEntity": {
"$ref": "#/components/schemas/ModuleEntity"
},
"inputTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"outputTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
},
"yamlLink": {
"type": "string",
"nullable": true
},
"yamlLinkWithCommitSha": {
"type": "string",
"nullable": true
},
"moduleSourceType": {
"$ref": "#/components/schemas/ModuleSourceType"
},
"registeredBy": {
"type": "string",
"nullable": true
},
"versions": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AzureMLModuleVersionDescriptor"
},
"nullable": true
},
"isDefaultModuleVersion": {
"type": "boolean",
"nullable": true
},
"systemData": {
"$ref": "#/components/schemas/SystemData"
},
"systemMeta": {
"$ref": "#/components/schemas/SystemMeta"
},
"snapshotId": {
"type": "string",
"nullable": true
},
"entry": {
"type": "string",
"nullable": true
},
"osType": {
"type": "string",
"nullable": true
},
"requireGpu": {
"type": "boolean",
"nullable": true
},
"modulePythonInterface": {
"$ref": "#/components/schemas/ModulePythonInterface"
},
"environmentAssetId": {
"type": "string",
"nullable": true
},
"runSettingParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameter"
},
"nullable": true
},
"supportedUIInputDataDeliveryModes": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/UIInputDataDeliveryMode"
},
"nullable": true
},
"nullable": true
},
"outputSettingSpecs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/OutputSettingSpec"
},
"nullable": true
},
"yamlStr": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ModuleEntity": {
"type": "object",
"properties": {
"displayName": {
"type": "string",
"nullable": true
},
"moduleExecutionType": {
"type": "string",
"nullable": true
},
"moduleType": {
"$ref": "#/components/schemas/ModuleType"
},
"moduleTypeVersion": {
"type": "string",
"nullable": true
},
"uploadState": {
"$ref": "#/components/schemas/UploadState"
},
"isDeterministic": {
"type": "boolean"
},
"structuredInterface": {
"$ref": "#/components/schemas/StructuredInterface"
},
"dataLocation": {
"$ref": "#/components/schemas/DataLocation"
},
"identifierHash": {
"type": "string",
"nullable": true
},
"identifierHashV2": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"createdBy": {
"$ref": "#/components/schemas/CreatedBy"
},
"lastUpdatedBy": {
"$ref": "#/components/schemas/CreatedBy"
},
"runconfig": {
"type": "string",
"nullable": true
},
"cloudSettings": {
"$ref": "#/components/schemas/CloudSettings"
},
"category": {
"type": "string",
"nullable": true
},
"stepType": {
"type": "string",
"nullable": true
},
"stage": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"hash": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"ModuleInfoFromYamlStatusEnum": {
"enum": [
"NewModule",
"NewVersion",
"Conflict",
"ParseError",
"ProcessRequestError"
],
"type": "string"
},
"ModulePythonInterface": {
"type": "object",
"properties": {
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PythonInterfaceMapping"
},
"nullable": true
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PythonInterfaceMapping"
},
"nullable": true
},
"parameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PythonInterfaceMapping"
},
"nullable": true
}
},
"additionalProperties": false
},
"ModuleRunSettingTypes": {
"enum": [
"Released",
"Testing",
"Legacy",
"Preview",
"Integration",
"All",
"Default",
"Full",
"UxIntegration",
"UxFull"
],
"type": "string"
},
"ModuleScope": {
"enum": [
"All",
"Global",
"Workspace",
"Anonymous",
"Step",
"Draft",
"Feed",
"Registry",
"SystemAutoCreated"
],
"type": "string"
},
"ModuleSourceType": {
"enum": [
"Unknown",
"Local",
"GithubFile",
"GithubFolder",
"DevopsArtifactsZip",
"SerializedModuleInfo"
],
"type": "string"
},
"ModuleType": {
"enum": [
"None",
"BatchInferencing"
],
"type": "string"
},
"ModuleUpdateOperationType": {
"enum": [
"SetDefaultVersion",
"EnableModule",
"DisableModule",
"UpdateDisplayName",
"UpdateDescription",
"UpdateTags"
],
"type": "string"
},
"ModuleWorkingMechanism": {
"enum": [
"Normal",
"OutputToDataset"
],
"type": "string"
},
"MpiConfiguration": {
"type": "object",
"properties": {
"processCountPerNode": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"NCrossValidationMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"NCrossValidations": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/NCrossValidationMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"Node": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/ToolType"
},
"source": {
"$ref": "#/components/schemas/NodeSource"
},
"inputs": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"tool": {
"type": "string",
"nullable": true
},
"reduce": {
"type": "boolean"
},
"activate": {
"$ref": "#/components/schemas/Activate"
},
"use_variants": {
"type": "boolean"
},
"comment": {
"type": "string",
"nullable": true
},
"api": {
"type": "string",
"nullable": true
},
"provider": {
"type": "string",
"nullable": true
},
"connection": {
"type": "string",
"nullable": true
},
"module": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"NodeCompositionMode": {
"enum": [
"None",
"OnlySequential",
"Full"
],
"type": "string"
},
"NodeInputPort": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"documentation": {
"type": "string",
"nullable": true
},
"dataTypesIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"isOptional": {
"type": "boolean"
}
},
"additionalProperties": false
},
"NodeLayout": {
"type": "object",
"properties": {
"x": {
"type": "number",
"format": "float"
},
"y": {
"type": "number",
"format": "float"
},
"width": {
"type": "number",
"format": "float"
},
"height": {
"type": "number",
"format": "float"
},
"extendedData": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"NodeOutputPort": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"documentation": {
"type": "string",
"nullable": true
},
"dataTypeId": {
"type": "string",
"nullable": true
},
"passThroughInputName": {
"type": "string",
"nullable": true
},
"EarlyAvailable": {
"type": "boolean"
},
"dataStoreMode": {
"$ref": "#/components/schemas/AEVADataStoreMode"
}
},
"additionalProperties": false
},
"NodePortInterface": {
"type": "object",
"properties": {
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/NodeInputPort"
},
"nullable": true
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/NodeOutputPort"
},
"nullable": true
},
"controlOutputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ControlOutput"
},
"nullable": true
}
},
"additionalProperties": false
},
"NodeSource": {
"type": "object",
"properties": {
"type": {
"type": "string",
"nullable": true
},
"tool": {
"type": "string",
"nullable": true
},
"path": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"NodeTelemetryMetaInfo": {
"type": "object",
"properties": {
"pipelineRunId": {
"type": "string",
"nullable": true
},
"nodeId": {
"type": "string",
"nullable": true
},
"versionId": {
"type": "string",
"nullable": true
},
"nodeType": {
"type": "string",
"nullable": true
},
"nodeSource": {
"type": "string",
"nullable": true
},
"isAnonymous": {
"type": "boolean"
},
"isPipelineComponent": {
"type": "boolean"
}
},
"additionalProperties": false
},
"NodeVariant": {
"type": "object",
"properties": {
"variants": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/VariantNode"
},
"description": "This is a dictionary",
"nullable": true
},
"defaultVariantId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Nodes": {
"required": [
"nodes_value_type"
],
"type": "object",
"properties": {
"nodes_value_type": {
"$ref": "#/components/schemas/NodesValueType"
},
"values": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
}
},
"additionalProperties": false
},
"NodesValueType": {
"enum": [
"All",
"Custom"
],
"type": "string"
},
"NoteBookTaskDto": {
"type": "object",
"properties": {
"notebook_path": {
"type": "string",
"nullable": true
},
"base_parameters": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"NotificationSetting": {
"type": "object",
"properties": {
"emails": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"emailOn": {
"type": "array",
"items": {
"$ref": "#/components/schemas/EmailNotificationEnableType"
},
"nullable": true
},
"webhooks": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/Webhook"
},
"nullable": true
}
},
"additionalProperties": false
},
"ODataError": {
"type": "object",
"properties": {
"code": {
"type": "string",
"description": "Gets or sets a language-independent, service-defined error code.\r\nThis code serves as a sub-status for the HTTP error code specified\r\nin the response.",
"nullable": true
},
"message": {
"type": "string",
"description": "Gets or sets a human-readable, language-dependent representation of the error.\r\nThe `Content-Language` header MUST contain the language code from [RFC5646]\r\ncorresponding to the language in which the value for message is written.",
"nullable": true
},
"target": {
"type": "string",
"description": "Gets or sets the target of the particular error\r\n(for example, the name of the property in error).",
"nullable": true
},
"details": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ODataErrorDetail"
},
"description": "Gets or sets additional details about the error.",
"nullable": true
},
"innererror": {
"$ref": "#/components/schemas/ODataInnerError"
}
},
"additionalProperties": false,
"description": "Represents OData v4 error object."
},
"ODataErrorDetail": {
"type": "object",
"properties": {
"code": {
"type": "string",
"description": "Gets or sets a language-independent, service-defined error code.",
"nullable": true
},
"message": {
"type": "string",
"description": "Gets or sets a human-readable, language-dependent representation of the error.",
"nullable": true
},
"target": {
"type": "string",
"description": "Gets or sets the target of the particular error\r\n(for example, the name of the property in error).",
"nullable": true
}
},
"additionalProperties": false,
"description": "Represents additional error details."
},
"ODataErrorResponse": {
"type": "object",
"properties": {
"error": {
"$ref": "#/components/schemas/ODataError"
}
},
"additionalProperties": false,
"description": "Represents OData v4 compliant error response message."
},
"ODataInnerError": {
"type": "object",
"properties": {
"clientRequestId": {
"type": "string",
"description": "Gets or sets the client provided request ID.",
"nullable": true
},
"serviceRequestId": {
"type": "string",
"description": "Gets or sets the server generated request ID.",
"nullable": true
},
"trace": {
"type": "string",
"description": "Gets or sets the exception stack trace.\r\nDO NOT INCLUDE IT IN PRODUCTION ENVIRONMENT.",
"nullable": true
},
"context": {
"type": "string",
"description": "Gets or sets additional context for the exception.\r\nDO NOT INCLUDE IT IN PRODUCTION ENVIRONMENT.",
"nullable": true
}
},
"additionalProperties": false,
"description": "The contents of this object are service-defined.\r\nUsually this object contains information that will help debug the service\r\nand SHOULD only be used in development environments in order to guard\r\nagainst potential security concerns around information disclosure."
},
"Orientation": {
"enum": [
"Horizontal",
"Vertical"
],
"type": "string"
},
"OutputData": {
"type": "object",
"properties": {
"outputLocation": {
"$ref": "#/components/schemas/ExecutionDataLocation"
},
"mechanism": {
"$ref": "#/components/schemas/OutputMechanism"
},
"additionalOptions": {
"$ref": "#/components/schemas/OutputOptions"
},
"environmentVariableName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"OutputDataBinding": {
"type": "object",
"properties": {
"datastoreId": {
"type": "string",
"nullable": true
},
"pathOnDatastore": {
"type": "string",
"nullable": true
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"uri": {
"$ref": "#/components/schemas/MfeInternalUriReference"
},
"mode": {
"$ref": "#/components/schemas/DataBindingMode"
},
"assetUri": {
"type": "string",
"nullable": true
},
"isAssetJobOutput": {
"type": "boolean",
"nullable": true
},
"jobOutputType": {
"$ref": "#/components/schemas/JobOutputType"
},
"assetName": {
"type": "string",
"nullable": true
},
"assetVersion": {
"type": "string",
"nullable": true
},
"autoDeleteSetting": {
"$ref": "#/components/schemas/AutoDeleteSetting"
}
},
"additionalProperties": false
},
"OutputDatasetLineage": {
"type": "object",
"properties": {
"identifier": {
"$ref": "#/components/schemas/DatasetIdentifier"
},
"outputType": {
"$ref": "#/components/schemas/DatasetOutputType"
},
"outputDetails": {
"$ref": "#/components/schemas/DatasetOutputDetails"
}
},
"additionalProperties": false
},
"OutputDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ValueType"
},
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"isProperty": {
"type": "boolean"
}
},
"additionalProperties": false
},
"OutputMechanism": {
"enum": [
"Upload",
"Mount",
"Hdfs",
"Link",
"Direct"
],
"type": "string"
},
"OutputOptions": {
"type": "object",
"properties": {
"pathOnCompute": {
"type": "string",
"nullable": true
},
"registrationOptions": {
"$ref": "#/components/schemas/RegistrationOptions"
},
"uploadOptions": {
"$ref": "#/components/schemas/UploadOptions"
},
"mountOptions": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"OutputSetting": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"DataStoreNameParameterAssignment": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"dataStoreMode": {
"$ref": "#/components/schemas/AEVADataStoreMode"
},
"DataStoreModeParameterAssignment": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"PathOnComputeParameterAssignment": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"overwrite": {
"type": "boolean"
},
"dataReferenceName": {
"type": "string",
"nullable": true
},
"webServicePort": {
"type": "string",
"nullable": true
},
"datasetRegistration": {
"$ref": "#/components/schemas/DatasetRegistration"
},
"datasetOutputOptions": {
"$ref": "#/components/schemas/DatasetOutputOptions"
},
"AssetOutputSettings": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"parameterName": {
"type": "string",
"nullable": true
},
"AssetOutputSettingsParameterName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"OutputSettingSpec": {
"type": "object",
"properties": {
"supportedDataStoreModes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/AEVADataStoreMode"
},
"nullable": true
},
"defaultAssetOutputPath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"PaginatedDataInfoList": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DataInfo"
},
"description": "An array of objects of type DataInfo.",
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false,
"description": "A paginated list of DataInfos."
},
"PaginatedModelDtoList": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ModelDto"
},
"description": "An array of objects of type ModelDto.",
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false,
"description": "A paginated list of ModelDtos."
},
"PaginatedModuleDtoList": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ModuleDto"
},
"description": "An array of objects of type ModuleDto.",
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false,
"description": "A paginated list of ModuleDtos."
},
"PaginatedPipelineDraftSummaryList": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PipelineDraftSummary"
},
"description": "An array of objects of type PipelineDraftSummary.",
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false,
        "description": "A paginated list of PipelineDraftSummaries."
},
"PaginatedPipelineEndpointSummaryList": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PipelineEndpointSummary"
},
"description": "An array of objects of type PipelineEndpointSummary.",
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false,
        "description": "A paginated list of PipelineEndpointSummaries."
},
"PaginatedPipelineRunSummaryList": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PipelineRunSummary"
},
"description": "An array of objects of type PipelineRunSummary.",
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false,
        "description": "A paginated list of PipelineRunSummaries."
},
"PaginatedPublishedPipelineSummaryList": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PublishedPipelineSummary"
},
"description": "An array of objects of type PublishedPipelineSummary.",
"nullable": true
},
"continuationToken": {
"type": "string",
"description": "The token used in retrieving the next page. If null, there are no additional pages.",
"nullable": true
},
"nextLink": {
"type": "string",
"description": "The link to the next page constructed using the continuationToken. If null, there are no additional pages.",
"nullable": true
}
},
"additionalProperties": false,
        "description": "A paginated list of PublishedPipelineSummaries."
},
"ParallelForControlFlowInfo": {
"type": "object",
"properties": {
"parallelForItemsInput": {
"$ref": "#/components/schemas/ParameterAssignment"
}
},
"additionalProperties": false
},
"ParallelTaskConfiguration": {
"type": "object",
"properties": {
"maxRetriesPerWorker": {
"type": "integer",
"format": "int32"
},
"workerCountPerNode": {
"type": "integer",
"format": "int32"
},
"terminalExitCodes": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"configuration": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"Parameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"documentation": {
"type": "string",
"nullable": true
},
"defaultValue": {
"type": "string",
"nullable": true
},
"isOptional": {
"type": "boolean"
},
"minMaxRules": {
"type": "array",
"items": {
"$ref": "#/components/schemas/MinMaxParameterRule"
},
"nullable": true
},
"enumRules": {
"type": "array",
"items": {
"$ref": "#/components/schemas/EnumParameterRule"
},
"nullable": true
},
"type": {
"$ref": "#/components/schemas/ParameterType"
},
"label": {
"type": "string",
"nullable": true
},
"groupNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"argumentName": {
"type": "string",
"nullable": true
},
"uiHint": {
"$ref": "#/components/schemas/UIParameterHint"
}
},
"additionalProperties": false
},
"ParameterAssignment": {
"type": "object",
"properties": {
"valueType": {
"$ref": "#/components/schemas/ParameterValueType"
},
"assignmentsToConcatenate": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"dataPathAssignment": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"dataSetDefinitionValueAssignment": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"name": {
"type": "string",
"nullable": true
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ParameterDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
},
"value": {
"type": "string",
"nullable": true
},
"isOptional": {
"type": "boolean"
}
},
"additionalProperties": false
},
"ParameterType": {
"enum": [
"Int",
"Double",
"Bool",
"String",
"Undefined"
],
"type": "string"
},
"ParameterValueType": {
"enum": [
"Literal",
"GraphParameterName",
"Concatenate",
"Input",
"DataPath",
"DataSetDefinition"
],
"type": "string"
},
"PatchFlowRequest": {
"type": "object",
"properties": {
"flowPatchOperationType": {
"$ref": "#/components/schemas/FlowPatchOperationType"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Pipeline": {
"type": "object",
"properties": {
"runId": {
"type": "string",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean"
},
"defaultDatastoreName": {
"type": "string",
"nullable": true
},
"componentJobs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ComponentJob"
},
"description": "This is a dictionary",
"nullable": true
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineInput"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineOutput"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineDraft": {
"type": "object",
"properties": {
"graphDraftId": {
"type": "string",
"nullable": true
},
"sourcePipelineRunId": {
"type": "string",
"nullable": true
},
"latestPipelineRunId": {
"type": "string",
"nullable": true
},
"latestRunExperimentName": {
"type": "string",
"nullable": true
},
"latestRunExperimentId": {
"type": "string",
"nullable": true
},
"isLatestRunExperimentArchived": {
"type": "boolean",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/PipelineStatus"
},
"graphDetail": {
"$ref": "#/components/schemas/PipelineRunGraphDetail"
},
"realTimeEndpointInfo": {
"$ref": "#/components/schemas/RealTimeEndpointInfo"
},
"linkedPipelinesInfo": {
"type": "array",
"items": {
"$ref": "#/components/schemas/LinkedPipelineInfo"
},
"nullable": true
},
"nodesInDraft": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"studioMigrationInfo": {
"$ref": "#/components/schemas/StudioMigrationInfo"
},
"flattenedSubGraphs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineSubDraft"
},
"nullable": true
},
"pipelineRunSettingParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameter"
},
"nullable": true
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean"
},
"continueRunOnFailedOptionalInput": {
"type": "boolean"
},
"defaultCompute": {
"$ref": "#/components/schemas/ComputeSetting"
},
"defaultDatastore": {
"$ref": "#/components/schemas/DatastoreSetting"
},
"defaultCloudPriority": {
"$ref": "#/components/schemas/CloudPrioritySetting"
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"pipelineParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"pipelineTimeout": {
"type": "integer",
"format": "int32"
},
"identityConfig": {
"$ref": "#/components/schemas/IdentitySetting"
},
"graphComponentsMode": {
"$ref": "#/components/schemas/GraphComponentsMode"
},
"name": {
"type": "string",
"nullable": true
},
"lastEditedBy": {
"type": "string",
"nullable": true
},
"createdBy": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"pipelineType": {
"$ref": "#/components/schemas/PipelineType"
},
"pipelineDraftMode": {
"$ref": "#/components/schemas/PipelineDraftMode"
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineDraftMode": {
"enum": [
"None",
"Normal",
"Custom"
],
"type": "string"
},
"PipelineDraftStepDetails": {
"type": "object",
"properties": {
"runId": {
"type": "string",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/RunStatus"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"isReused": {
"type": "boolean",
"nullable": true
},
"reusedRunId": {
"type": "string",
"nullable": true
},
"reusedPipelineRunId": {
"type": "string",
"nullable": true
},
"logs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputLog": {
"type": "string",
"nullable": true
},
"runConfiguration": {
"$ref": "#/components/schemas/RunConfiguration"
},
"outputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"portOutputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PortOutputInfo"
},
"description": "This is a dictionary",
"nullable": true
},
"isExperimentArchived": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineDraftSummary": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"lastEditedBy": {
"type": "string",
"nullable": true
},
"createdBy": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"pipelineType": {
"$ref": "#/components/schemas/PipelineType"
},
"pipelineDraftMode": {
"$ref": "#/components/schemas/PipelineDraftMode"
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineEndpoint": {
"type": "object",
"properties": {
"defaultVersion": {
"type": "string",
"nullable": true
},
"defaultPipelineId": {
"type": "string",
"nullable": true
},
"defaultGraphId": {
"type": "string",
"nullable": true
},
"restEndpoint": {
"type": "string",
"nullable": true
},
"publishedDate": {
"type": "string",
"format": "date-time"
},
"publishedBy": {
"type": "string",
"nullable": true
},
"parameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignment": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"defaultPipelineName": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"updatedBy": {
"type": "string",
"nullable": true
},
"swaggerUrl": {
"type": "string",
"nullable": true
},
"lastRunTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastRunStatus": {
"$ref": "#/components/schemas/PipelineRunStatusCode"
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineEndpointSummary": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"updatedBy": {
"type": "string",
"nullable": true
},
"swaggerUrl": {
"type": "string",
"nullable": true
},
"lastRunTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastRunStatus": {
"$ref": "#/components/schemas/PipelineRunStatusCode"
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineGraph": {
"type": "object",
"properties": {
"graphModuleDtos": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ModuleDto"
},
"nullable": true
},
"graphDataSources": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DataInfo"
},
"nullable": true
},
"graphs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineGraph"
},
"description": "This is a dictionary",
"nullable": true
},
"graphDrafts": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineGraph"
},
"description": "This is a dictionary",
"nullable": true
},
"moduleNodeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeRunSetting"
},
"nullable": true
},
"moduleNodeUIInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeUIInputSetting"
},
"nullable": true
},
"subPipelinesInfo": {
"$ref": "#/components/schemas/SubPipelinesInfo"
},
"referencedNodeId": {
"type": "string",
"nullable": true
},
"pipelineRunSettingParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameter"
},
"nullable": true
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"realTimeEndpointInfo": {
"$ref": "#/components/schemas/RealTimeEndpointInfo"
},
"nodeTelemetryMetaInfos": {
"type": "array",
"items": {
"$ref": "#/components/schemas/NodeTelemetryMetaInfo"
},
"nullable": true
},
"graphComponentsMode": {
"$ref": "#/components/schemas/GraphComponentsMode"
},
"moduleNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNode"
},
"nullable": true
},
"datasetNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphDatasetNode"
},
"nullable": true
},
"subGraphNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphReferenceNode"
},
"nullable": true
},
"controlReferenceNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphControlReferenceNode"
},
"nullable": true
},
"controlNodes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphControlNode"
},
"nullable": true
},
"edges": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphEdge"
},
"nullable": true
},
"entityInterface": {
"$ref": "#/components/schemas/EntityInterface"
},
"graphLayout": {
"$ref": "#/components/schemas/GraphLayout"
},
"createdBy": {
"$ref": "#/components/schemas/CreatedBy"
},
"lastUpdatedBy": {
"$ref": "#/components/schemas/CreatedBy"
},
"defaultCompute": {
"$ref": "#/components/schemas/ComputeSetting"
},
"defaultDatastore": {
"$ref": "#/components/schemas/DatastoreSetting"
},
"defaultCloudPriority": {
"$ref": "#/components/schemas/CloudPrioritySetting"
},
"extendedProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"parentSubGraphModuleIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineInput": {
"type": "object",
"properties": {
"data": {
"$ref": "#/components/schemas/InputData"
}
},
"additionalProperties": false
},
"PipelineJob": {
"type": "object",
"properties": {
"jobType": {
"$ref": "#/components/schemas/JobType"
},
"pipelineJobType": {
"$ref": "#/components/schemas/MfeInternalPipelineType"
},
"pipeline": {
"$ref": "#/components/schemas/Pipeline"
},
"computeId": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"settings": {
"nullable": true
},
"componentJobs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/MfeInternalV20211001ComponentJob"
},
"description": "This is a dictionary",
"nullable": true
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/JobInput"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/JobOutput"
},
"description": "This is a dictionary",
"nullable": true
},
"bindings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Binding"
},
"nullable": true
},
"jobs": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
},
"inputBindings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/InputDataBinding"
},
"description": "This is a dictionary",
"nullable": true
},
"outputBindings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/OutputDataBinding"
},
"description": "This is a dictionary",
"nullable": true
},
"sourceJobId": {
"type": "string",
"nullable": true
},
"provisioningState": {
"$ref": "#/components/schemas/JobProvisioningState"
},
"parentJobName": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/JobStatus"
},
"interactionEndpoints": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/JobEndpoint"
},
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/MfeInternalIdentityConfiguration"
},
"compute": {
"$ref": "#/components/schemas/ComputeConfiguration"
},
"priority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"output": {
"$ref": "#/components/schemas/JobOutputArtifacts"
},
"isArchived": {
"type": "boolean"
},
"schedule": {
"$ref": "#/components/schemas/ScheduleBase"
},
"componentId": {
"type": "string",
"nullable": true
},
"notificationSetting": {
"$ref": "#/components/schemas/NotificationSetting"
},
"secretsConfiguration": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/MfeInternalSecretConfiguration"
},
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineJobRuntimeBasicSettings": {
"type": "object",
"properties": {
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"pipelineJobName": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"triggerTimeString": {
"type": "string",
"nullable": true
},
"pipelineParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineJobScheduleDto": {
"type": "object",
"properties": {
"systemData": {
"$ref": "#/components/schemas/SystemData"
},
"name": {
"type": "string",
"nullable": true
},
"pipelineJobName": {
"type": "string",
"nullable": true
},
"pipelineJobRuntimeSettings": {
"$ref": "#/components/schemas/PipelineJobRuntimeBasicSettings"
},
"displayName": {
"type": "string",
"nullable": true
},
"triggerType": {
"$ref": "#/components/schemas/TriggerType"
},
"recurrence": {
"$ref": "#/components/schemas/Recurrence"
},
"cron": {
"$ref": "#/components/schemas/Cron"
},
"status": {
"$ref": "#/components/schemas/ScheduleStatus"
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineOutput": {
"type": "object",
"properties": {
"data": {
"$ref": "#/components/schemas/MfeInternalOutputData"
}
},
"additionalProperties": false
},
"PipelineRun": {
"type": "object",
"properties": {
"pipelineId": {
"type": "string",
"nullable": true
},
"runSource": {
"type": "string",
"nullable": true
},
"runType": {
"$ref": "#/components/schemas/RunType"
},
"parameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignment": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"totalSteps": {
"type": "integer",
"format": "int32"
},
"logs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"userAlias": {
"type": "string",
"nullable": true
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"continueRunOnFailedOptionalInput": {
"type": "boolean"
},
"defaultCompute": {
"$ref": "#/components/schemas/ComputeSetting"
},
"defaultDatastore": {
"$ref": "#/components/schemas/DatastoreSetting"
},
"defaultCloudPriority": {
"$ref": "#/components/schemas/CloudPrioritySetting"
},
"pipelineTimeoutSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean"
},
"identityConfig": {
"$ref": "#/components/schemas/IdentitySetting"
},
"description": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"runNumber": {
"type": "integer",
"format": "int32",
"nullable": true
},
"statusCode": {
"$ref": "#/components/schemas/PipelineStatusCode"
},
"runStatus": {
"$ref": "#/components/schemas/RunStatus"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"graphId": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"isExperimentArchived": {
"type": "boolean",
"nullable": true
},
"submittedBy": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"stepTags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"aetherStartTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"aetherEndTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runHistoryStartTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runHistoryEndTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"uniqueChildRunComputeTargets": {
"uniqueItems": true,
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineRunGraphDetail": {
"type": "object",
"properties": {
"graph": {
"$ref": "#/components/schemas/PipelineGraph"
},
"graphNodesStatus": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/GraphNodeStatusInfo"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineRunGraphStatus": {
"type": "object",
"properties": {
"status": {
"$ref": "#/components/schemas/PipelineStatus"
},
"graphNodesStatus": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/GraphNodeStatusInfo"
},
"description": "This is a dictionary",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"isExperimentArchived": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineRunProfile": {
"type": "object",
"properties": {
"runId": {
"type": "string",
"nullable": true
},
"nodeId": {
"type": "string",
"nullable": true
},
"runUrl": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/PipelineRunStatus"
},
"createTime": {
"type": "integer",
"format": "int64",
"nullable": true
},
"startTime": {
"type": "integer",
"format": "int64",
"nullable": true
},
"endTime": {
"type": "integer",
"format": "int64",
"nullable": true
},
"profilingTime": {
"type": "integer",
"format": "int64",
"nullable": true
},
"stepRunsProfile": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StepRunProfile"
},
"nullable": true
},
"subPipelineRunProfile": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PipelineRunProfile"
},
"nullable": true
}
},
"additionalProperties": false
},
"PipelineRunStatus": {
"type": "object",
"properties": {
"statusCode": {
"$ref": "#/components/schemas/PipelineRunStatusCode"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"creationTime": {
"type": "string",
"format": "date-time"
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineRunStatusCode": {
"enum": [
"NotStarted",
"Running",
"Failed",
"Finished",
"Canceled",
"Queued",
"CancelRequested"
],
"type": "string"
},
"PipelineRunStepDetails": {
"type": "object",
"properties": {
"runId": {
"type": "string",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/RunStatus"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"isReused": {
"type": "boolean",
"nullable": true
},
"logs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"snapshotInfo": {
"$ref": "#/components/schemas/SnapshotInfo"
},
"inputDatasets": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/DatasetLineage"
},
"nullable": true
},
"outputDatasets": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/OutputDatasetLineage"
},
"nullable": true
}
},
"additionalProperties": false
},
"PipelineRunSummary": {
"type": "object",
"properties": {
"description": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"runNumber": {
"type": "integer",
"format": "int32",
"nullable": true
},
"statusCode": {
"$ref": "#/components/schemas/PipelineStatusCode"
},
"runStatus": {
"$ref": "#/components/schemas/RunStatus"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"graphId": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"isExperimentArchived": {
"type": "boolean",
"nullable": true
},
"submittedBy": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"stepTags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"aetherStartTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"aetherEndTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runHistoryStartTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runHistoryEndTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"uniqueChildRunComputeTargets": {
"uniqueItems": true,
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineStatus": {
"type": "object",
"properties": {
"statusCode": {
"$ref": "#/components/schemas/PipelineStatusCode"
},
"runStatus": {
"$ref": "#/components/schemas/RunStatus"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"isTerminalState": {
"type": "boolean",
"readOnly": true
}
},
"additionalProperties": false
},
"PipelineStatusCode": {
"enum": [
"NotStarted",
"InDraft",
"Preparing",
"Running",
"Failed",
"Finished",
"Canceled",
"Throttled",
"Unknown"
],
"type": "string"
},
"PipelineStepRun": {
"type": "object",
"properties": {
"stepName": {
"type": "string",
"nullable": true
},
"runNumber": {
"type": "integer",
"format": "int32",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runStatus": {
"$ref": "#/components/schemas/RunStatus"
},
"computeTarget": {
"type": "string",
"nullable": true
},
"computeType": {
"type": "string",
"nullable": true
},
"runType": {
"type": "string",
"nullable": true
},
"stepType": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"isReused": {
"type": "boolean",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineStepRunOutputs": {
"type": "object",
"properties": {
"outputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"portOutputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PortOutputInfo"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"PipelineSubDraft": {
"type": "object",
"properties": {
"parentGraphDraftId": {
"type": "string",
"nullable": true
},
"parentNodeId": {
"type": "string",
"nullable": true
},
"graphDetail": {
"$ref": "#/components/schemas/PipelineRunGraphDetail"
},
"moduleDto": {
"$ref": "#/components/schemas/ModuleDto"
},
"name": {
"type": "string",
"nullable": true
},
"lastEditedBy": {
"type": "string",
"nullable": true
},
"createdBy": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"pipelineType": {
"$ref": "#/components/schemas/PipelineType"
},
"pipelineDraftMode": {
"$ref": "#/components/schemas/PipelineDraftMode"
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PipelineType": {
"enum": [
"TrainingPipeline",
"RealTimeInferencePipeline",
"BatchInferencePipeline",
"Unknown"
],
"type": "string"
},
"PolicyValidationResponse": {
"type": "object",
"properties": {
"errorResponse": {
"$ref": "#/components/schemas/ErrorResponse"
},
"nextActionIntervalInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"actionType": {
"$ref": "#/components/schemas/ActionType"
}
},
"additionalProperties": false
},
"PortAction": {
"enum": [
"Promote",
"ViewInDataStore",
"Visualize",
"GetSchema",
"CreateInferenceGraph",
"RegisterModel",
"PromoteAsTabular"
],
"type": "string"
},
"PortInfo": {
"type": "object",
"properties": {
"nodeId": {
"type": "string",
"nullable": true
},
"portName": {
"type": "string",
"nullable": true
},
"graphPortName": {
"type": "string",
"nullable": true
},
"isParameter": {
"type": "boolean"
},
"webServicePort": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"PortOutputInfo": {
"type": "object",
"properties": {
"containerUri": {
"type": "string",
"format": "uri",
"nullable": true
},
"relativePath": {
"type": "string",
"nullable": true
},
"previewParams": {
"type": "string",
"nullable": true
},
"modelOutputPath": {
"type": "string",
"nullable": true
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataReferenceType": {
"$ref": "#/components/schemas/DataReferenceType"
},
"isFile": {
"type": "boolean"
},
"supportedActions": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PortAction"
},
"nullable": true
}
},
"additionalProperties": false
},
"PrimaryMetrics": {
"enum": [
"AUCWeighted",
"Accuracy",
"NormMacroRecall",
"AveragePrecisionScoreWeighted",
"PrecisionScoreWeighted",
"SpearmanCorrelation",
"NormalizedRootMeanSquaredError",
"R2Score",
"NormalizedMeanAbsoluteError",
"NormalizedRootMeanSquaredLogError",
"MeanAveragePrecision",
"Iou"
],
"type": "string"
},
"PriorityConfig": {
"type": "object",
"properties": {
"jobPriority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"isPreemptible": {
"type": "boolean",
"nullable": true
},
"nodeCountSet": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"scaleInterval": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"PriorityConfiguration": {
"type": "object",
"properties": {
"cloudPriority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"stringTypePriority": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"PromoteDataSetRequest": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"moduleNodeId": {
"type": "string",
"nullable": true
},
"stepRunId": {
"type": "string",
"nullable": true
},
"outputPortName": {
"type": "string",
"nullable": true
},
"modelOutputPath": {
"type": "string",
"nullable": true
},
"dataTypeId": {
"type": "string",
"nullable": true
},
"datasetType": {
"type": "string",
"nullable": true
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"outputRelativePath": {
"type": "string",
"nullable": true
},
"pipelineRunId": {
"type": "string",
"nullable": true
},
"rootPipelineRunId": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"PromptflowEngineType": {
"enum": [
"FastEngine",
"ScalableEngine"
],
"type": "string"
},
"ProviderEntity": {
"type": "object",
"properties": {
"provider": {
"type": "string",
"nullable": true
},
"module": {
"type": "string",
"nullable": true
},
"connection_type": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionType"
},
"nullable": true
},
"apis": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ApiAndParameters"
},
"nullable": true
}
},
"additionalProperties": false
},
"ProvisioningState": {
"enum": [
"Unknown",
"Updating",
"Creating",
"Deleting",
"Accepted",
"Succeeded",
"Failed",
"Canceled"
],
"type": "string"
},
"PublishedPipeline": {
"type": "object",
"properties": {
"totalRunSteps": {
"type": "integer",
"format": "int32"
},
"totalRuns": {
"type": "integer",
"format": "int32"
},
"parameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignment": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"restEndpoint": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"graphId": {
"type": "string",
"nullable": true
},
"publishedDate": {
"type": "string",
"format": "date-time"
},
"lastRunTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastRunStatus": {
"$ref": "#/components/schemas/PipelineRunStatusCode"
},
"publishedBy": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"isDefault": {
"type": "boolean",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PublishedPipelineSummary": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"graphId": {
"type": "string",
"nullable": true
},
"publishedDate": {
"type": "string",
"format": "date-time"
},
"lastRunTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastRunStatus": {
"$ref": "#/components/schemas/PipelineRunStatusCode"
},
"publishedBy": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"isDefault": {
"type": "boolean",
"nullable": true
},
"entityStatus": {
"$ref": "#/components/schemas/EntityStatus"
},
"id": {
"type": "string",
"nullable": true
},
"etag": {
"type": "string",
"nullable": true
},
"createdDate": {
"type": "string",
"format": "date-time"
},
"lastModifiedDate": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"PyTorchConfiguration": {
"type": "object",
"properties": {
"communicationBackend": {
"type": "string",
"nullable": true
},
"processCount": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"PythonInterfaceMapping": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"nameInYaml": {
"type": "string",
"nullable": true
},
"argumentName": {
"type": "string",
"nullable": true
},
"commandLineOption": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"PythonPyPiOrRCranLibraryDto": {
"type": "object",
"properties": {
"package": {
"type": "string",
"nullable": true
},
"repo": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"PythonSection": {
"type": "object",
"properties": {
"interpreterPath": {
"type": "string",
"nullable": true
},
"userManagedDependencies": {
"type": "boolean"
},
"condaDependencies": {
"nullable": true
},
"baseCondaEnvironment": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"QueueingInfo": {
"type": "object",
"properties": {
"code": {
"type": "string",
"nullable": true
},
"message": {
"type": "string",
"nullable": true
},
"lastRefreshTimestamp": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"RCranPackage": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"repository": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RGitHubPackage": {
"type": "object",
"properties": {
"repository": {
"type": "string",
"nullable": true
},
"authToken": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RSection": {
"type": "object",
"properties": {
"rVersion": {
"type": "string",
"nullable": true
},
"userManaged": {
"type": "boolean"
},
"rscriptPath": {
"type": "string",
"nullable": true
},
"snapshotDate": {
"type": "string",
"nullable": true
},
"cranPackages": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RCranPackage"
},
"nullable": true
},
"gitHubPackages": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RGitHubPackage"
},
"nullable": true
},
"customUrlPackages": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"bioConductorPackages": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"RawComponentDto": {
"type": "object",
"properties": {
"componentSchema": {
"type": "string",
"nullable": true
},
"isAnonymous": {
"type": "boolean"
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/ComponentType"
},
"componentTypeVersion": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"isDeterministic": {
"type": "boolean"
},
"successfulReturnCode": {
"type": "string",
"nullable": true
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ComponentInput"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ComponentOutput"
},
"description": "This is a dictionary",
"nullable": true
},
"command": {
"type": "string",
"nullable": true
},
"environmentName": {
"type": "string",
"nullable": true
},
"environmentVersion": {
"type": "string",
"nullable": true
},
"snapshotId": {
"type": "string",
"nullable": true
},
"createdBy": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"lastModifiedBy": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"createdDate": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastModifiedDate": {
"type": "string",
"format": "date-time",
"nullable": true
},
"componentInternalId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RayConfiguration": {
"type": "object",
"properties": {
"port": {
"type": "integer",
"format": "int32",
"nullable": true
},
"address": {
"type": "string",
"nullable": true
},
"includeDashboard": {
"type": "boolean",
"nullable": true
},
"dashboardPort": {
"type": "integer",
"format": "int32",
"nullable": true
},
"headNodeAdditionalArgs": {
"type": "string",
"nullable": true
},
"workerNodeAdditionalArgs": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RealTimeEndpoint": {
"type": "object",
"properties": {
"createdBy": {
"type": "string",
"nullable": true
},
"kvTags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"state": {
"$ref": "#/components/schemas/WebServiceState"
},
"error": {
"$ref": "#/components/schemas/ModelManagementErrorResponse"
},
"computeType": {
"$ref": "#/components/schemas/ComputeEnvironmentType"
},
"imageId": {
"type": "string",
"nullable": true
},
"cpu": {
"type": "number",
"format": "double",
"nullable": true
},
"memoryInGB": {
"type": "number",
"format": "double",
"nullable": true
},
"maxConcurrentRequestsPerContainer": {
"type": "integer",
"format": "int32",
"nullable": true
},
"numReplicas": {
"type": "integer",
"format": "int32",
"nullable": true
},
"eventHubEnabled": {
"type": "boolean",
"nullable": true
},
"storageEnabled": {
"type": "boolean",
"nullable": true
},
"appInsightsEnabled": {
"type": "boolean",
"nullable": true
},
"autoScaleEnabled": {
"type": "boolean",
"nullable": true
},
"minReplicas": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxReplicas": {
"type": "integer",
"format": "int32",
"nullable": true
},
"targetUtilization": {
"type": "integer",
"format": "int32",
"nullable": true
},
"refreshPeriodInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"scoringUri": {
"type": "string",
"format": "uri",
"nullable": true
},
"deploymentStatus": {
"$ref": "#/components/schemas/AKSReplicaStatus"
},
"scoringTimeoutMs": {
"type": "integer",
"format": "int32",
"nullable": true
},
"authEnabled": {
"type": "boolean",
"nullable": true
},
"aadAuthEnabled": {
"type": "boolean",
"nullable": true
},
"region": {
"type": "string",
"nullable": true
},
"primaryKey": {
"type": "string",
"nullable": true
},
"secondaryKey": {
"type": "string",
"nullable": true
},
"swaggerUri": {
"type": "string",
"format": "uri",
"nullable": true
},
"linkedPipelineDraftId": {
"type": "string",
"nullable": true
},
"linkedPipelineRunId": {
"type": "string",
"nullable": true
},
"warning": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"createdTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"updatedTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"computeName": {
"type": "string",
"nullable": true
},
"updatedBy": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RealTimeEndpointInfo": {
"type": "object",
"properties": {
"webServiceInputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/WebServicePort"
},
"nullable": true
},
"webServiceOutputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/WebServicePort"
},
"nullable": true
},
"deploymentsInfo": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DeploymentInfo"
},
"nullable": true
}
},
"additionalProperties": false
},
"RealTimeEndpointInternalStepCode": {
"enum": [
"AboutToDeploy",
"WaitAksComputeReady",
"RegisterModels",
"CreateServiceFromModels",
"UpdateServiceFromModels",
"WaitServiceCreating",
"FetchServiceRelatedInfo",
"TestWithSampleData",
"AboutToDelete",
"DeleteDeployment",
"DeleteAsset",
"DeleteImage",
"DeleteModel",
"DeleteServiceRecord"
],
"type": "string"
},
"RealTimeEndpointOpCode": {
"enum": [
"Create",
"Update",
"Delete"
],
"type": "string"
},
"RealTimeEndpointOpStatusCode": {
"enum": [
"Ongoing",
"Succeeded",
"Failed",
"SucceededWithWarning"
],
"type": "string"
},
"RealTimeEndpointStatus": {
"type": "object",
"properties": {
"lastOperation": {
"$ref": "#/components/schemas/RealTimeEndpointOpCode"
},
"lastOperationStatus": {
"$ref": "#/components/schemas/RealTimeEndpointOpStatusCode"
},
"internalStep": {
"$ref": "#/components/schemas/RealTimeEndpointInternalStepCode"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"deploymentState": {
"type": "string",
"nullable": true
},
"serviceId": {
"type": "string",
"nullable": true
},
"linkedPipelineDraftId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RealTimeEndpointSummary": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"createdTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"updatedTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"computeType": {
"$ref": "#/components/schemas/ComputeEnvironmentType"
},
"computeName": {
"type": "string",
"nullable": true
},
"updatedBy": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RealTimeEndpointTestRequest": {
"type": "object",
"properties": {
"endPoint": {
"type": "string",
"nullable": true
},
"authKey": {
"type": "string",
"nullable": true
},
"payload": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Recurrence": {
"type": "object",
"properties": {
"frequency": {
"$ref": "#/components/schemas/Frequency"
},
"interval": {
"type": "integer",
"format": "int32"
},
"schedule": {
"$ref": "#/components/schemas/RecurrenceSchedule"
},
"endTime": {
"type": "string",
"nullable": true
},
"startTime": {
"type": "string",
"nullable": true
},
"timeZone": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RecurrenceFrequency": {
"enum": [
"Minute",
"Hour",
"Day",
"Week",
"Month"
],
"type": "string"
},
"RecurrencePattern": {
"type": "object",
"properties": {
"hours": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"minutes": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"weekdays": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Weekday"
},
"nullable": true
}
},
"additionalProperties": false
},
"RecurrenceSchedule": {
"type": "object",
"properties": {
"hours": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"minutes": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
},
"weekDays": {
"type": "array",
"items": {
"$ref": "#/components/schemas/WeekDays"
},
"nullable": true
},
"monthDays": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
}
},
"additionalProperties": false
},
"RegenerateServiceKeysRequest": {
"type": "object",
"properties": {
"keyType": {
"$ref": "#/components/schemas/KeyType"
},
"keyValue": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RegisterComponentMetaInfo": {
"type": "object",
"properties": {
"amlModuleName": {
"type": "string",
"nullable": true
},
"nameOnlyDisplayInfo": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"moduleVersionId": {
"type": "string",
"nullable": true
},
"snapshotId": {
"type": "string",
"nullable": true
},
"componentRegistrationType": {
"$ref": "#/components/schemas/ComponentRegistrationTypeEnum"
},
"moduleEntityFromYaml": {
"$ref": "#/components/schemas/ModuleEntity"
},
"setAsDefaultVersion": {
"type": "boolean"
},
"dataTypesFromYaml": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DataTypeCreationInfo"
},
"nullable": true
},
"dataTypeMechanism": {
"$ref": "#/components/schemas/DataTypeMechanism"
},
"identifierHash": {
"type": "string",
"nullable": true
},
"identifierHashes": {
"type": "object",
"properties": {
"IdentifierHash": {
"type": "string"
},
"IdentifierHashV2": {
"type": "string"
}
},
"additionalProperties": false,
"nullable": true
},
"contentHash": {
"type": "string",
"nullable": true
},
"extraHash": {
"type": "string",
"nullable": true
},
"extraHashes": {
"type": "object",
"properties": {
"IdentifierHash": {
"type": "string"
},
"IdentifierHashV2": {
"type": "string"
}
},
"additionalProperties": false,
"nullable": true
},
"registration": {
"type": "boolean",
"nullable": true
},
"validateOnly": {
"type": "boolean"
},
"skipWorkspaceRelatedCheck": {
"type": "boolean"
},
"intellectualPropertyProtectedWorkspaceComponentRegistrationAllowedPublisher": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"systemManagedRegistration": {
"type": "boolean"
},
"allowDupNameBetweenInputAndOuputPort": {
"type": "boolean"
},
"moduleSource": {
"type": "string",
"nullable": true
},
"moduleScope": {
"type": "string",
"nullable": true
},
"moduleAdditionalIncludesCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"moduleOSType": {
"type": "string",
"nullable": true
},
"moduleCodegenBy": {
"type": "string",
"nullable": true
},
"moduleClientSource": {
"type": "string",
"nullable": true
},
"moduleIsBuiltin": {
"type": "boolean"
},
"moduleRegisterEventExtensionFields": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"RegisterRegistryComponentMetaInfo": {
"type": "object",
"properties": {
"registryName": {
"type": "string",
"nullable": true
},
"intellectualPropertyPublisherInformation": {
"$ref": "#/components/schemas/IntellectualPropertyPublisherInformation"
},
"blobReferenceData": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/RegistryBlobReferenceData"
},
"description": "This is a dictionary",
"nullable": true
},
"amlModuleName": {
"type": "string",
"nullable": true
},
"nameOnlyDisplayInfo": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"moduleVersionId": {
"type": "string",
"nullable": true
},
"snapshotId": {
"type": "string",
"nullable": true
},
"componentRegistrationType": {
"$ref": "#/components/schemas/ComponentRegistrationTypeEnum"
},
"moduleEntityFromYaml": {
"$ref": "#/components/schemas/ModuleEntity"
},
"setAsDefaultVersion": {
"type": "boolean"
},
"dataTypesFromYaml": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DataTypeCreationInfo"
},
"nullable": true
},
"dataTypeMechanism": {
"$ref": "#/components/schemas/DataTypeMechanism"
},
"identifierHash": {
"type": "string",
"nullable": true
},
"identifierHashes": {
"type": "object",
"properties": {
"IdentifierHash": {
"type": "string"
},
"IdentifierHashV2": {
"type": "string"
}
},
"additionalProperties": false,
"nullable": true
},
"contentHash": {
"type": "string",
"nullable": true
},
"extraHash": {
"type": "string",
"nullable": true
},
"extraHashes": {
"type": "object",
"properties": {
"IdentifierHash": {
"type": "string"
},
"IdentifierHashV2": {
"type": "string"
}
},
"additionalProperties": false,
"nullable": true
},
"registration": {
"type": "boolean",
"nullable": true
},
"validateOnly": {
"type": "boolean"
},
"skipWorkspaceRelatedCheck": {
"type": "boolean"
},
"intellectualPropertyProtectedWorkspaceComponentRegistrationAllowedPublisher": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"systemManagedRegistration": {
"type": "boolean"
},
"allowDupNameBetweenInputAndOuputPort": {
"type": "boolean"
},
"moduleSource": {
"type": "string",
"nullable": true
},
"moduleScope": {
"type": "string",
"nullable": true
},
"moduleAdditionalIncludesCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"moduleOSType": {
"type": "string",
"nullable": true
},
"moduleCodegenBy": {
"type": "string",
"nullable": true
},
"moduleClientSource": {
"type": "string",
"nullable": true
},
"moduleIsBuiltin": {
"type": "boolean"
},
"moduleRegisterEventExtensionFields": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"RegisteredDataSetReference": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RegistrationOptions": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"datasetRegistrationOptions": {
"$ref": "#/components/schemas/DatasetRegistrationOptions"
}
},
"additionalProperties": false
},
"RegistryBlobReferenceData": {
"type": "object",
"properties": {
"dataReferenceId": {
"type": "string",
"nullable": true
},
"data": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RegistryIdentity": {
"type": "object",
"properties": {
"resourceId": {
"type": "string",
"nullable": true
},
"clientId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Relationship": {
"type": "object",
"properties": {
"relationType": {
"type": "string",
"nullable": true
},
"targetEntityId": {
"type": "string",
"nullable": true
},
"assetId": {
"type": "string",
"nullable": true
},
"entityType": {
"type": "string",
"nullable": true,
"readOnly": true
},
"direction": {
"type": "string",
"nullable": true
},
"entityContainerId": {
"type": "string",
"nullable": true,
"readOnly": true
}
}
},
"RemoteDockerComputeInfo": {
"type": "object",
"properties": {
"address": {
"type": "string",
"nullable": true
},
"username": {
"type": "string",
"nullable": true
},
"password": {
"type": "string",
"nullable": true
},
"privateKey": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ResourceConfig": {
"type": "object",
"properties": {
"gpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"cpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"memoryRequestInGB": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"ResourceConfiguration": {
"type": "object",
"properties": {
"gpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"cpuCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"memoryRequestInGB": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"ResourcesSetting": {
"type": "object",
"properties": {
"instanceSize": {
"type": "string",
"nullable": true
},
"sparkVersion": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RetrieveToolFuncResultRequest": {
"type": "object",
"properties": {
"func_path": {
"type": "string",
"nullable": true
},
"func_kwargs": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
},
"func_call_scenario": {
"$ref": "#/components/schemas/ToolFuncCallScenario"
}
},
"additionalProperties": false
},
"RetryConfiguration": {
"type": "object",
"properties": {
"maxRetryCount": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"RootError": {
"type": "object",
"properties": {
"code": {
"type": "string",
"description": "The service-defined error code. Supported error codes: ServiceError, UserError, ValidationError, AzureStorageError, TransientError, RequestThrottled.",
"nullable": true
},
"severity": {
"type": "integer",
"description": "The Severity of error",
"format": "int32",
"nullable": true
},
"message": {
"type": "string",
"description": "A human-readable representation of the error.",
"nullable": true
},
"messageFormat": {
"type": "string",
"description": "An unformatted version of the message with no variable substitution.",
"nullable": true
},
"messageParameters": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"description": "Value substitutions corresponding to the contents of MessageFormat.",
"nullable": true
},
"referenceCode": {
"type": "string",
"description": "This code can optionally be set by the system generating the error.\r\nIt should be used to classify the problem and identify the module and code area where the failure occured.",
"nullable": true
},
"detailsUri": {
"type": "string",
"description": "A URI which points to more details about the context of the error.",
"format": "uri",
"nullable": true
},
"target": {
"type": "string",
"description": "The target of the error (e.g., the name of the property in error).",
"nullable": true
},
"details": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RootError"
},
"description": "The related errors that occurred during the request.",
"nullable": true
},
"innerError": {
"$ref": "#/components/schemas/InnerErrorResponse"
},
"additionalInfo": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ErrorAdditionalInfo"
},
"description": "The error additional info.",
"nullable": true
}
},
"additionalProperties": false,
"description": "The root error."
},
"RunAnnotations": {
"type": "object",
"properties": {
"displayName": {
"type": "string",
"nullable": true
},
"status": {
"type": "string",
"nullable": true
},
"primaryMetricName": {
"type": "string",
"nullable": true
},
"estimatedCost": {
"type": "number",
"format": "double",
"nullable": true
},
"primaryMetricSummary": {
"$ref": "#/components/schemas/RunIndexMetricSummary"
},
"metrics": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/RunIndexMetricSummarySystemObject"
},
"nullable": true
},
"parameters": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"settings": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"modifiedTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"retainForLifetimeOfWorkspace": {
"type": "boolean",
"nullable": true
},
"error": {
"$ref": "#/components/schemas/IndexedErrorResponse"
},
"resourceMetricSummary": {
"$ref": "#/components/schemas/RunIndexResourceMetricSummary"
},
"jobCost": {
"$ref": "#/components/schemas/JobCost"
},
"computeDuration": {
"type": "string",
"format": "date-span",
"nullable": true
},
"computeDurationMilliseconds": {
"type": "number",
"format": "double",
"nullable": true
},
"effectiveStartTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"archived": {
"type": "boolean"
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
}
},
"RunCommandsCommandResult": {
"type": "object",
"properties": {
"command": {
"type": "string",
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"exit_code": {
"type": "integer",
"format": "int32"
},
"stdout": {
"type": "string",
"nullable": true
},
"stderr": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RunConfiguration": {
"type": "object",
"properties": {
"script": {
"type": "string",
"nullable": true
},
"scriptType": {
"$ref": "#/components/schemas/ScriptType"
},
"command": {
"type": "string",
"nullable": true
},
"useAbsolutePath": {
"type": "boolean"
},
"arguments": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"framework": {
"$ref": "#/components/schemas/Framework"
},
"communicator": {
"$ref": "#/components/schemas/Communicator"
},
"target": {
"type": "string",
"nullable": true
},
"autoClusterComputeSpecification": {
"$ref": "#/components/schemas/AutoClusterComputeSpecification"
},
"dataReferences": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataReferenceConfiguration"
},
"nullable": true
},
"data": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/Data"
},
"nullable": true
},
"inputAssets": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/InputAsset"
},
"nullable": true
},
"outputData": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/OutputData"
},
"nullable": true
},
"datacaches": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DatacacheConfiguration"
},
"nullable": true
},
"jobName": {
"type": "string",
"nullable": true
},
"maxRunDurationSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"nodeCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxNodeCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"instanceTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"priority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"credentialPassthrough": {
"type": "boolean"
},
"identity": {
"$ref": "#/components/schemas/IdentityConfiguration"
},
"environment": {
"$ref": "#/components/schemas/EnvironmentDefinition"
},
"history": {
"$ref": "#/components/schemas/HistoryConfiguration"
},
"spark": {
"$ref": "#/components/schemas/SparkConfiguration"
},
"parallelTask": {
"$ref": "#/components/schemas/ParallelTaskConfiguration"
},
"tensorflow": {
"$ref": "#/components/schemas/TensorflowConfiguration"
},
"mpi": {
"$ref": "#/components/schemas/MpiConfiguration"
},
"pyTorch": {
"$ref": "#/components/schemas/PyTorchConfiguration"
},
"ray": {
"$ref": "#/components/schemas/RayConfiguration"
},
"hdi": {
"$ref": "#/components/schemas/HdiConfiguration"
},
"docker": {
"$ref": "#/components/schemas/DockerConfiguration"
},
"commandReturnCodeConfig": {
"$ref": "#/components/schemas/CommandReturnCodeConfig"
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"applicationEndpoints": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ApplicationEndpointConfiguration"
},
"nullable": true
},
"parameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterDefinition"
},
"nullable": true
},
"autologgerSettings": {
"$ref": "#/components/schemas/AutologgerSettings"
},
"dataBricks": {
"$ref": "#/components/schemas/DatabricksConfiguration"
},
"trainingDiagnosticConfig": {
"$ref": "#/components/schemas/TrainingDiagnosticConfiguration"
},
"secretsConfiguration": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/SecretConfiguration"
},
"nullable": true
}
},
"additionalProperties": false
},
"RunDatasetReference": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RunDefinition": {
"type": "object",
"properties": {
"configuration": {
"$ref": "#/components/schemas/RunConfiguration"
},
"snapshotId": {
"type": "string",
"format": "uuid",
"nullable": true
},
"snapshots": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Snapshot"
},
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"runType": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"environmentAssetId": {
"type": "string",
"nullable": true
},
"primaryMetricName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"cancelReason": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"RunDetailsDto": {
"type": "object",
"properties": {
"runId": {
"type": "string",
"nullable": true
},
"runUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"parentRunUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"rootRunUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
},
"status": {
"type": "string",
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"dataContainerId": {
"type": "string",
"nullable": true
},
"createdTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"startTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"error": {
"$ref": "#/components/schemas/ErrorResponse"
},
"warnings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunDetailsWarningDto"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"parameters": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"services": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/EndpointSetting"
},
"description": "This is a dictionary",
"nullable": true
},
"inputDatasets": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/DatasetLineage"
},
"nullable": true
},
"outputDatasets": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/OutputDatasetLineage"
},
"nullable": true
},
"runDefinition": {
"nullable": true
},
"logFiles": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"jobCost": {
"$ref": "#/components/schemas/JobCost"
},
"revision": {
"type": "integer",
"format": "int64",
"nullable": true
},
"runTypeV2": {
"$ref": "#/components/schemas/RunTypeV2"
},
"settings": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"computeRequest": {
"$ref": "#/components/schemas/ComputeRequest"
},
"compute": {
"$ref": "#/components/schemas/Compute"
},
"createdBy": {
"$ref": "#/components/schemas/User"
},
"computeDuration": {
"type": "string",
"format": "date-span",
"nullable": true
},
"effectiveStartTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"runNumber": {
"type": "integer",
"format": "int32",
"nullable": true
},
"rootRunId": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"userId": {
"type": "string",
"nullable": true
},
"statusRevision": {
"type": "integer",
"format": "int64",
"nullable": true
},
"currentComputeTime": {
"type": "string",
"format": "date-span",
"nullable": true
},
"lastStartTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastModifiedBy": {
"$ref": "#/components/schemas/User"
},
"lastModifiedUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"duration": {
"type": "string",
"format": "date-span",
"nullable": true
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/TypedAssetReference"
},
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/TypedAssetReference"
},
"nullable": true
},
"currentAttemptId": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"RunDetailsWarningDto": {
"type": "object",
"properties": {
"source": {
"type": "string",
"nullable": true
},
"message": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RunDisplayNameGenerationType": {
"enum": [
"AutoAppend",
"UserProvidedMacro"
],
"type": "string"
},
"RunDto": {
"type": "object",
"properties": {
"runNumber": {
"type": "integer",
"format": "int32",
"nullable": true
},
"rootRunId": {
"type": "string",
"nullable": true
},
"createdUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"createdBy": {
"$ref": "#/components/schemas/User"
},
"userId": {
"type": "string",
"nullable": true
},
"token": {
"type": "string",
"nullable": true
},
"tokenExpiryTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"error": {
"$ref": "#/components/schemas/ErrorResponse"
},
"warnings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunDetailsWarningDto"
},
"nullable": true
},
"revision": {
"type": "integer",
"format": "int64",
"nullable": true
},
"statusRevision": {
"type": "integer",
"format": "int64",
"nullable": true
},
"runUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"parentRunUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"rootRunUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"lastStartTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"currentComputeTime": {
"type": "string",
"format": "date-span",
"nullable": true
},
"computeDuration": {
"type": "string",
"format": "date-span",
"nullable": true
},
"effectiveStartTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastModifiedBy": {
"$ref": "#/components/schemas/User"
},
"lastModifiedUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"duration": {
"type": "string",
"format": "date-span",
"nullable": true
},
"cancelationReason": {
"type": "string",
"nullable": true
},
"currentAttemptId": {
"type": "integer",
"format": "int32",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"status": {
"type": "string",
"nullable": true
},
"startTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTimeUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"scheduleId": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"dataContainerId": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"hidden": {
"type": "boolean",
"nullable": true
},
"runType": {
"type": "string",
"nullable": true
},
"runTypeV2": {
"$ref": "#/components/schemas/RunTypeV2"
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"parameters": {
"type": "object",
"additionalProperties": {
"nullable": true
},
"nullable": true
},
"actionUris": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"scriptName": {
"type": "string",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
},
"uniqueChildRunComputeTargets": {
"uniqueItems": true,
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"settings": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"services": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/EndpointSetting"
},
"nullable": true
},
"inputDatasets": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/DatasetLineage"
},
"nullable": true
},
"outputDatasets": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/OutputDatasetLineage"
},
"nullable": true
},
"runDefinition": {
"nullable": true
},
"jobSpecification": {
"nullable": true
},
"primaryMetricName": {
"type": "string",
"nullable": true
},
"createdFrom": {
"$ref": "#/components/schemas/CreatedFromDto"
},
"cancelUri": {
"type": "string",
"nullable": true
},
"completeUri": {
"type": "string",
"nullable": true
},
"diagnosticsUri": {
"type": "string",
"nullable": true
},
"computeRequest": {
"$ref": "#/components/schemas/ComputeRequest"
},
"compute": {
"$ref": "#/components/schemas/Compute"
},
"retainForLifetimeOfWorkspace": {
"type": "boolean",
"nullable": true
},
"queueingInfo": {
"$ref": "#/components/schemas/QueueingInfo"
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/TypedAssetReference"
},
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/TypedAssetReference"
},
"nullable": true
}
},
"additionalProperties": false
},
"RunIndexEntity": {
"type": "object",
"properties": {
"schemaId": {
"type": "string",
"nullable": true
},
"entityId": {
"type": "string",
"nullable": true
},
"kind": {
"$ref": "#/components/schemas/EntityKind"
},
"annotations": {
"$ref": "#/components/schemas/RunAnnotations"
},
"properties": {
"$ref": "#/components/schemas/RunProperties"
},
"internal": {
"$ref": "#/components/schemas/ExtensibleObject"
},
"updateSequence": {
"type": "integer",
"format": "int64"
},
"type": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true,
"readOnly": true
},
"entityContainerId": {
"type": "string",
"nullable": true,
"readOnly": true
},
"entityObjectId": {
"type": "string",
"nullable": true,
"readOnly": true
},
"resourceType": {
"type": "string",
"nullable": true,
"readOnly": true
},
"relationships": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Relationship"
},
"nullable": true
},
"assetId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RunIndexMetricSummary": {
"type": "object",
"properties": {
"count": {
"type": "integer",
"format": "int64"
},
"lastValue": {
"nullable": true
},
"minimumValue": {
"nullable": true
},
"maximumValue": {
"nullable": true
},
"metricType": {
"type": "string",
"nullable": true
}
}
},
"RunIndexMetricSummarySystemObject": {
"type": "object",
"properties": {
"count": {
"type": "integer",
"format": "int64"
},
"lastValue": {
"nullable": true
},
"minimumValue": {
"nullable": true
},
"maximumValue": {
"nullable": true
},
"metricType": {
"type": "string",
"nullable": true
}
}
},
"RunIndexResourceMetricSummary": {
"type": "object",
"properties": {
"gpuUtilizationPercentLastHour": {
"type": "number",
"format": "double",
"nullable": true
},
"gpuMemoryUtilizationPercentLastHour": {
"type": "number",
"format": "double",
"nullable": true
},
"gpuEnergyJoules": {
"type": "number",
"format": "double",
"nullable": true
},
"resourceMetricNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
}
},
"RunMetricDto": {
"type": "object",
"properties": {
"runId": {
"type": "string",
"nullable": true
},
"metricId": {
"type": "string",
"format": "uuid"
},
"dataContainerId": {
"type": "string",
"nullable": true
},
"metricType": {
"type": "string",
"nullable": true
},
"createdUtc": {
"type": "string",
"format": "date-time",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"numCells": {
"type": "integer",
"format": "int32"
},
"dataLocation": {
"type": "string",
"nullable": true
},
"cells": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"schema": {
"$ref": "#/components/schemas/MetricSchemaDto"
}
},
"additionalProperties": false
},
"RunMetricsTypesDto": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RunProperties": {
"type": "object",
"properties": {
"dataContainerId": {
"type": "string",
"nullable": true
},
"targetName": {
"type": "string",
"nullable": true
},
"runName": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"rootRunId": {
"type": "string",
"nullable": true
},
"runType": {
"type": "string",
"nullable": true
},
"runTypeV2": {
"$ref": "#/components/schemas/RunTypeV2Index"
},
"scriptName": {
"type": "string",
"nullable": true
},
"experimentId": {
"type": "string",
"nullable": true
},
"runUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"parentRunUuid": {
"type": "string",
"format": "uuid",
"nullable": true
},
"runNumber": {
"type": "integer",
"format": "int32"
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"computeRequest": {
"$ref": "#/components/schemas/ComputeRequest"
},
"compute": {
"$ref": "#/components/schemas/Compute"
},
"userProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"actionUris": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"duration": {
"type": "string",
"format": "date-span",
"nullable": true
},
"durationMilliseconds": {
"type": "number",
"format": "double",
"nullable": true
},
"creationContext": {
"$ref": "#/components/schemas/CreationContext"
}
}
},
"RunSettingParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"parameterType": {
"$ref": "#/components/schemas/RunSettingParameterType"
},
"isOptional": {
"type": "boolean",
"nullable": true
},
"defaultValue": {
"type": "string",
"nullable": true
},
"lowerBound": {
"type": "string",
"nullable": true
},
"upperBound": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"runSettingUIHint": {
"$ref": "#/components/schemas/RunSettingUIParameterHint"
},
"argumentName": {
"type": "string",
"nullable": true
},
"sectionName": {
"type": "string",
"nullable": true
},
"sectionDescription": {
"type": "string",
"nullable": true
},
"sectionArgumentName": {
"type": "string",
"nullable": true
},
"examples": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enumValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enumValuesToArgumentStrings": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enabledByParameterName": {
"type": "string",
"nullable": true
},
"enabledByParameterValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"disabledByParameters": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"moduleRunSettingType": {
"$ref": "#/components/schemas/ModuleRunSettingTypes"
},
"linkedParameterDefaultValueMapping": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"linkedParameterKeyName": {
"type": "string",
"nullable": true
},
"supportLinkSetting": {
"type": "boolean"
}
},
"additionalProperties": false
},
"RunSettingParameterAssignment": {
"type": "object",
"properties": {
"useGraphDefaultCompute": {
"type": "boolean",
"nullable": true
},
"mlcComputeType": {
"type": "string",
"nullable": true
},
"computeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"linkedParameterName": {
"type": "string",
"nullable": true
},
"valueType": {
"$ref": "#/components/schemas/ParameterValueType"
},
"assignmentsToConcatenate": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"dataPathAssignment": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"dataSetDefinitionValueAssignment": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"name": {
"type": "string",
"nullable": true
},
"value": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RunSettingParameterType": {
"enum": [
"Undefined",
"Int",
"Double",
"Bool",
"String",
"JsonString",
"YamlString",
"StringList"
],
"type": "string"
},
"RunSettingUIParameterHint": {
"type": "object",
"properties": {
"uiWidgetType": {
"$ref": "#/components/schemas/RunSettingUIWidgetTypeEnum"
},
"jsonEditor": {
"$ref": "#/components/schemas/UIJsonEditor"
},
"yamlEditor": {
"$ref": "#/components/schemas/UIYamlEditor"
},
"computeSelection": {
"$ref": "#/components/schemas/UIComputeSelection"
},
"hyperparameterConfiguration": {
"$ref": "#/components/schemas/UIHyperparameterConfiguration"
},
"uxIgnore": {
"type": "boolean"
},
"anonymous": {
"type": "boolean"
},
"supportReset": {
"type": "boolean"
}
},
"additionalProperties": false
},
"RunSettingUIWidgetTypeEnum": {
"enum": [
"Default",
"ComputeSelection",
"JsonEditor",
"Mode",
"SearchSpaceParameter",
"SectionToggle",
"YamlEditor",
"EnableRuntimeSweep",
"DataStoreSelection",
"Checkbox",
"MultipleSelection",
"HyperparameterConfiguration",
"JsonTextBox",
"Connection",
"Static"
],
"type": "string"
},
"RunStatus": {
"enum": [
"NotStarted",
"Unapproved",
"Pausing",
"Paused",
"Starting",
"Preparing",
"Queued",
"Running",
"Finalizing",
"CancelRequested",
"Completed",
"Failed",
"Canceled"
],
"type": "string"
},
"RunStatusPeriod": {
"type": "object",
"properties": {
"status": {
"$ref": "#/components/schemas/RunStatus"
},
"subPeriods": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubStatusPeriod"
},
"nullable": true
},
"start": {
"type": "integer",
"format": "int64",
"nullable": true
},
"end": {
"type": "integer",
"format": "int64",
"nullable": true
}
},
"additionalProperties": false
},
"RunType": {
"enum": [
"HTTP",
"SDK",
"Schedule",
"Portal"
],
"type": "string"
},
"RunTypeV2": {
"type": "object",
"properties": {
"orchestrator": {
"type": "string",
"nullable": true
},
"traits": {
"uniqueItems": true,
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"attribution": {
"type": "string",
"nullable": true
},
"computeType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RunTypeV2Index": {
"type": "object",
"properties": {
"orchestrator": {
"type": "string",
"nullable": true
},
"traits": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"attribution": {
"type": "string",
"nullable": true
},
"computeType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RuntimeConfiguration": {
"type": "object",
"properties": {
"baseImage": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"RuntimeStatusEnum": {
"enum": [
"Unavailable",
"Failed",
"NotExist",
"Starting",
"Stopping"
],
"type": "string"
},
"RuntimeType": {
"enum": [
"ManagedOnlineEndpoint",
"ComputeInstance",
"TrainingSession"
],
"type": "string"
},
"SampleMeta": {
"type": "object",
"properties": {
"image": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"docLink": {
"type": "string",
"nullable": true
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"createdAt": {
"type": "string",
"format": "date-time"
},
"updatedAt": {
"type": "string",
"format": "date-time"
},
"feedName": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SamplingAlgorithmType": {
"enum": [
"Random",
"Grid",
"Bayesian"
],
"type": "string"
},
"SavePipelineDraftRequest": {
"type": "object",
"properties": {
"uiWidgetMetaInfos": {
"type": "array",
"items": {
"$ref": "#/components/schemas/UIWidgetMetaInfo"
},
"nullable": true
},
"webServiceInputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/WebServicePort"
},
"nullable": true
},
"webServiceOutputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/WebServicePort"
},
"nullable": true
},
"nodesInDraft": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"pipelineType": {
"$ref": "#/components/schemas/PipelineType"
},
"pipelineDraftMode": {
"$ref": "#/components/schemas/PipelineDraftMode"
},
"graphComponentsMode": {
"$ref": "#/components/schemas/GraphComponentsMode"
},
"subPipelinesInfo": {
"$ref": "#/components/schemas/SubPipelinesInfo"
},
"flattenedSubGraphs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineSubDraft"
},
"nullable": true
},
"pipelineParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"graph": {
"$ref": "#/components/schemas/GraphDraftEntity"
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"moduleNodeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeRunSetting"
},
"nullable": true
},
"moduleNodeUIInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeUIInputSetting"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"datasetAccessModes": {
"$ref": "#/components/schemas/DatasetAccessModes"
}
},
"additionalProperties": false
},
"SavedDataSetReference": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ScheduleBase": {
"type": "object",
"properties": {
"scheduleStatus": {
"$ref": "#/components/schemas/MfeInternalScheduleStatus"
},
"scheduleType": {
"$ref": "#/components/schemas/ScheduleType"
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"timeZone": {
"type": "string",
"nullable": true
},
"expression": {
"type": "string",
"nullable": true
},
"frequency": {
"$ref": "#/components/schemas/RecurrenceFrequency"
},
"interval": {
"type": "integer",
"format": "int32"
},
"pattern": {
"$ref": "#/components/schemas/RecurrencePattern"
}
},
"additionalProperties": false
},
"ScheduleProvisioningStatus": {
"enum": [
"Creating",
"Updating",
"Deleting",
"Succeeded",
"Failed",
"Canceled"
],
"type": "string"
},
"ScheduleStatus": {
"enum": [
"Enabled",
"Disabled"
],
"type": "string"
},
"ScheduleType": {
"enum": [
"Cron",
"Recurrence"
],
"type": "string"
},
"SchemaContractsCreatedBy": {
"type": "object",
"properties": {
"userObjectId": {
"type": "string",
"nullable": true
},
"userTenantId": {
"type": "string",
"nullable": true
},
"userName": {
"type": "string",
"nullable": true
},
"userPrincipalName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ScopeCloudConfiguration": {
"type": "object",
"properties": {
"inputPathSuffixes": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ArgumentAssignment"
},
"description": "This is a dictionary",
"nullable": true
},
"outputPathSuffixes": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ArgumentAssignment"
},
"description": "This is a dictionary",
"nullable": true
},
"userAlias": {
"type": "string",
"nullable": true
},
"tokens": {
"type": "integer",
"format": "int32",
"nullable": true
},
"autoToken": {
"type": "integer",
"format": "int32",
"nullable": true
},
"vcp": {
"type": "number",
"format": "float",
"nullable": true
}
},
"additionalProperties": false
},
"ScopeType": {
"enum": [
"Global",
"Tenant",
"Subscription",
"ResourceGroup",
"Workspace"
],
"type": "string"
},
"ScriptType": {
"enum": [
"Python",
"Notebook"
],
"type": "string"
},
"Seasonality": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/SeasonalityMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"SeasonalityMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"SecretConfiguration": {
"type": "object",
"properties": {
"workspace_secret_name": {
"type": "string",
"nullable": true
},
"uri": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"Section": {
"enum": [
"Gallery",
"Template"
],
"type": "string"
},
"SegmentedResult`1": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowIndexEntity"
},
"nullable": true
},
"continuationToken": {
"type": "string",
"nullable": true
},
"count": {
"type": "integer",
"format": "int32",
"nullable": true
},
"nextLink": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ServiceLogRequest": {
"type": "object",
"properties": {
"logLevel": {
"$ref": "#/components/schemas/LogLevel"
},
"message": {
"type": "string",
"nullable": true
},
"timestamp": {
"type": "string",
"format": "date-time",
"nullable": true
}
},
"additionalProperties": false
},
"SessionApplication": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
},
"image": {
"type": "string",
"nullable": true
},
"envVars": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"pythonPipRequirements": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"volumes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Volume"
},
"nullable": true
},
"setupResults": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SessionApplicationRunCommandResult"
},
"nullable": true
},
"port": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"SessionApplicationRunCommandResult": {
"type": "object",
"properties": {
"command": {
"type": "string",
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"exitCode": {
"type": "integer",
"format": "int32"
},
"stdOut": {
"type": "string",
"nullable": true
},
"stdErr": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SessionConfigModeEnum": {
"enum": [
"Default",
"ForceInstallPackage",
"ForceReset"
],
"type": "string"
},
"SessionProperties": {
"type": "object",
"properties": {
"sessionId": {
"type": "string",
"nullable": true
},
"subscriptionId": {
"type": "string",
"nullable": true
},
"resourceGroupName": {
"type": "string",
"nullable": true
},
"workspaceName": {
"type": "string",
"nullable": true
},
"existingUserComputeInstanceName": {
"type": "string",
"nullable": true
},
"userObjectId": {
"type": "string",
"nullable": true
},
"userTenantId": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64"
},
"applications": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SessionApplication"
},
"nullable": true
},
"application": {
"$ref": "#/components/schemas/SessionApplication"
},
"lastAliveTime": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"SessionSetupModeEnum": {
"enum": [
"ClientWait",
"SystemWait"
],
"type": "string"
},
"SetupFlowSessionAction": {
"enum": [
"Install",
"Reset",
"Update",
"Delete"
],
"type": "string"
},
"SetupFlowSessionRequest": {
"type": "object",
"properties": {
"action": {
"$ref": "#/components/schemas/SetupFlowSessionAction"
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
},
"computeName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SeverityLevel": {
"enum": [
"Critical",
"Error",
"Warning",
"Info"
],
"type": "string"
},
"SharingScope": {
"type": "object",
"properties": {
"type": {
"$ref": "#/components/schemas/ScopeType"
},
"identifier": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ShortSeriesHandlingConfiguration": {
"enum": [
"Auto",
"Pad",
"Drop"
],
"type": "string"
},
"Snapshot": {
"type": "object",
"properties": {
"id": {
"type": "string",
"format": "uuid",
"nullable": true
},
"directoryName": {
"type": "string",
"nullable": true
},
"snapshotAssetId": {
"type": "string",
"nullable": true
},
"snapshotEntityId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SnapshotInfo": {
"type": "object",
"properties": {
"rootDownloadUrl": {
"type": "string",
"nullable": true
},
"snapshots": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DownloadResourceInfo"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"SourceCodeDataReference": {
"type": "object",
"properties": {
"dataStoreName": {
"type": "string",
"nullable": true
},
"path": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SparkConfiguration": {
"type": "object",
"properties": {
"configuration": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"files": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"archives": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"jars": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"pyFiles": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"sparkPoolResourceId": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SparkJarTaskDto": {
"type": "object",
"properties": {
"main_class_name": {
"type": "string",
"nullable": true
},
"parameters": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"SparkJob": {
"type": "object",
"properties": {
"jobType": {
"$ref": "#/components/schemas/JobType"
},
"resources": {
"$ref": "#/components/schemas/SparkResourceConfiguration"
},
"args": {
"type": "string",
"nullable": true
},
"codeId": {
"type": "string",
"nullable": true
},
"entry": {
"$ref": "#/components/schemas/SparkJobEntry"
},
"pyFiles": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"jars": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"files": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"archives": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"environmentId": {
"type": "string",
"nullable": true
},
"inputDataBindings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/InputDataBinding"
},
"nullable": true
},
"outputDataBindings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/OutputDataBinding"
},
"nullable": true
},
"conf": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"provisioningState": {
"$ref": "#/components/schemas/JobProvisioningState"
},
"parentJobName": {
"type": "string",
"nullable": true
},
"displayName": {
"type": "string",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/JobStatus"
},
"interactionEndpoints": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/JobEndpoint"
},
"nullable": true
},
"identity": {
"$ref": "#/components/schemas/MfeInternalIdentityConfiguration"
},
"compute": {
"$ref": "#/components/schemas/ComputeConfiguration"
},
"priority": {
"type": "integer",
"format": "int32",
"nullable": true
},
"output": {
"$ref": "#/components/schemas/JobOutputArtifacts"
},
"isArchived": {
"type": "boolean"
},
"schedule": {
"$ref": "#/components/schemas/ScheduleBase"
},
"componentId": {
"type": "string",
"nullable": true
},
"notificationSetting": {
"$ref": "#/components/schemas/NotificationSetting"
},
"secretsConfiguration": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/MfeInternalSecretConfiguration"
},
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"SparkJobEntry": {
"type": "object",
"properties": {
"file": {
"type": "string",
"nullable": true
},
"className": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SparkMavenPackage": {
"type": "object",
"properties": {
"group": {
"type": "string",
"nullable": true
},
"artifact": {
"type": "string",
"nullable": true
},
"version": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SparkPythonTaskDto": {
"type": "object",
"properties": {
"python_file": {
"type": "string",
"nullable": true
},
"parameters": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"SparkResourceConfiguration": {
"type": "object",
"properties": {
"instanceType": {
"type": "string",
"nullable": true
},
"runtimeVersion": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SparkSection": {
"type": "object",
"properties": {
"repositories": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"packages": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SparkMavenPackage"
},
"nullable": true
},
"precachePackages": {
"type": "boolean"
}
},
"additionalProperties": false
},
"SparkSubmitTaskDto": {
"type": "object",
"properties": {
"parameters": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"SqlDataPath": {
"type": "object",
"properties": {
"sqlTableName": {
"type": "string",
"nullable": true
},
"sqlQuery": {
"type": "string",
"nullable": true
},
"sqlStoredProcedureName": {
"type": "string",
"nullable": true
},
"sqlStoredProcedureParams": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StoredProcedureParameter"
},
"nullable": true
}
},
"additionalProperties": false
},
"StackEnsembleSettings": {
"type": "object",
"properties": {
"stackMetaLearnerType": {
"$ref": "#/components/schemas/StackMetaLearnerType"
},
"stackMetaLearnerTrainPercentage": {
"type": "number",
"format": "double",
"nullable": true
},
"stackMetaLearnerKWargs": {
"nullable": true
}
},
"additionalProperties": false
},
"StackMetaLearnerType": {
"enum": [
"None",
"LogisticRegression",
"LogisticRegressionCV",
"LightGBMClassifier",
"ElasticNet",
"ElasticNetCV",
"LightGBMRegressor",
"LinearRegression"
],
"type": "string"
},
"StandbyPoolProperties": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"count": {
"type": "integer",
"format": "int32"
},
"vmSize": {
"type": "string",
"nullable": true
},
"standbyAvailableInstances": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StandbyPoolResourceStatus"
},
"nullable": true
}
},
"additionalProperties": false
},
"StandbyPoolResourceStatus": {
"type": "object",
"properties": {
"status": {
"type": "string",
"nullable": true
},
"error": {
"$ref": "#/components/schemas/CloudError"
}
},
"additionalProperties": false
},
"StartRunResult": {
"required": [
"runId"
],
"type": "object",
"properties": {
"runId": {
"minLength": 1,
"type": "string"
}
},
"additionalProperties": false
},
"StepRunProfile": {
"type": "object",
"properties": {
"stepRunId": {
"type": "string",
"nullable": true
},
"stepRunNumber": {
"type": "integer",
"format": "int32",
"nullable": true
},
"runUrl": {
"type": "string",
"nullable": true
},
"computeTarget": {
"type": "string",
"nullable": true
},
"computeTargetUrl": {
"type": "string",
"nullable": true
},
"nodeId": {
"type": "string",
"nullable": true
},
"nodeName": {
"type": "string",
"nullable": true
},
"stepName": {
"type": "string",
"nullable": true
},
"createTime": {
"type": "integer",
"format": "int64",
"nullable": true
},
"startTime": {
"type": "integer",
"format": "int64",
"nullable": true
},
"endTime": {
"type": "integer",
"format": "int64",
"nullable": true
},
"status": {
"$ref": "#/components/schemas/RunStatus"
},
"statusDetail": {
"type": "string",
"nullable": true
},
"isReused": {
"type": "boolean"
},
"reusedPipelineRunId": {
"type": "string",
"nullable": true
},
"reusedStepRunId": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"statusTimeline": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunStatusPeriod"
},
"nullable": true
}
},
"additionalProperties": false
},
"StorageAuthType": {
"enum": [
"MSI",
"ConnectionString",
"SAS"
],
"type": "string"
},
"StorageInfo": {
"type": "object",
"properties": {
"storageAuthType": {
"$ref": "#/components/schemas/StorageAuthType"
},
"connectionString": {
"type": "string",
"nullable": true
},
"sasToken": {
"type": "string",
"nullable": true
},
"accountName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"StoredProcedureParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"value": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/StoredProcedureParameterType"
}
},
"additionalProperties": false
},
"StoredProcedureParameterType": {
"enum": [
"String",
"Int",
"Decimal",
"Guid",
"Boolean",
"Date"
],
"type": "string"
},
"Stream": {
"type": "object",
"properties": {
"canRead": {
"type": "boolean",
"readOnly": true
},
"canWrite": {
"type": "boolean",
"readOnly": true
},
"canSeek": {
"type": "boolean",
"readOnly": true
},
"canTimeout": {
"type": "boolean",
"readOnly": true
},
"length": {
"type": "integer",
"format": "int64",
"readOnly": true
},
"position": {
"type": "integer",
"format": "int64"
},
"readTimeout": {
"type": "integer",
"format": "int32"
},
"writeTimeout": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"StructuredInterface": {
"type": "object",
"properties": {
"commandLinePattern": {
"type": "string",
"nullable": true
},
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StructuredInterfaceInput"
},
"nullable": true
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StructuredInterfaceOutput"
},
"nullable": true
},
"controlOutputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ControlOutput"
},
"nullable": true
},
"parameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StructuredInterfaceParameter"
},
"nullable": true
},
"metadataParameters": {
"type": "array",
"items": {
"$ref": "#/components/schemas/StructuredInterfaceParameter"
},
"nullable": true
},
"arguments": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ArgumentAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"StructuredInterfaceInput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"dataTypeIdsList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"isOptional": {
"type": "boolean"
},
"description": {
"type": "string",
"nullable": true
},
"skipProcessing": {
"type": "boolean"
},
"isResource": {
"type": "boolean"
},
"dataStoreMode": {
"$ref": "#/components/schemas/AEVADataStoreMode"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"overwrite": {
"type": "boolean"
},
"dataReferenceName": {
"type": "string",
"nullable": true
},
"datasetTypes": {
"uniqueItems": true,
"type": "array",
"items": {
"$ref": "#/components/schemas/DatasetType"
},
"nullable": true
}
},
"additionalProperties": false
},
"StructuredInterfaceOutput": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"dataTypeId": {
"type": "string",
"nullable": true
},
"passThroughDataTypeInputName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"skipProcessing": {
"type": "boolean"
},
"IsArtifact": {
"type": "boolean"
},
"dataStoreName": {
"type": "string",
"nullable": true
},
"dataStoreMode": {
"$ref": "#/components/schemas/AEVADataStoreMode"
},
"pathOnCompute": {
"type": "string",
"nullable": true
},
"overwrite": {
"type": "boolean"
},
"dataReferenceName": {
"type": "string",
"nullable": true
},
"trainingOutput": {
"$ref": "#/components/schemas/TrainingOutput"
},
"datasetOutput": {
"$ref": "#/components/schemas/DatasetOutput"
},
"AssetOutputSettings": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"EarlyAvailable": {
"type": "boolean"
}
},
"additionalProperties": false
},
"StructuredInterfaceParameter": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"label": {
"type": "string",
"nullable": true
},
"parameterType": {
"$ref": "#/components/schemas/ParameterType"
},
"isOptional": {
"type": "boolean"
},
"defaultValue": {
"type": "string",
"nullable": true
},
"lowerBound": {
"type": "string",
"nullable": true
},
"upperBound": {
"type": "string",
"nullable": true
},
"enumValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enumValuesToArgumentStrings": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"setEnvironmentVariable": {
"type": "boolean"
},
"environmentVariableOverride": {
"type": "string",
"nullable": true
},
"enabledByParameterName": {
"type": "string",
"nullable": true
},
"enabledByParameterValues": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"uiHint": {
"$ref": "#/components/schemas/UIParameterHint"
},
"groupNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"argumentName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"StudioMigrationInfo": {
"type": "object",
"properties": {
"sourceWorkspaceId": {
"type": "string",
"nullable": true
},
"sourceExperimentId": {
"type": "string",
"nullable": true
},
"sourceExperimentLink": {
"type": "string",
"nullable": true
},
"failedNodeIdList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"errorMessage": {
"type": "string",
"nullable": true,
"readOnly": true
}
},
"additionalProperties": false
},
"SubGraphConcatenateAssignment": {
"type": "object",
"properties": {
"concatenateParameter": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ParameterAssignment"
},
"nullable": true
},
"parameterAssignments": {
"$ref": "#/components/schemas/SubPipelineParameterAssignment"
}
},
"additionalProperties": false
},
"SubGraphConfiguration": {
"type": "object",
"properties": {
"graphId": {
"type": "string",
"nullable": true
},
"graphDraftId": {
"type": "string",
"nullable": true
},
"DefaultCloudPriority": {
"$ref": "#/components/schemas/CloudPrioritySetting"
},
"IsDynamic": {
"type": "boolean",
"default": false,
"nullable": true
}
},
"additionalProperties": false
},
"SubGraphConnectionInfo": {
"type": "object",
"properties": {
"nodeId": {
"type": "string",
"nullable": true
},
"portName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SubGraphDataPathParameterAssignment": {
"type": "object",
"properties": {
"dataSetPathParameter": {
"$ref": "#/components/schemas/DataSetPathParameter"
},
"dataSetPathParameterAssignments": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"SubGraphInfo": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"defaultComputeTarget": {
"$ref": "#/components/schemas/ComputeSetting"
},
"defaultDataStore": {
"$ref": "#/components/schemas/DatastoreSetting"
},
"id": {
"type": "string",
"nullable": true
},
"parentGraphId": {
"type": "string",
"nullable": true
},
"pipelineDefinitionId": {
"type": "string",
"nullable": true
},
"subGraphParameterAssignment": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphParameterAssignment"
},
"nullable": true
},
"subGraphConcatenateAssignment": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphConcatenateAssignment"
},
"nullable": true
},
"subGraphDataPathParameterAssignment": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphDataPathParameterAssignment"
},
"nullable": true
},
"subGraphDefaultComputeTargetNodes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"subGraphDefaultDataStoreNodes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphPortInfo"
},
"nullable": true
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphPortInfo"
},
"nullable": true
}
},
"additionalProperties": false
},
"SubGraphParameterAssignment": {
"type": "object",
"properties": {
"parameter": {
"$ref": "#/components/schemas/Parameter"
},
"parameterAssignments": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubPipelineParameterAssignment"
},
"nullable": true
}
},
"additionalProperties": false
},
"SubGraphPortInfo": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"internal": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphConnectionInfo"
},
"nullable": true
},
"external": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphConnectionInfo"
},
"nullable": true
}
},
"additionalProperties": false
},
"SubPipelineDefinition": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"defaultComputeTarget": {
"$ref": "#/components/schemas/ComputeSetting"
},
"defaultDataStore": {
"$ref": "#/components/schemas/DatastoreSetting"
},
"pipelineFunctionName": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"parentDefinitionId": {
"type": "string",
"nullable": true
},
"fromModuleName": {
"type": "string",
"nullable": true
},
"parameterList": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Kwarg"
},
"nullable": true
}
},
"additionalProperties": false
},
"SubPipelineParameterAssignment": {
"type": "object",
"properties": {
"nodeId": {
"type": "string",
"nullable": true
},
"parameterName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"SubPipelinesInfo": {
"type": "object",
"properties": {
"subGraphInfo": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubGraphInfo"
},
"nullable": true
},
"nodeIdToSubGraphIdMapping": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"subPipelineDefinition": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubPipelineDefinition"
},
"nullable": true
}
},
"additionalProperties": false
},
"SubStatusPeriod": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"subPeriods": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SubStatusPeriod"
},
"nullable": true
},
"start": {
"type": "integer",
"format": "int64",
"nullable": true
},
"end": {
"type": "integer",
"format": "int64",
"nullable": true
}
},
"additionalProperties": false
},
"SubmitBulkRunRequest": {
"type": "object",
"properties": {
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"flowDefinitionResourceId": {
"type": "string",
"nullable": true
},
"flowDefinitionDataStoreName": {
"type": "string",
"nullable": true
},
"flowDefinitionBlobPath": {
"type": "string",
"nullable": true
},
"flowDefinitionDataUri": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"runDisplayName": {
"type": "string",
"nullable": true
},
"runExperimentName": {
"type": "string",
"nullable": true
},
"nodeVariant": {
"type": "string",
"nullable": true
},
"variantRunId": {
"type": "string",
"nullable": true
},
"baselineRunId": {
"type": "string",
"nullable": true
},
"sessionId": {
"type": "string",
"nullable": true
},
"sessionSetupMode": {
"$ref": "#/components/schemas/SessionSetupModeEnum"
},
"sessionConfigMode": {
"$ref": "#/components/schemas/SessionConfigModeEnum"
},
"flowLineageId": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
},
"computeName": {
"type": "string",
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"connections": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary"
},
"description": "This is a dictionary",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputDataStore": {
"type": "string",
"nullable": true
},
"runDisplayNameGenerationType": {
"$ref": "#/components/schemas/RunDisplayNameGenerationType"
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"workerCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeoutInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"promptflowEngineType": {
"$ref": "#/components/schemas/PromptflowEngineType"
}
},
"additionalProperties": false
},
"SubmitBulkRunResponse": {
"type": "object",
"properties": {
"nextActionIntervalInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"actionType": {
"$ref": "#/components/schemas/ActionType"
},
"flow_runs": {
"type": "array",
"items": { },
"nullable": true
},
"node_runs": {
"type": "array",
"items": { },
"nullable": true
},
"errorResponse": {
"$ref": "#/components/schemas/ErrorResponse"
},
"flowName": {
"type": "string",
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"flowRunId": {
"type": "string",
"nullable": true
},
"flowGraph": {
"$ref": "#/components/schemas/FlowGraph"
},
"flowGraphLayout": {
"$ref": "#/components/schemas/FlowGraphLayout"
},
"flowRunResourceId": {
"type": "string",
"nullable": true
},
"bulkTestId": {
"type": "string",
"nullable": true
},
"batchInputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"createdBy": {
"$ref": "#/components/schemas/SchemaContractsCreatedBy"
},
"createdOn": {
"type": "string",
"format": "date-time",
"nullable": true
},
"flowRunType": {
"$ref": "#/components/schemas/FlowRunTypeEnum"
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"runtimeName": {
"type": "string",
"nullable": true
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"flowRunLogs": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"flowTestMode": {
"$ref": "#/components/schemas/FlowTestMode"
},
"flowTestInfos": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowTestInfo"
},
"nullable": true
},
"workingDirectory": {
"type": "string",
"nullable": true
},
"flowDagFileRelativePath": {
"type": "string",
"nullable": true
},
"flowSnapshotId": {
"type": "string",
"nullable": true
},
"variantRunToEvaluationRunsIdMapping": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"SubmitFlowRequest": {
"type": "object",
"properties": {
"flowRunId": {
"type": "string",
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"flowId": {
"type": "string",
"nullable": true
},
"flow": {
"$ref": "#/components/schemas/Flow"
},
"flowSubmitRunSettings": {
"$ref": "#/components/schemas/FlowSubmitRunSettings"
},
"asyncSubmission": {
"type": "boolean"
},
"useWorkspaceConnection": {
"type": "boolean"
},
"enableAsyncFlowTest": {
"type": "boolean"
},
"runDisplayNameGenerationType": {
"$ref": "#/components/schemas/RunDisplayNameGenerationType"
}
},
"additionalProperties": false
},
"SubmitPipelineRunRequest": {
"type": "object",
"properties": {
"computeTarget": {
"type": "string",
"nullable": true
},
"flattenedSubGraphs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PipelineSubDraft"
},
"nullable": true
},
"stepTags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"experimentName": {
"type": "string",
"nullable": true
},
"pipelineParameters": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"dataPathAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/LegacyDataPath"
},
"description": "This is a dictionary",
"nullable": true
},
"dataSetDefinitionValueAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/DataSetDefinitionValue"
},
"description": "This is a dictionary",
"nullable": true
},
"assetOutputSettingsAssignments": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/AssetOutputSettings"
},
"description": "This is a dictionary",
"nullable": true
},
"enableNotification": {
"type": "boolean",
"nullable": true
},
"subPipelinesInfo": {
"$ref": "#/components/schemas/SubPipelinesInfo"
},
"displayName": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"parentRunId": {
"type": "string",
"nullable": true
},
"graph": {
"$ref": "#/components/schemas/GraphDraftEntity"
},
"pipelineRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameterAssignment"
},
"nullable": true
},
"moduleNodeRunSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeRunSetting"
},
"nullable": true
},
"moduleNodeUIInputSettings": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GraphModuleNodeUIInputSetting"
},
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"continueRunOnStepFailure": {
"type": "boolean",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"enforceRerun": {
"type": "boolean",
"nullable": true
},
"datasetAccessModes": {
"$ref": "#/components/schemas/DatasetAccessModes"
}
},
"additionalProperties": false
},
"SuccessfulCommandReturnCode": {
"enum": [
"Zero",
"ZeroOrGreater"
],
"type": "string"
},
"SweepEarlyTerminationPolicy": {
"type": "object",
"properties": {
"policyType": {
"$ref": "#/components/schemas/EarlyTerminationPolicyType"
},
"evaluationInterval": {
"type": "integer",
"format": "int32",
"nullable": true
},
"delayEvaluation": {
"type": "integer",
"format": "int32",
"nullable": true
},
"slackFactor": {
"type": "number",
"format": "float",
"nullable": true
},
"slackAmount": {
"type": "number",
"format": "float",
"nullable": true
},
"truncationPercentage": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"SweepSettings": {
"type": "object",
"properties": {
"limits": {
"$ref": "#/components/schemas/SweepSettingsLimits"
},
"searchSpace": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"nullable": true
},
"samplingAlgorithm": {
"$ref": "#/components/schemas/SamplingAlgorithmType"
},
"earlyTermination": {
"$ref": "#/components/schemas/SweepEarlyTerminationPolicy"
}
},
"additionalProperties": false
},
"SweepSettingsLimits": {
"type": "object",
"properties": {
"maxTotalTrials": {
"type": "integer",
"format": "int32",
"nullable": true
},
"maxConcurrentTrials": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"SystemData": {
"type": "object",
"properties": {
"createdAt": {
"type": "string",
"format": "date-time",
"nullable": true
},
"createdBy": {
"type": "string",
"nullable": true
},
"createdByType": {
"$ref": "#/components/schemas/UserType"
},
"lastModifiedAt": {
"type": "string",
"format": "date-time",
"nullable": true
},
"lastModifiedBy": {
"type": "string",
"nullable": true
},
"lastModifiedByType": {
"$ref": "#/components/schemas/UserType"
}
},
"additionalProperties": false
},
"SystemMeta": {
"type": "object",
"properties": {
"identifierHash": {
"type": "string",
"nullable": true
},
"extraHash": {
"type": "string",
"nullable": true
},
"contentHash": {
"type": "string",
"nullable": true
},
"identifierHashes": {
"type": "object",
"properties": {
"IdentifierHash": {
"type": "string"
},
"IdentifierHashV2": {
"type": "string"
}
},
"additionalProperties": false,
"nullable": true
},
"extraHashes": {
"type": "object",
"properties": {
"IdentifierHash": {
"type": "string"
},
"IdentifierHashV2": {
"type": "string"
}
},
"additionalProperties": false,
"nullable": true
}
},
"additionalProperties": false
},
"TabularTrainingMode": {
"enum": [
"Distributed",
"NonDistributed",
"Auto"
],
"type": "string"
},
"TargetAggregationFunction": {
"enum": [
"Sum",
"Max",
"Min",
"Mean"
],
"type": "string"
},
"TargetLags": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/TargetLagsMode"
},
"values": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"nullable": true
}
},
"additionalProperties": false
},
"TargetLagsMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"TargetRollingWindowSize": {
"type": "object",
"properties": {
"mode": {
"$ref": "#/components/schemas/TargetRollingWindowSizeMode"
},
"value": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"TargetRollingWindowSizeMode": {
"enum": [
"Auto",
"Custom"
],
"type": "string"
},
"TargetSelectorConfiguration": {
"type": "object",
"properties": {
"lowPriorityVMTolerant": {
"type": "boolean"
},
"clusterBlockList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"computeType": {
"type": "string",
"nullable": true
},
"instanceType": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"instanceTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"myResourceOnly": {
"type": "boolean"
},
"planId": {
"type": "string",
"nullable": true
},
"planRegionId": {
"type": "string",
"nullable": true
},
"region": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"regions": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"vcBlockList": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"Task": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int32",
"readOnly": true
},
"exception": {
"nullable": true,
"readOnly": true
},
"status": {
"$ref": "#/components/schemas/TaskStatus"
},
"isCanceled": {
"type": "boolean",
"readOnly": true
},
"isCompleted": {
"type": "boolean",
"readOnly": true
},
"isCompletedSuccessfully": {
"type": "boolean",
"readOnly": true
},
"creationOptions": {
"$ref": "#/components/schemas/TaskCreationOptions"
},
"asyncState": {
"nullable": true,
"readOnly": true
},
"isFaulted": {
"type": "boolean",
"readOnly": true
}
},
"additionalProperties": false
},
"TaskControlFlowInfo": {
"type": "object",
"properties": {
"controlFlowType": {
"$ref": "#/components/schemas/ControlFlowType"
},
"iterationIndex": {
"type": "integer",
"format": "int32"
},
"itemName": {
"type": "string",
"nullable": true
},
"parametersOverwritten": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"isReused": {
"type": "boolean"
}
},
"additionalProperties": false
},
"TaskCreationOptions": {
"enum": [
"None",
"PreferFairness",
"LongRunning",
"AttachedToParent",
"DenyChildAttach",
"HideScheduler",
"RunContinuationsAsynchronously"
],
"type": "string"
},
"TaskReuseInfo": {
"type": "object",
"properties": {
"experimentId": {
"type": "string",
"nullable": true
},
"pipelineRunId": {
"type": "string",
"nullable": true
},
"nodeId": {
"type": "string",
"nullable": true
},
"requestId": {
"type": "string",
"nullable": true
},
"runId": {
"type": "string",
"nullable": true
},
"nodeStartTime": {
"type": "string",
"format": "date-time"
},
"nodeEndTime": {
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"TaskStatus": {
"enum": [
"Created",
"WaitingForActivation",
"WaitingToRun",
"Running",
"WaitingForChildrenToComplete",
"RanToCompletion",
"Canceled",
"Faulted"
],
"type": "string"
},
"TaskStatusCode": {
"enum": [
"NotStarted",
"Queued",
"Running",
"Failed",
"Finished",
"Canceled",
"PartiallyExecuted",
"Bypassed"
],
"type": "string"
},
"TaskType": {
"enum": [
"Classification",
"Regression",
"Forecasting",
"ImageClassification",
"ImageClassificationMultilabel",
"ImageObjectDetection",
"ImageInstanceSegmentation",
"TextClassification",
"TextMultiLabeling",
"TextNER",
"TextClassificationMultilabel"
],
"type": "string"
},
"TensorflowConfiguration": {
"type": "object",
"properties": {
"workerCount": {
"type": "integer",
"format": "int32"
},
"parameterServerCount": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"TestDataSettings": {
"type": "object",
"properties": {
"testDataSize": {
"type": "number",
"format": "double",
"nullable": true
}
},
"additionalProperties": false
},
"Tool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/ToolType"
},
"inputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/InputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"outputs": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/OutputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"connection_type": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionType"
},
"nullable": true
},
"module": {
"type": "string",
"nullable": true
},
"class_name": {
"type": "string",
"nullable": true
},
"source": {
"type": "string",
"nullable": true
},
"lkgCode": {
"type": "string",
"nullable": true
},
"code": {
"type": "string",
"nullable": true
},
"function": {
"type": "string",
"nullable": true
},
"action_type": {
"type": "string",
"nullable": true
},
"provider_config": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/InputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"function_config": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/InputDefinition"
},
"description": "This is a dictionary",
"nullable": true
},
"icon": {
"nullable": true
},
"category": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary",
"nullable": true
},
"is_builtin": {
"type": "boolean"
},
"package": {
"type": "string",
"nullable": true
},
"package_version": {
"type": "string",
"nullable": true
},
"default_prompt": {
"type": "string",
"nullable": true
},
"enable_kwargs": {
"type": "boolean"
},
"deprecated_tools": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"tool_state": {
"$ref": "#/components/schemas/ToolState"
}
},
"additionalProperties": false
},
"ToolFuncCallScenario": {
"enum": [
"generated_by",
"reverse_generated_by",
"dynamic_list"
],
"type": "string"
},
"ToolFuncResponse": {
"type": "object",
"properties": {
"result": {
"nullable": true
},
"logs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"ToolInputDynamicList": {
"type": "object",
"properties": {
"func_path": {
"type": "string",
"nullable": true
},
"func_kwargs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
}
},
"additionalProperties": false
},
"ToolInputGeneratedBy": {
"type": "object",
"properties": {
"func_path": {
"type": "string",
"nullable": true
},
"func_kwargs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"reverse_func_path": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ToolMetaDto": {
"type": "object",
"properties": {
"tools": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/Tool"
},
"description": "This is a dictionary",
"nullable": true
},
"errors": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/ErrorResponse"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"ToolSetting": {
"type": "object",
"properties": {
"providers": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ProviderEntity"
},
"nullable": true
}
},
"additionalProperties": false
},
"ToolSourceMeta": {
"type": "object",
"properties": {
"tool_type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ToolState": {
"enum": [
"Stable",
"Preview",
"Deprecated"
],
"type": "string"
},
"ToolType": {
"enum": [
"llm",
"python",
"action",
"prompt",
"custom_llm",
"csharp",
"typescript"
],
"type": "string"
},
"TorchDistributedConfiguration": {
"type": "object",
"properties": {
"processCountPerNode": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"TrainingDiagnosticConfiguration": {
"type": "object",
"properties": {
"jobHeartBeatTimeoutSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
}
},
"additionalProperties": false
},
"TrainingOutput": {
"type": "object",
"properties": {
"trainingOutputType": {
"$ref": "#/components/schemas/TrainingOutputType"
},
"iteration": {
"type": "integer",
"format": "int32",
"nullable": true
},
"metric": {
"type": "string",
"nullable": true
},
"modelFile": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"TrainingOutputType": {
"enum": [
"Metrics",
"Model"
],
"type": "string"
},
"TrainingSettings": {
"type": "object",
"properties": {
"blockListModels": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"allowListModels": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"enableDnnTraining": {
"type": "boolean",
"nullable": true
},
"enableOnnxCompatibleModels": {
"type": "boolean",
"nullable": true
},
"stackEnsembleSettings": {
"$ref": "#/components/schemas/StackEnsembleSettings"
},
"enableStackEnsemble": {
"type": "boolean",
"nullable": true
},
"enableVoteEnsemble": {
"type": "boolean",
"nullable": true
},
"ensembleModelDownloadTimeout": {
"type": "string",
"format": "date-span",
"nullable": true
},
"enableModelExplainability": {
"type": "boolean",
"nullable": true
},
"trainingMode": {
"$ref": "#/components/schemas/TabularTrainingMode"
}
},
"additionalProperties": false
},
"TriggerAsyncOperationStatus": {
"type": "object",
"properties": {
"id": {
"type": "string",
"nullable": true
},
"operationType": {
"$ref": "#/components/schemas/TriggerOperationType"
},
"provisioningStatus": {
"$ref": "#/components/schemas/ScheduleProvisioningStatus"
},
"createdTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"error": {
"$ref": "#/components/schemas/ErrorResponse"
},
"statusCode": {
"$ref": "#/components/schemas/HttpStatusCode"
}
},
"additionalProperties": false
},
"TriggerOperationType": {
"enum": [
"Create",
"Update",
"Delete",
"CreateOrUpdate"
],
"type": "string"
},
"TriggerType": {
"enum": [
"Recurrence",
"Cron"
],
"type": "string"
},
"TuningNodeRunSetting": {
"type": "object",
"properties": {
"simulationFlow": {
"$ref": "#/components/schemas/FlowGraphReference"
},
"simulationFlowRunSetting": {
"$ref": "#/components/schemas/FlowRunSettingsBase"
},
"batch_inputs": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": { },
"description": "This is a dictionary"
},
"nullable": true
},
"inputUniversalLink": {
"type": "string",
"nullable": true
},
"dataInputs": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"flowRunOutputDirectory": {
"type": "string",
"nullable": true
},
"connectionOverrides": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionOverrideSetting"
},
"nullable": true
},
"flowRunDisplayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"properties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"runtimeName": {
"type": "string",
"nullable": true
},
"batchDataInput": {
"$ref": "#/components/schemas/BatchDataInput"
},
"inputsMapping": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"connections": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary"
},
"description": "This is a dictionary",
"nullable": true
},
"environmentVariables": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "This is a dictionary",
"nullable": true
},
"outputDataStore": {
"type": "string",
"nullable": true
},
"runDisplayNameGenerationType": {
"$ref": "#/components/schemas/RunDisplayNameGenerationType"
},
"amlComputeName": {
"type": "string",
"nullable": true
},
"workerCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"timeoutInSeconds": {
"type": "integer",
"format": "int32",
"nullable": true
},
"promptflowEngineType": {
"$ref": "#/components/schemas/PromptflowEngineType"
}
},
"additionalProperties": false
},
"TuningNodeSetting": {
"type": "object",
"properties": {
"variantIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"tuningNodeRunSettings": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/TuningNodeRunSetting"
},
"description": "This is a dictionary",
"nullable": true
}
},
"additionalProperties": false
},
"TypedAssetReference": {
"type": "object",
"properties": {
"assetId": {
"type": "string",
"nullable": true
},
"type": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UIAzureOpenAIDeploymentNameSelector": {
"type": "object",
"properties": {
"Capabilities": {
"$ref": "#/components/schemas/UIAzureOpenAIModelCapabilities"
}
},
"additionalProperties": false
},
"UIAzureOpenAIModelCapabilities": {
"type": "object",
"properties": {
"Completion": {
"type": "boolean",
"nullable": true
},
"ChatCompletion": {
"type": "boolean",
"nullable": true
},
"Embeddings": {
"type": "boolean",
"nullable": true
}
},
"additionalProperties": false
},
"UIColumnPicker": {
"type": "object",
"properties": {
"columnPickerFor": {
"type": "string",
"nullable": true
},
"columnSelectionCategories": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"singleColumnSelection": {
"type": "boolean"
}
},
"additionalProperties": false
},
"UIComputeSelection": {
"type": "object",
"properties": {
"computeTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"requireGpu": {
"type": "boolean",
"nullable": true
},
"osTypes": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"supportServerless": {
"type": "boolean"
},
"computeRunSettingsMapping": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/RunSettingParameter"
},
"nullable": true
},
"nullable": true
}
},
"additionalProperties": false
},
"UIHyperparameterConfiguration": {
"type": "object",
"properties": {
"modelNameToHyperParameterAndDistributionMapping": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
},
"nullable": true
},
"nullable": true
},
"distributionParametersMapping": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"$ref": "#/components/schemas/DistributionParameter"
},
"nullable": true
},
"nullable": true
},
"jsonSchema": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UIInputDataDeliveryMode": {
"enum": [
"Read-only mount",
"Read-write mount",
"Download",
"Direct",
"Evaluate mount",
"Evaluate download",
"Hdfs"
],
"type": "string"
},
"UIInputSetting": {
"type": "object",
"properties": {
"name": {
"type": "string",
"nullable": true
},
"dataDeliveryMode": {
"$ref": "#/components/schemas/UIInputDataDeliveryMode"
},
"pathOnCompute": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UIJsonEditor": {
"type": "object",
"properties": {
"jsonSchema": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UIParameterHint": {
"type": "object",
"properties": {
"uiWidgetType": {
"$ref": "#/components/schemas/UIWidgetTypeEnum"
},
"columnPicker": {
"$ref": "#/components/schemas/UIColumnPicker"
},
"uiScriptLanguage": {
"$ref": "#/components/schemas/UIScriptLanguageEnum"
},
"jsonEditor": {
"$ref": "#/components/schemas/UIJsonEditor"
},
"PromptFlowConnectionSelector": {
"$ref": "#/components/schemas/UIPromptFlowConnectionSelector"
},
"AzureOpenAIDeploymentNameSelector": {
"$ref": "#/components/schemas/UIAzureOpenAIDeploymentNameSelector"
},
"UxIgnore": {
"type": "boolean"
},
"Anonymous": {
"type": "boolean"
}
},
"additionalProperties": false
},
"UIPromptFlowConnectionSelector": {
"type": "object",
"properties": {
"PromptFlowConnectionType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UIScriptLanguageEnum": {
"enum": [
"None",
"Python",
"R",
"Json",
"Sql"
],
"type": "string"
},
"UIWidgetMetaInfo": {
"type": "object",
"properties": {
"moduleNodeId": {
"type": "string",
"nullable": true
},
"metaModuleId": {
"type": "string",
"nullable": true
},
"parameterName": {
"type": "string",
"nullable": true
},
"uiWidgetType": {
"$ref": "#/components/schemas/UIWidgetTypeEnum"
}
},
"additionalProperties": false
},
"UIWidgetTypeEnum": {
"enum": [
"Default",
"Mode",
"ColumnPicker",
"Credential",
"Script",
"ComputeSelection",
"JsonEditor",
"SearchSpaceParameter",
"SectionToggle",
"YamlEditor",
"EnableRuntimeSweep",
"DataStoreSelection",
"InstanceTypeSelection",
"ConnectionSelection",
"PromptFlowConnectionSelection",
"AzureOpenAIDeploymentNameSelection"
],
"type": "string"
},
"UIYamlEditor": {
"type": "object",
"properties": {
"jsonSchema": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UnversionedEntityRequestDto": {
"type": "object",
"properties": {
"unversionedEntityIds": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
}
},
"additionalProperties": false
},
"UnversionedEntityResponseDto": {
"type": "object",
"properties": {
"unversionedEntities": {
"type": "array",
"items": {
"$ref": "#/components/schemas/FlowIndexEntity"
},
"nullable": true
},
"unversionedEntityJsonSchema": {
"nullable": true
},
"normalizedRequestCharge": {
"type": "number",
"format": "double"
},
"normalizedRequestChargePeriod": {
"type": "string",
"format": "date-span"
}
},
"additionalProperties": false
},
"UnversionedRebuildIndexDto": {
"type": "object",
"properties": {
"continuationToken": {
"type": "string",
"nullable": true
},
"entityCount": {
"type": "integer",
"format": "int32",
"nullable": true
},
"entityContainerType": {
"type": "string",
"nullable": true
},
"entityType": {
"type": "string",
"nullable": true
},
"resourceId": {
"type": "string",
"nullable": true
},
"workspaceId": {
"type": "string",
"nullable": true
},
"immutableResourceId": {
"type": "string",
"format": "uuid"
},
"startTime": {
"type": "string",
"format": "date-time",
"nullable": true
},
"endTime": {
"type": "string",
"format": "date-time",
"nullable": true
}
},
"additionalProperties": false
},
"UnversionedRebuildResponseDto": {
"type": "object",
"properties": {
"entities": {
"$ref": "#/components/schemas/SegmentedResult`1"
},
"unversionedEntitySchema": {
"nullable": true
},
"normalizedRequestCharge": {
"type": "number",
"format": "double"
},
"normalizedRequestChargePeriod": {
"type": "string",
"format": "date-span"
}
},
"additionalProperties": false
},
"UpdateComponentRequest": {
"type": "object",
"properties": {
"displayName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"moduleUpdateOperationType": {
"$ref": "#/components/schemas/ModuleUpdateOperationType"
},
"moduleVersion": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UpdateFlowRequest": {
"type": "object",
"properties": {
"flowRunResult": {
"$ref": "#/components/schemas/FlowRunResult"
},
"flowTestMode": {
"$ref": "#/components/schemas/FlowTestMode"
},
"flowTestInfos": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/FlowTestInfo"
},
"nullable": true
},
"flowName": {
"type": "string",
"nullable": true
},
"description": {
"type": "string",
"nullable": true
},
"details": {
"type": "string",
"nullable": true
},
"tags": {
"type": "object",
"additionalProperties": {
"type": "string",
"nullable": true
},
"nullable": true
},
"flow": {
"$ref": "#/components/schemas/Flow"
},
"flowDefinitionFilePath": {
"type": "string",
"nullable": true
},
"flowType": {
"$ref": "#/components/schemas/FlowType"
},
"flowRunSettings": {
"$ref": "#/components/schemas/FlowRunSettings"
},
"isArchived": {
"type": "boolean"
},
"vmSize": {
"type": "string",
"nullable": true
},
"maxIdleTimeSeconds": {
"type": "integer",
"format": "int64",
"nullable": true
},
"identity": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"UpdateFlowRuntimeRequest": {
"type": "object",
"properties": {
"runtimeDescription": {
"type": "string",
"nullable": true
},
"environment": {
"type": "string",
"nullable": true
},
"instanceCount": {
"type": "integer",
"format": "int32"
}
},
"additionalProperties": false
},
"UpdateFlowStatusRequest": {
"type": "object",
"properties": {
"flowRunStatus": {
"$ref": "#/components/schemas/FlowRunStatusEnum"
},
"errorResponse": {
"$ref": "#/components/schemas/ErrorResponse"
}
},
"additionalProperties": false
},
"UpdateRegistryComponentRequest": {
"type": "object",
"properties": {
"registryName": {
"type": "string",
"nullable": true
},
"componentName": {
"type": "string",
"nullable": true
},
"componentVersion": {
"type": "string",
"nullable": true
},
"updateType": {
"$ref": "#/components/schemas/UpdateType"
}
},
"additionalProperties": false
},
"UpdateType": {
"enum": [
"SetDefaultVersion"
],
"type": "string"
},
"UploadOptions": {
"type": "object",
"properties": {
"overwrite": {
"type": "boolean"
},
"sourceGlobs": {
"$ref": "#/components/schemas/ExecutionGlobsOptions"
}
},
"additionalProperties": false
},
"UploadState": {
"enum": [
"Uploading",
"Completed",
"Canceled",
"Failed"
],
"type": "string"
},
"UriReference": {
"type": "object",
"properties": {
"path": {
"type": "string",
"nullable": true
},
"isFile": {
"type": "boolean"
}
},
"additionalProperties": false
},
"UseStl": {
"enum": [
"Season",
"SeasonTrend"
],
"type": "string"
},
"User": {
"type": "object",
"properties": {
"userObjectId": {
"type": "string",
"description": "A user or service principal's object ID.\r\nThis is EUPI and may only be logged to warm path telemetry.",
"nullable": true
},
"userPuId": {
"type": "string",
"description": "A user or service principal's PuID.\r\nThis is PII and should never be logged.",
"nullable": true
},
"userIdp": {
"type": "string",
"description": "A user identity provider. Eg live.com\r\nThis is PII and should never be logged.",
"nullable": true
},
"userAltSecId": {
"type": "string",
"description": "A user alternate sec id. This represents the user in a different identity provider system Eg.1:live.com:puid\r\nThis is PII and should never be logged.",
"nullable": true
},
"userIss": {
"type": "string",
          "description": "The issuer which issued the token for this user.\r\nThis is PII and should never be logged.",
"nullable": true
},
"userTenantId": {
"type": "string",
"description": "A user or service principal's tenant ID.",
"nullable": true
},
"userName": {
"type": "string",
"description": "A user's full name or a service principal's app ID.\r\nThis is PII and should never be logged.",
"nullable": true
},
"upn": {
"type": "string",
          "description": "A user's Principal name (upn).\r\nThis is PII and should never be logged.",
"nullable": true
}
},
"additionalProperties": false
},
"UserAssignedIdentity": {
"type": "object",
"properties": {
"principalId": {
"type": "string",
"format": "uuid"
},
"clientId": {
"type": "string",
"format": "uuid"
}
},
"additionalProperties": false
},
"UserType": {
"enum": [
"User",
"Application",
"ManagedIdentity",
"Key"
],
"type": "string"
},
"ValidationDataSettings": {
"type": "object",
"properties": {
"nCrossValidations": {
"$ref": "#/components/schemas/NCrossValidations"
},
"validationDataSize": {
"type": "number",
"format": "double",
"nullable": true
},
"cvSplitColumnNames": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"validationType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"ValidationStatus": {
"enum": [
"Succeeded",
"Failed"
],
"type": "string"
},
"ValueType": {
"enum": [
"int",
"double",
"bool",
"string",
"secret",
"prompt_template",
"object",
"list",
"BingConnection",
"OpenAIConnection",
"AzureOpenAIConnection",
"AzureContentModeratorConnection",
"CustomConnection",
"AzureContentSafetyConnection",
"SerpConnection",
"CognitiveSearchConnection",
"SubstrateLLMConnection",
"PineconeConnection",
"QdrantConnection",
"WeaviateConnection",
"function_list",
"function_str",
"FormRecognizerConnection",
"file_path",
"image",
"assistant_definition"
],
"type": "string"
},
"VariantIdentifier": {
"type": "object",
"properties": {
"variantId": {
"type": "string",
"nullable": true
},
"tuningNodeName": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"VariantNode": {
"type": "object",
"properties": {
"node": {
"$ref": "#/components/schemas/Node"
},
"description": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"VmPriority": {
"enum": [
"Dedicated",
"Lowpriority"
],
"type": "string"
},
"Volume": {
"type": "object",
"properties": {
"type": {
"type": "string",
"nullable": true
},
"source": {
"type": "string",
"nullable": true
},
"target": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"WebServiceComputeMetaInfo": {
"type": "object",
"properties": {
"nodeCount": {
"type": "integer",
"format": "int32"
},
"isSslEnabled": {
"type": "boolean"
},
"aksNotFound": {
"type": "boolean"
},
"clusterPurpose": {
"type": "string",
"nullable": true
},
"publicIpAddress": {
"type": "string",
"nullable": true
},
"vmSize": {
"type": "string",
"nullable": true
},
"location": {
"type": "string",
"nullable": true
},
"provisioningState": {
"type": "string",
"nullable": true
},
"state": {
"type": "string",
"nullable": true
},
"osType": {
"type": "string",
"nullable": true
},
"id": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"createdByStudio": {
"type": "boolean"
},
"isGpuType": {
"type": "boolean"
},
"resourceId": {
"type": "string",
"nullable": true
},
"computeType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"WebServicePort": {
"type": "object",
"properties": {
"nodeId": {
"type": "string",
"nullable": true
},
"portName": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"WebServiceState": {
"enum": [
"Transitioning",
"Healthy",
"Unhealthy",
"Failed",
"Unschedulable"
],
"type": "string"
},
"Webhook": {
"type": "object",
"properties": {
"webhookType": {
"$ref": "#/components/schemas/WebhookType"
},
"eventType": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"WebhookType": {
"enum": [
"AzureDevOps"
],
"type": "string"
},
"WeekDays": {
"enum": [
"Monday",
"Tuesday",
"Wednesday",
"Thursday",
"Friday",
"Saturday",
"Sunday"
],
"type": "string"
},
"Weekday": {
"enum": [
"Monday",
"Tuesday",
"Wednesday",
"Thursday",
"Friday",
"Saturday",
"Sunday"
],
"type": "string"
},
"WorkspaceConnectionSpec": {
"type": "object",
"properties": {
"connectionCategory": {
"$ref": "#/components/schemas/ConnectionCategory"
},
"flowValueType": {
"$ref": "#/components/schemas/ValueType"
},
"connectionType": {
"$ref": "#/components/schemas/ConnectionType"
},
"connectionTypeDisplayName": {
"type": "string",
"nullable": true
},
"configSpecs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ConnectionConfigSpec"
},
"nullable": true
},
"module": {
"type": "string",
"nullable": true
}
},
"additionalProperties": false
},
"YarnDeployMode": {
"enum": [
"None",
"Client",
"Cluster"
],
"type": "string"
}
},
"parameters": {
"subscriptionIdParameter": {
"name": "subscriptionId",
"in": "path",
"description": "The Azure Subscription ID.",
"required": true,
"schema": {
"type": "string",
"format": "uuid"
},
"x-ms-parameter-location": "method"
},
"resourceGroupNameParameter": {
"name": "resourceGroupName",
"in": "path",
"description": "The Name of the resource group in which the workspace is located.",
"required": true,
"schema": {
"type": "string"
},
"x-ms-parameter-location": "method"
},
"workspaceNameParameter": {
"name": "workspaceName",
"in": "path",
"description": "The name of the workspace.",
"required": true,
"schema": {
"type": "string"
},
"x-ms-parameter-location": "method"
}
},
"securitySchemes": {
"azure_auth": {
"type": "oauth2",
"flows": {
"implicit": {
"authorizationUrl": "https://login.microsoftonline.com/common/oauth2/authorize",
"scopes": {
"user_impersonation": "impersonate your user account"
}
}
}
}
}
},
"security": [
{
"azure_auth": [
"user_impersonation"
]
}
]
}

[end of promptflow/src/promptflow/promptflow/azure/_restclient/swagger.json]
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# flake8: noqa
from ._base_executor_proxy import AbstractExecutorProxy, APIBasedExecutorProxy
from ._batch_engine import BatchEngine
from ._csharp_executor_proxy import CSharpExecutorProxy
from ._python_executor_proxy import PythonExecutorProxy
from ._result import BatchResult
__all__ = [
"AbstractExecutorProxy",
"APIBasedExecutorProxy",
"BatchEngine",
"CSharpExecutorProxy",
"PythonExecutorProxy",
"BatchResult",
]

(source: promptflow/src/promptflow/promptflow/batch/__init__.py)
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from enum import Enum
class RunMode(str, Enum):
"""An enumeration of possible run modes."""
Test = "Test"
SingleNode = "SingleNode"
Batch = "Batch"
@classmethod
def parse(cls, value: str):
"""Parse a string to a RunMode enum value.
:param value: The string to parse.
:type value: str
:return: The corresponding RunMode enum value.
:rtype: ~promptflow.contracts.run_mode.RunMode
:raises ValueError: If the value is not a valid string.
"""
if not isinstance(value, str):
raise ValueError(f"Invalid value type to parse: {type(value)}")
if value == "SingleNode":
return RunMode.SingleNode
elif value == "Batch":
return RunMode.Batch
else:
return RunMode.Test

(source: promptflow/src/promptflow/promptflow/contracts/run_mode.py)
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from dataclasses import dataclass
from typing import Any, Dict, Mapping
from promptflow.contracts.run_info import FlowRunInfo, RunInfo
@dataclass
class LineResult:
"""The result of a line process."""
output: Mapping[str, Any] # The output of the line.
# The node output values to be used as aggregation inputs, if no aggregation node, it will be empty.
aggregation_inputs: Mapping[str, Any]
run_info: FlowRunInfo # The run info of the line.
node_run_infos: Mapping[str, RunInfo] # The run info of the nodes in the line.
@staticmethod
def deserialize(data: dict) -> "LineResult":
"""Deserialize the LineResult from a dict."""
return LineResult(
output=data.get("output"),
aggregation_inputs=data.get("aggregation_inputs", {}),
run_info=FlowRunInfo.deserialize(data.get("run_info")),
node_run_infos={k: RunInfo.deserialize(v) for k, v in data.get("node_run_infos", {}).items()},
)
@dataclass
class AggregationResult:
"""The result when running aggregation nodes in the flow."""
output: Mapping[str, Any] # The output of the aggregation nodes in the flow.
metrics: Dict[str, Any] # The metrics generated by the aggregation.
node_run_infos: Mapping[str, RunInfo] # The run info of the aggregation nodes.
@staticmethod
def deserialize(data: dict) -> "AggregationResult":
"""Deserialize the AggregationResult from a dict."""
return AggregationResult(
output=data.get("output", None),
metrics=data.get("metrics", None),
node_run_infos={k: RunInfo.deserialize(v) for k, v in data.get("node_run_infos", {}).items()},
)

(source: promptflow/src/promptflow/promptflow/executor/_result.py)
{
"name": "Promptflow-Python39",
// "context" is the path that the Codespaces docker build command should be run from, relative to devcontainer.json
"context": ".",
"dockerFile": "Dockerfile",
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": "/bin/bash"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-python.python",
"ms-toolsai.vscode-ai",
"ms-toolsai.jupyter",
"redhat.vscode-yaml",
"prompt-flow.prompt-flow"
],
"runArgs": ["-v", "/var/run/docker.sock:/var/run/docker.sock"]
}

(source: promptflow/.devcontainer/devcontainer.json)
# Consume connections from Azure AI
For a smooth development flow that transitions from cloud (Azure AI) to local environments, you can directly utilize the connections already established in the cloud by setting the connection provider to "Azure AI connections".
You can set the connection provider using the following steps:
1. Navigate to the connection list in the VS Code primary sidebar.
1. Click on the ... (more options icon) at the top and select the `Set connection provider` option.
![img](../../media/cloud/consume-cloud-connections/set-connection-provider.png)
1. Choose one of the "Azure AI connections" provider types that you wish to use. [Click to learn more about the differences between the connection providers](#different-connection-providers).
![img](../../media/cloud/consume-cloud-connections/set-connection-provider-2.png)
1. If you choose "Azure AI Connections - for current working directory", then you need to specify the cloud resources in the `config.json` file within the project folder.
![img](../../media/cloud/consume-cloud-connections/set-aml-connection-provider.png)
1. If you choose "Azure AI Connections - for this machine", specify the cloud resources in the connection string. You can do this in one of two ways:
(1) Input connection string in the input box above.
For example `azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>`
![img](../../media/cloud/consume-cloud-connections/set-aml-connection-provider-2.png)
(2) Follow the wizard to set up your config step by step.
![img](../../media/cloud/consume-cloud-connections/set-aml-connection-provider-2-wizard.png)
1. Once the connection provider is set, the connection list will automatically refresh, displaying the connections retrieved from the selected provider.
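For the "for current working directory" option, the `config.json` mentioned in step 4 follows the standard Azure ML workspace config format; a minimal sketch (all values are placeholders to replace with your own resources):

```json
{
  "subscription_id": "<your-subscription-id>",
  "resource_group": "<your-resource-group>",
  "workspace_name": "<your-workspace-or-project-name>"
}
```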
Note:
1. You need to have a project folder open to use the "Azure AI connections - for current working directory" option.
1. Once you change the connection provider, it will stay that way until you change it again and save the new setting.
## Different connection providers
Currently, we support three types of connection providers:
|Connection provider|Type|Description|Provider Specification|Use Case|
|---|---|---|---|---|
| Local Connections| Local| Enables consuming connections created locally and stored in a local SQLite database. |NA| Ideal when connections need to be stored and managed locally.|
|Azure AI connection - For current working directory| Cloud provider| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project.| Specify the resource ID in a `config.json` file placed in the project folder. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#azureml)| A dynamic approach for consuming connections from different providers in specific projects. Allows for setting different provider configurations for different flows by updating the `config.json` in the project folder.|
|Azure AI connection - For this machine| Cloud| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project. | Use a `connection string` to specify a cloud resource as the provider on your local machine. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#full-azure-machine-learning-workspace-resource-id)|A global provider setting that applies across all working directories on your machine.|
## Next steps
- Set global configs on [connection.provider](../../how-to-guides/set-global-configs.md#connectionprovider).
- [Manage connections on local](../../how-to-guides/manage-connections.md).

(source: promptflow/docs/cloud/azureai/consume-connections-from-azure-ai.md)
# Promptflow Reference Documentation Guide
## Overview
This guide describes how to author Python docstrings for promptflow public interfaces. See our doc site at [Promptflow API reference documentation](https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow.html).
## Principles
- **Coverage**: Every public object must have a docstring. For private objects, docstrings are encouraged but not required.
- **Style**: All docstrings should be written in [Sphinx style](https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html#the-sphinx-docstring-format) noting all types and if any exceptions are raised.
- **Relevance**: The documentation is up-to-date and relevant to the current version of the product.
- **Clarity**: The documentation is written in clear, concise language that is easy to understand.
- **Consistency**: The documentation has a consistent format and structure, making it easy to navigate and follow.
## How to write the docstring
First please read through [Sphinx style](https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html#the-sphinx-docstring-format) to have a basic understanding of sphinx style docstring.
### Write class docstring
Let's start with a class example:
```python
from typing import Dict, Optional, Union
from promptflow import PFClient
class MyClass:
"""One-line summary of the class.
More detailed explanation of the class. May include below notes, admonitions, code blocks.
.. note::
Here are some notes to show, with a nested python code block:
.. code-block:: python
from promptflow import MyClass, PFClient
obj = MyClass(PFClient())
.. admonition:: [Title of the admonition]
Here are some admonitions to show.
    :param client: Description of the client.
:type client: ~promptflow.PFClient
:param param_int: Description of the parameter.
:type param_int: Optional[int]
:param param_str: Description of the parameter.
:type param_str: Optional[str]
:param param_dict: Description of the parameter.
:type param_dict: Optional[Dict[str, str]]
"""
    def __init__(
        self,
client: PFClient,
param_int: Optional[int] = None,
param_str: Optional[str] = None,
param_dict: Optional[Dict[str, str]] = None,
) -> None:
"""No docstring for __init__, it should be written in class definition above."""
...
```
**Notes**:
1. One-line summary is required. It should be clear and concise.
2. Detailed explanation is encouraged but not required. This part may or may not include notes, admonitions and code blocks.
 - The format like `.. note::` is called a `directive`. Directives are a mechanism to extend the content of [reStructuredText](https://docutils.sourceforge.io/rst.html). Every directive declares a block of content with a specific role. Start a new line with `.. directive_name::` to use the directive.
- The directives used in the sample(`note/admonition/code-block`) should be enough for basic usage of docstring in our project. But you are welcomed to explore more [Directives](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#specific-admonitions).
3. Parameter description and type is required.
- A pair of `:param [ParamName]:` and `:type [ParamName]:` is required.
- If the type is a promptflow public class, use the `full path to the class` and prepend it with a "~". This will create a link when the documentation is rendered on the doc site that will take the user to the class reference documentation for more information.
```text
 :param client: Description of the client.
:type client: ~promptflow.PFClient
```
 - Use `Union/Optional` when appropriate in the function declaration, and use the same annotation after `:type [ParamName]:`
```text
:type param_int: Optional[int]
```
4. For classes, include docstring in definition only. If you include a docstring in both the class definition and the constructor (init method) docstrings, it will show up twice in the reference docs.
5. Constructors (def `__init__`) should return `None`, per [PEP 484 standards](https://peps.python.org/pep-0484/#the-meaning-of-annotations).
6. To create a link for promptflow class on our doc site. `~promptflow.xxx.MyClass` alone only works after `:type [ParamName]` and `:rtype:`. If you want to achieve the same effect in docstring summary, you should use it with `:class:`:
```python
"""
An example to achieve link effect in summary for :class:`~promptflow.xxx.MyClass`
For function, use :meth:`~promptflow.xxx.my_func`
"""
```
7. There are some tricks to highlight the content in your docstring:
- Single backticks (`): Single backticks are used to represent inline code elements within the text. It is typically used to highlight function names, variable names, or any other code elements within the documentation.
- Double backticks(``): Double backticks are typically used to highlight a literal value.
8. If there are any class level constants you don't want to expose to doc site, make sure to add `_` in front of the constant to hide it.
### Write function docstring
```python
from typing import Optional
def my_method(param_int: Optional[int] = None) -> int:
"""One-line summary
Detailed explanations.
:param param_int: Description of the parameter.
:type param_int: int
:raises [ErrorType1]: [ErrorDescription1]
:raises [ErrorType2]: [ErrorDescription2]
:return: Description of the return value.
:rtype: int
"""
...
```
In addition to `class docstring` notes:
1. Function docstring should include return values.
- If return type is promptflow class, we should also use `~promptflow.xxx.[ClassName]`.
2. Function docstring should include exceptions that may be raised in this function.
- If exception type is `PromptflowException`, use `~promptflow.xxx.[ExceptionName]`
- If multiple exceptions are raised, just add new lines of `:raises`, see the example above.
## How to build doc site locally
You can build the documentation site locally to preview the final effect of your docstring on the rendered site. This will provide you with a clear understanding of how your docstring will appear on our site once your changes are merged into the main branch.
1. Setup your dev environment, see [dev_setup](./dev_setup.md) for details. Sphinx will load all source code to process docstring.
- Skip this step if you just want to build the doc site without reference doc, but do remove `-WithReferenceDoc` from the command in step 3.
2. Install `langchain` package since it is used in our code but not covered in `dev_setup`.
3. Open a `powershell`, activate the conda env and navigate to `<repo-root>/scripts/docs` , run `doc_generation.ps1`:
```pwsh
cd scripts\docs
.\doc_generation.ps1 -WithReferenceDoc -WarningAsError
```
- For the first time you execute this command, it will take some time to install `sphinx` dependencies. After the initial installation, next time you can add param `-SkipInstall` to above command to save some time for dependency check.
4. Check warnings/errors in the build log, fix them if any, then build again.
5. Open `scripts/docs/_build/index.html` to preview the local doc site.
## Additional comments
- **Utilities**: The [autoDocstring](https://marketplace.visualstudio.com/items?itemName=njpwerner.autodocstring) VSCode extension or GitHub Copilot can help autocomplete in this style for you.
- **Advanced principles**
- Accuracy: The documentation accurately reflects the features and functionality of the product.
- Completeness: The documentation covers all relevant features and functionality of the product.
- Demonstration: Every docstring should include an up-to-date code snippet that demonstrates how to use the product effectively.
## References
- [AzureML v2 Reference Documentation Guide](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/documentation_guidelines.md)
- [Azure SDK for Python documentation guidelines](https://azure.github.io/azure-sdk/python_documentation.html#docstrings)
- [How to document a Python API](https://review.learn.microsoft.com/en-us/help/onboard/admin/reference/python/documenting-api?branch=main)

(source: promptflow/docs/dev/documentation_guidelines.md)
# Creating Cascading Tool Inputs
Cascading input settings are useful when the value of one input field determines which subsequent inputs are shown. This makes the input process more streamlined, user-friendly, and error-free. This guide will walk through how to create cascading inputs for your tools.
## Prerequisites
Please make sure you have the latest version of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) installed (v1.2.0+).
## Create a tool with cascading inputs
We'll build out an example tool to show how cascading inputs work. The `student_id` and `teacher_id` inputs will be controlled by the value selected for the `user_type` input. Here's how to configure this in the tool code and YAML.
1. Develop the tool function, following the [cascading inputs example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_cascading_inputs.py). Key points:
* Use the `@tool` decorator to mark the function as a tool.
* Define `UserType` as an Enum class, as it accepts only a specific set of fixed values in this example.
* Conditionally use inputs in the tool logic based on `user_type`.
```python
from enum import Enum
from promptflow import tool
class UserType(str, Enum):
STUDENT = "student"
TEACHER = "teacher"
@tool
def my_tool(user_type: Enum, student_id: str = "", teacher_id: str = "") -> str:
"""This is a dummy function to support cascading inputs.
:param user_type: user type, student or teacher.
:param student_id: student id.
:param teacher_id: teacher id.
:return: id of the user.
If user_type is student, return student_id.
If user_type is teacher, return teacher_id.
"""
if user_type == UserType.STUDENT:
return student_id
elif user_type == UserType.TEACHER:
return teacher_id
else:
raise Exception("Invalid user.")
```
2. Generate a starting YAML for your tool per the [tool package guide](create-and-use-tool-package.md), then update it to enable cascading:
Add `enabled_by` and `enabled_by_value` to control visibility of dependent inputs. See the [example YAML](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_cascading_inputs.yaml) for reference.
* The `enabled_by` attribute specifies the input field, which must be an enum type, that controls the visibility of the dependent input field.
* The `enabled_by_value` attribute defines the accepted enum values from the `enabled_by` field that will make this dependent input field visible.
> Note: `enabled_by_value` takes a list, allowing multiple values to enable an input.
```yaml
my_tool_package.tools.tool_with_cascading_inputs.my_tool:
function: my_tool
inputs:
user_type:
type:
- string
enum:
- student
- teacher
student_id:
type:
- string
# This input is enabled by the input "user_type".
enabled_by: user_type
# This input is enabled when "user_type" is "student".
enabled_by_value: [student]
teacher_id:
type:
- string
enabled_by: user_type
enabled_by_value: [teacher]
module: my_tool_package.tools.tool_with_cascading_inputs
name: My Tool with Cascading Inputs
description: This is my tool with cascading inputs
type: python
```
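To sanity-check the conditional logic locally, the tool function from step 1 can be exercised directly. This is a standalone sketch: the `@tool` decorator is omitted so it runs without promptflow installed, and the IDs are made-up examples:

```python
from enum import Enum


class UserType(str, Enum):
    STUDENT = "student"
    TEACHER = "teacher"


def my_tool(user_type: Enum, student_id: str = "", teacher_id: str = "") -> str:
    # Only the input enabled by the selected user_type is used.
    if user_type == UserType.STUDENT:
        return student_id
    elif user_type == UserType.TEACHER:
        return teacher_id
    raise Exception("Invalid user.")


print(my_tool(UserType.STUDENT, student_id="s-001"))  # prints "s-001"
print(my_tool(UserType.TEACHER, teacher_id="t-042"))  # prints "t-042"
```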
## Use the tool in VS Code
Once you package and share your tool, you can use it in VS Code per the [tool package guide](create-and-use-tool-package.md). We have a [demo flow](https://github.com/microsoft/promptflow/tree/main/examples/tools/use-cases/cascading-inputs-tool-showcase) you can try.
Before selecting a `user_type`, the `student_id` and `teacher_id` inputs are hidden. Once you pick the `user_type`, the corresponding input appears.
![before_user_type_selected.png](../../media/how-to-guides/develop-a-tool/before_user_type_selected.png)
![after_user_type_selected_with_student.png](../../media/how-to-guides/develop-a-tool/after_user_type_selected_with_student.png)
![after_user_type_selected_with_teacher.png](../../media/how-to-guides/develop-a-tool/after_user_type_selected_with_teacher.png)
## FAQs
### How do I create multi-layer cascading inputs?
If you are dealing with multiple levels of cascading inputs, you can effectively manage the dependencies between them by using the `enabled_by` and `enabled_by_value` attributes. For example:
```yaml
my_tool_package.tools.tool_with_multi_layer_cascading_inputs.my_tool:
function: my_tool
inputs:
event_type:
type:
- string
enum:
- corporate
- private
corporate_theme:
type:
- string
# This input is enabled by the input "event_type".
enabled_by: event_type
# This input is enabled when "event_type" is "corporate".
enabled_by_value: [corporate]
enum:
- seminar
- team_building
seminar_location:
type:
- string
# This input is enabled by the input "corporate_theme".
enabled_by: corporate_theme
# This input is enabled when "corporate_theme" is "seminar".
enabled_by_value: [seminar]
private_theme:
type:
- string
# This input is enabled by the input "event_type".
enabled_by: event_type
# This input is enabled when "event_type" is "private".
enabled_by_value: [private]
module: my_tool_package.tools.tool_with_multi_layer_cascading_inputs
name: My Tool with Multi-Layer Cascading Inputs
description: This is my tool with multi-layer cascading inputs
type: python
```
Inputs will be enabled in a cascading way based on selections.

(source: promptflow/docs/how-to-guides/develop-a-tool/create-cascading-tool-inputs.md)
# Use column mapping
In this document, we will introduce how to map inputs with column mapping when running a flow.
## Column mapping introduction
Column mapping is a mapping from flow input names to specified values.
If specified, the flow will be executed with the provided values for those inputs.
The following types of values in column mapping are supported:
- `${data.<column_name>}` to reference a column from your test dataset.
- `${run.inputs.<input_name>}` to reference an input of the referenced run. **Note**: this is only supported when `--run` is provided for `pf run`.
- `${run.outputs.<output_name>}` to reference an output of the referenced run. **Note**: this is only supported when `--run` is provided for `pf run`.
- `STATIC_VALUE` to set a static value for all lines of the specified column.
## Flow inputs override priority
Flow input values are overridden according to the following priority:
"specified in column mapping" > "default value" > "same name column in provided data".
For example, if we have a flow with following inputs:
```yaml
inputs:
input1:
type: string
default: "default_val1"
input2:
type: string
default: "default_val2"
input3:
type: string
input4:
type: string
...
```
The flow will return each input in its outputs.
With the following data
```json
{"input3": "val3_in_data", "input4": "val4_in_data"}
```
And use the following YAML to run
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: path/to/flow
# the flow has default value "default_val2" for input2
data: path/to/data
# the data has column "input3" with value "val3_in_data"
column_mapping:
input1: "val1_in_column_mapping"
input3: ${data.input3}
```
Since the flow returns each input in its outputs, we can get the actual input values from the `outputs.output` field in the run details:
![column_mapping_details](../../media/column_mapping_details.png)
- Input "input1" has value "val1_in_column_mapping" since it is specified as a constant in `column_mapping`.
- Input "input2" has value "default_val2" since it uses the default value from the flow DAG.
- Input "input3" has value "val3_in_data" since it is specified as a data reference in `column_mapping`.
- Input "input4" has value "val4_in_data" since it has a same-name column in the provided data.
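The override priority above can be sketched in plain Python. This is a simplified illustration, not the actual promptflow implementation; only the `${data.<column>}` and static-value forms are handled:

```python
def resolve_line_inputs(flow_inputs, column_mapping, data_row):
    """Resolve one line's inputs: column mapping > flow default > same-name data column."""
    resolved = {}
    for name, spec in flow_inputs.items():
        mapped = column_mapping.get(name)
        if mapped is not None:
            if isinstance(mapped, str) and mapped.startswith("${data.") and mapped.endswith("}"):
                # data reference: strip the "${data." prefix and trailing "}"
                resolved[name] = data_row[mapped[len("${data."):-1]]
            else:
                resolved[name] = mapped  # static value
        elif "default" in spec:
            resolved[name] = spec["default"]
        elif name in data_row:
            resolved[name] = data_row[name]
    return resolved


# the flow, data, and column mapping from the example above
flow_inputs = {
    "input1": {"type": "string", "default": "default_val1"},
    "input2": {"type": "string", "default": "default_val2"},
    "input3": {"type": "string"},
    "input4": {"type": "string"},
}
column_mapping = {"input1": "val1_in_column_mapping", "input3": "${data.input3}"}
data_row = {"input3": "val3_in_data", "input4": "val4_in_data"}
resolved = resolve_line_inputs(flow_inputs, column_mapping, data_row)
```

Running this yields the same values shown in the run details above.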

(source: promptflow/docs/how-to-guides/run-and-evaluate-a-flow/use-column-mapping.md)
# Faiss Index Lookup
Faiss Index Lookup is a tool tailored for querying within a user-provided Faiss-based vector store. In combination with our Large Language Model (LLM) tool, it empowers users to extract contextually relevant information from a domain knowledge base.
## Requirements
- For AzureML users, the tool is installed in the default image, so you can use it without extra installation.
- For local users:
  - if your index is stored in a local path: `pip install promptflow-vectordb`
  - if your index is stored in Azure storage: `pip install promptflow-vectordb[azure]`
## Prerequisites
### For AzureML users,
- step 1. Prepare an accessible path on Azure Blob Storage. Here's the guide if a new storage account needs to be created: [Azure Storage Account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal).
- step 2. Create related Faiss-based index files on Azure Blob Storage. We support the LangChain format (index.faiss + index.pkl) for the index files, which can be prepared either by employing our promptflow-vectordb SDK or following the quick guide from [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss). Please refer to the instructions of <a href="https://aka.ms/pf-sample-build-faiss-index" target="_blank">An example code for creating Faiss index</a> for building index using promptflow-vectordb SDK.
- step 3. Based on where you put your own index files, the identity used by the promptflow runtime should be granted with certain roles. Please refer to [Steps to assign an Azure role](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-steps):
| Location | Role |
| ---- | ---- |
| workspace datastores or workspace default blob | AzureML Data Scientist |
| other blobs | Storage Blob Data Reader |
### For local users,
- Create Faiss-based index files in a local path by following only step 2 above.
## Inputs
The tool accepts the following inputs:
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| path | string | URL or path for the vector store.<br><br>local path (for local users):<br>`<local_path_to_the_index_folder>`<br><br> Azure blob URL format (with [azure] extra installed):<br>https://`<account_name>`.blob.core.windows.net/`<container_name>`/`<path_and_folder_name>`.<br><br>AML datastore URL format (with [azure] extra installed):<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/data/`<data_path>`<br><br>public http/https URL (for public demonstration):<br>http(s)://`<path_and_folder_name>` | Yes |
| vector | list[float] | The target vector to be queried, which can be generated by the LLM tool. | Yes |
| top_k | integer | The count of top-scored entities to return. Default value is 3. | No |
## Outputs
The following is an example for JSON format response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of vector search result provided by our promptflow-vectordb SDK. For the Faiss Index Search, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| text | string | Text of the entity |
| score | float | Distance between the entity and the query vector |
| metadata | dict | Customized key-value pairs provided by user when create the index |
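Because `score` is a distance, smaller values indicate closer matches. Below is a minimal sketch of post-processing the tool's JSON response to rank entities (the entity values are illustrative, mirroring the sample output):

```python
import json

# a trimmed-down version of the tool's JSON response
response = json.loads("""[
  {"text": "sample text #0", "score": 0.0, "metadata": {"title": "title0"}},
  {"text": "sample text #1", "score": 0.05, "metadata": {"title": "title1"}},
  {"text": "sample text #2", "score": 0.2, "metadata": {"title": "title2"}}
]""")

# sort ascending by distance, so the closest entity comes first
ranked = sorted(response, key=lambda entity: entity["score"])
best_match = ranked[0]["text"]
print(best_match)  # prints "sample text #0"
```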
<details>
<summary>Output</summary>
```json
[
{
"metadata": {
"link": "http://sample_link_0",
"title": "title0"
},
"original_entity": null,
"score": 0,
"text": "sample text #0",
"vector": null
},
{
"metadata": {
"link": "http://sample_link_1",
"title": "title1"
},
"original_entity": null,
"score": 0.05000000447034836,
"text": "sample text #1",
"vector": null
},
{
"metadata": {
"link": "http://sample_link_2",
"title": "title2"
},
"original_entity": null,
"score": 0.20000001788139343,
"text": "sample text #2",
"vector": null
}
]
```
</details>

(source: promptflow/docs/reference/tools-reference/faiss_index_lookup_tool.md)
from .aoai import AzureOpenAI # noqa: F401
from .openai import OpenAI # noqa: F401
from .serpapi import SerpAPI # noqa: F401

(source: promptflow/src/promptflow-tools/promptflow/tools/__init__.py)
promptflow.tools.open_model_llm.OpenModelLLM.call:
name: Open Model LLM
description: Use an open model from the Azure Model catalog, deployed to an AzureML Online Endpoint for LLM Chat or Completion API calls.
icon: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==
type: custom_llm
module: promptflow.tools.open_model_llm
class_name: OpenModelLLM
function: call
inputs:
endpoint_name:
type:
- string
dynamic_list:
func_path: promptflow.tools.open_model_llm.list_endpoint_names
allow_manual_entry: true # Allow the user to clear this field
is_multi_select: false
deployment_name:
default: ''
type:
- string
dynamic_list:
func_path: promptflow.tools.open_model_llm.list_deployment_names
func_kwargs:
- name: endpoint
type:
- string
optional: true
        reference: ${inputs.endpoint_name}
allow_manual_entry: true
is_multi_select: false
api:
enum:
- chat
- completion
type:
- string
temperature:
default: 1.0
type:
- double
max_new_tokens:
default: 500
type:
- int
top_p:
default: 1.0
advanced: true
type:
- double
model_kwargs:
default: "{}"
advanced: true
type:
- object

(source: promptflow/src/promptflow-tools/promptflow/tools/yamls/open_model_llm.yaml)
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import argparse
import functools
import json
from typing import Dict, List, Optional
from promptflow._cli._params import (
add_param_all_results,
add_param_archived_only,
add_param_include_archived,
add_param_max_results,
add_param_output,
add_param_output_format,
add_param_overwrite,
add_param_run_name,
add_param_set,
base_params,
)
from promptflow._cli._pf._run import _parse_metadata_args, add_run_create_common, create_run
from promptflow._cli._pf_azure._utils import _get_azure_pf_client
from promptflow._cli._utils import (
_output_result_list_with_format,
_set_workspace_argument_for_subparsers,
activate_action,
exception_handler,
pretty_print_dataframe_as_table,
)
from promptflow._sdk._constants import MAX_SHOW_DETAILS_RESULTS, ListViewType
from promptflow._sdk._errors import InvalidRunStatusError
from promptflow._sdk._utils import print_red_error
from promptflow.azure._restclient.flow_service_caller import FlowRequestException
def add_parser_run(subparsers):
"""Add run parser to the pfazure subparsers."""
run_parser = subparsers.add_parser(
"run", description="A CLI tool to manage cloud runs for prompt flow.", help="Manage prompt flow runs."
)
subparsers = run_parser.add_subparsers()
add_run_create_cloud(subparsers)
add_parser_run_list(subparsers)
add_parser_run_stream(subparsers)
add_parser_run_show(subparsers)
add_parser_run_show_details(subparsers)
add_parser_run_show_metrics(subparsers)
add_parser_run_cancel(subparsers)
add_parser_run_visualize(subparsers)
add_parser_run_archive(subparsers)
add_parser_run_restore(subparsers)
add_parser_run_update(subparsers)
add_parser_run_download(subparsers)
run_parser.set_defaults(action="run")
def add_run_create_cloud(subparsers):
epilog = """
Example:
# Create a run with YAML file:
pfazure run create -f <yaml-filename>
# Create a run from flow directory and reference a run:
pfazure run create --flow <path-to-flow-directory> --data <path-to-data-file> --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run <run-name> --variant "${summarize_text_content.variant_0}" --stream
# Create a run from existing workspace flow
pfazure run create --flow azureml:<flow-name> --data <path-to-data-file> --column-mapping <key-value-pair>
# Create a run from existing registry flow
pfazure run create --flow azureml://registries/<registry-name>/models/<flow-name>/versions/<version> --data <path-to-data-file> --column-mapping <key-value-pair>
""" # noqa: E501
def add_param_data(parser):
# cloud pf can also accept remote data
parser.add_argument(
"--data", type=str, help="Local path to the data file or remote data. e.g. azureml:name:version."
)
add_param_runtime = lambda parser: parser.add_argument("--runtime", type=str, help=argparse.SUPPRESS) # noqa: E731
add_param_reset = lambda parser: parser.add_argument( # noqa: E731
"--reset-runtime", action="store_true", help=argparse.SUPPRESS
)
add_run_create_common(
subparsers,
[add_param_data, add_param_runtime, add_param_reset, _set_workspace_argument_for_subparsers],
epilog=epilog,
)
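The registration pattern used throughout this module — small `add_param` callables applied by a shared `activate_action` helper — can be sketched with plain `argparse`. The helper below is a simplified stand-in for the real `activate_action`, not its actual implementation:

```python
import argparse

def add_param_name(parser):
    # Each add_param callable owns exactly one argument definition.
    parser.add_argument("-n", "--name", type=str, required=True)

def add_param_output(parser):
    parser.add_argument("-o", "--output", type=str, default=None)

def activate_action(name, subparsers, add_params, action_param_name="sub_action"):
    # Register a sub-command and apply every parameter callable to it.
    action = subparsers.add_parser(name)
    for add_param in add_params:
        add_param(action)
    action.set_defaults(**{action_param_name: name})
    return action

parser = argparse.ArgumentParser(prog="pfazure")
subparsers = parser.add_subparsers()
activate_action("show", subparsers, [add_param_name, add_param_output])

args = parser.parse_args(["show", "-n", "my-run"])
```

Because each sub-command composes the same reusable `add_param` callables, shared flags stay consistent across all sub-commands.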
def add_parser_run_list(subparsers):
"""Add run list parser to the pfazure subparsers."""
epilog = """
Examples:
# List runs status:
pfazure run list
# List most recent 10 runs status:
pfazure run list --max-results 10
# List active and archived runs status:
pfazure run list --include-archived
# List archived runs status only:
pfazure run list --archived-only
# List all runs status as table:
pfazure run list --output table
"""
add_params = [
add_param_max_results,
add_param_archived_only,
add_param_include_archived,
add_param_output_format,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="list",
        description="A CLI tool to list all runs.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="List runs in a workspace.",
action_param_name="sub_action",
)
def add_parser_run_stream(subparsers):
"""Add run stream parser to the pfazure subparsers."""
epilog = """
Example:
# Stream run logs:
pfazure run stream --name <name>
"""
add_params = [
add_param_run_name,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="stream",
description="A CLI tool to stream run logs to the console.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Stream run logs to the console.",
action_param_name="sub_action",
)
def add_parser_run_show(subparsers):
"""Add run show parser to the pfazure subparsers."""
epilog = """
Example:
# Show the status of a run:
pfazure run show --name <name>
"""
add_params = [
add_param_run_name,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="show",
description="A CLI tool to show a run.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Show a run.",
action_param_name="sub_action",
)
def add_parser_run_show_details(subparsers):
"""Add run show details parser to the pfazure subparsers."""
epilog = """
Example:
# View input(s) and output(s) of a run:
pfazure run show-details --name <name>
"""
add_param_max_results = lambda parser: parser.add_argument( # noqa: E731
"-r",
"--max-results",
dest="max_results",
type=int,
default=MAX_SHOW_DETAILS_RESULTS,
help=f"Number of lines to show. Default is {MAX_SHOW_DETAILS_RESULTS}.",
)
add_params = [
add_param_max_results,
add_param_run_name,
add_param_all_results,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="show-details",
        description="A CLI tool to show run details.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
        help_message="Show run details.",
action_param_name="sub_action",
)
def add_parser_run_show_metrics(subparsers):
"""Add run show metrics parser to the pfazure subparsers."""
epilog = """
Example:
# View metrics of a run:
pfazure run show-metrics --name <name>
"""
add_params = [
add_param_run_name,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="show-metrics",
description="A CLI tool to show run metrics.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Show run metrics.",
action_param_name="sub_action",
)
def add_parser_run_cancel(subparsers):
"""Add run cancel parser to the pfazure subparsers."""
epilog = """
Example:
# Cancel a run:
pfazure run cancel --name <name>
"""
add_params = [
add_param_run_name,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="cancel",
description="A CLI tool to cancel a run.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Cancel a run.",
action_param_name="sub_action",
)
def add_parser_run_visualize(subparsers):
"""Add run visualize parser to the pfazure subparsers."""
epilog = """
Examples:
# Visualize a run:
pfazure run visualize -n <name>
# Visualize runs:
pfazure run visualize --names "<name1,name2>"
pfazure run visualize --names "<name1>, <name2>"
"""
add_param_name = lambda parser: parser.add_argument( # noqa: E731
"-n", "--names", type=str, required=True, help="Name of the runs, comma separated."
)
add_param_html_path = lambda parser: parser.add_argument( # noqa: E731
"--html-path", type=str, default=None, help=argparse.SUPPRESS
)
add_params = [
add_param_name,
add_param_html_path,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="visualize",
description="A CLI tool to visualize a run.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Visualize a run.",
action_param_name="sub_action",
)
def add_parser_run_archive(subparsers):
"""Add run archive parser to the pfazure subparsers."""
epilog = """
Examples:
# Archive a run:
pfazure run archive -n <name>
"""
add_params = [
add_param_run_name,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="archive",
description="A CLI tool to archive a run.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Archive a run.",
action_param_name="sub_action",
)
def add_parser_run_restore(subparsers):
"""Add run restore parser to the pfazure subparsers."""
epilog = """
Examples:
# Restore an archived run:
pfazure run restore -n <name>
"""
add_params = [
add_param_run_name,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="restore",
description="A CLI tool to restore a run.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Restore a run.",
action_param_name="sub_action",
)
def add_parser_run_update(subparsers):
"""Add run update parser to the pfazure subparsers."""
epilog = """
Example:
    # Update run metadata:
pfazure run update --name <name> --set display_name="<display-name>" description="<description>" tags.key="<value>"
"""
add_params = [
add_param_run_name,
add_param_set,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="update",
description="A CLI tool to update a run.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Update a run.",
action_param_name="sub_action",
)
def add_parser_run_download(subparsers):
"""Add run download parser to the pfazure subparsers."""
epilog = """
Example:
    # Download run data to a local folder:
pfazure run download --name <name> --output <output-folder-path>
"""
add_params = [
add_param_run_name,
add_param_output,
add_param_overwrite,
_set_workspace_argument_for_subparsers,
] + base_params
activate_action(
name="download",
description="A CLI tool to download a run.",
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Download a run.",
action_param_name="sub_action",
)
def dispatch_run_commands(args: argparse.Namespace):
if args.sub_action == "create":
pf = _get_azure_pf_client(args.subscription, args.resource_group, args.workspace_name, debug=args.debug)
create_run(
create_func=functools.partial(
pf.runs.create_or_update, runtime=args.runtime, reset_runtime=args.reset_runtime
),
args=args,
)
elif args.sub_action == "list":
list_runs(
args.subscription,
args.resource_group,
args.workspace_name,
args.max_results,
args.archived_only,
args.include_archived,
args.output,
)
elif args.sub_action == "show":
show_run(args.subscription, args.resource_group, args.workspace_name, args.name)
elif args.sub_action == "show-details":
show_run_details(
args.subscription,
args.resource_group,
args.workspace_name,
args.name,
args.max_results,
args.all_results,
args.debug,
)
elif args.sub_action == "show-metrics":
show_metrics(args.subscription, args.resource_group, args.workspace_name, args.name)
elif args.sub_action == "stream":
stream_run(args.subscription, args.resource_group, args.workspace_name, args.name, args.debug)
elif args.sub_action == "visualize":
visualize(
args.subscription,
args.resource_group,
args.workspace_name,
args.names,
args.html_path,
args.debug,
)
elif args.sub_action == "archive":
archive_run(args.subscription, args.resource_group, args.workspace_name, args.name)
elif args.sub_action == "restore":
restore_run(args.subscription, args.resource_group, args.workspace_name, args.name)
elif args.sub_action == "update":
update_run(args.subscription, args.resource_group, args.workspace_name, args.name, params=args.params_override)
elif args.sub_action == "download":
download_run(args)
elif args.sub_action == "cancel":
cancel_run(args)
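The `elif` chain above can equivalently be written as a table-driven dispatch. A minimal sketch of that design — the handlers here are illustrative placeholders, not the real command implementations:

```python
import argparse

def handle_show(args):
    # Placeholder for show_run(...).
    return f"show:{args.name}"

def handle_cancel(args):
    # Placeholder for cancel_run(...).
    return f"cancel:{args.name}"

# Mapping from sub-action name to its handler.
DISPATCH = {
    "show": handle_show,
    "cancel": handle_cancel,
}

def dispatch(args: argparse.Namespace):
    handler = DISPATCH.get(args.sub_action)
    if handler is None:
        raise ValueError(f"Unknown sub-action: {args.sub_action!r}")
    return handler(args)
```

A dispatch table keeps each handler discoverable in one place, at the cost of hiding the per-command argument plumbing that the explicit chain makes visible.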
@exception_handler("List runs")
def list_runs(
subscription_id,
resource_group,
workspace_name,
max_results,
archived_only,
include_archived,
output,
):
"""List all runs from cloud."""
if max_results < 1:
raise ValueError(f"'max_results' must be a positive integer, got {max_results!r}")
# Default list_view_type is ACTIVE_ONLY
if archived_only and include_archived:
raise ValueError("Cannot specify both 'archived_only' and 'include_archived'")
list_view_type = ListViewType.ACTIVE_ONLY
if archived_only:
list_view_type = ListViewType.ARCHIVED_ONLY
if include_archived:
list_view_type = ListViewType.ALL
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name)
runs = pf.runs.list(max_results=max_results, list_view_type=list_view_type)
# hide additional info, debug info and properties in run list for better user experience
run_list = [
run._to_dict(exclude_additional_info=True, exclude_debug_info=True, exclude_properties=True) for run in runs
]
_output_result_list_with_format(result_list=run_list, output_format=output)
return runs
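The flag-to-view-type resolution in `list_runs` can be isolated into a small pure function. This sketch mirrors that logic; the enum values are illustrative stand-ins for the real `ListViewType` imported from `promptflow._sdk._constants`:

```python
from enum import Enum

class ListViewType(Enum):
    # Illustrative values; the real enum lives in promptflow._sdk._constants.
    ACTIVE_ONLY = "ActiveOnly"
    ARCHIVED_ONLY = "ArchivedOnly"
    ALL = "All"

def resolve_list_view_type(archived_only: bool, include_archived: bool) -> ListViewType:
    # The two flags are mutually exclusive; ACTIVE_ONLY is the default.
    if archived_only and include_archived:
        raise ValueError("Cannot specify both 'archived_only' and 'include_archived'")
    if archived_only:
        return ListViewType.ARCHIVED_ONLY
    if include_archived:
        return ListViewType.ALL
    return ListViewType.ACTIVE_ONLY
```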
@exception_handler("Show run")
def show_run(subscription_id, resource_group, workspace_name, run_name):
"""Show a run from cloud."""
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name)
run = pf.runs.get(run=run_name)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Show run details")
def show_run_details(subscription_id, resource_group, workspace_name, run_name, max_results, all_results, debug=False):
"""Show a run details from cloud."""
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name, debug=debug)
details = pf.runs.get_details(run=run_name, max_results=max_results, all_results=all_results)
    details.fillna(value="(Failed)", inplace=True)  # replace nan with an explicit placeholder
pretty_print_dataframe_as_table(details)
@exception_handler("Show run metrics")
def show_metrics(subscription_id, resource_group, workspace_name, run_name):
"""Show run metrics from cloud."""
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name)
metrics = pf.runs.get_metrics(run=run_name)
print(json.dumps(metrics, indent=4))
@exception_handler("Stream run")
def stream_run(subscription_id, resource_group, workspace_name, run_name, debug=False):
"""Stream run logs from cloud."""
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name, debug=debug)
run = pf.runs.stream(run_name)
print("\n")
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Visualize run")
def visualize(
subscription_id: str,
resource_group: str,
workspace_name: str,
names: str,
html_path: Optional[str] = None,
debug: bool = False,
):
run_names = [name.strip() for name in names.split(",")]
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name, debug=debug)
try:
pf.runs.visualize(run_names, html_path=html_path)
except FlowRequestException as e:
error_message = f"Visualize failed, request service error: {str(e)}"
print_red_error(error_message)
except InvalidRunStatusError as e:
error_message = f"Visualize failed: {str(e)}"
print_red_error(error_message)
@exception_handler("Archive run")
def archive_run(
subscription_id: str,
resource_group: str,
workspace_name: str,
run_name: str,
):
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name)
run = pf.runs.archive(run=run_name)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Restore run")
def restore_run(
subscription_id: str,
resource_group: str,
workspace_name: str,
run_name: str,
):
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name)
run = pf.runs.restore(run=run_name)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Update run")
def update_run(
subscription_id: str,
resource_group: str,
workspace_name: str,
run_name: str,
params: List[Dict[str, str]],
):
# params_override can have multiple items when user specifies with
# `--set key1=value1 key2=value`
# so we need to merge them first.
display_name, description, tags = _parse_metadata_args(params)
pf = _get_azure_pf_client(subscription_id, resource_group, workspace_name)
run = pf.runs.update(run=run_name, display_name=display_name, description=description, tags=tags)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Download run")
def download_run(args: argparse.Namespace):
pf = _get_azure_pf_client(args.subscription, args.resource_group, args.workspace_name, debug=args.debug)
pf.runs.download(run=args.name, output=args.output, overwrite=args.overwrite)
@exception_handler("Cancel run")
def cancel_run(args: argparse.Namespace):
pf = _get_azure_pf_client(args.subscription, args.resource_group, args.workspace_name, debug=args.debug)
pf.runs.cancel(run=args.name)
# --- file: promptflow/src/promptflow/promptflow/_cli/_pf_azure/_run.py ---
{
"package": {},
"code": {
"line_process.py": {
"type": "python",
"inputs": {
"groundtruth": {
"type": [
"string"
]
},
"prediction": {
"type": [
"string"
]
}
},
"function": "line_process"
},
"aggregate.py": {
"type": "python",
"inputs": {
"processed_results": {
"type": [
"object"
]
}
},
"function": "aggregate"
}
}
}
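A `line_process`/`aggregate` pair consistent with the tool meta above might look like the following. The comparison and metric logic here is illustrative only — the actual evaluation flow's implementation may differ:

```python
def line_process(groundtruth: str, prediction: str) -> str:
    # Compare a single line's prediction against its ground truth.
    return "Correct" if groundtruth.strip().lower() == prediction.strip().lower() else "Incorrect"

def aggregate(processed_results: list) -> dict:
    # Reduce the per-line results into a single accuracy metric.
    total = len(processed_results)
    correct = sum(1 for r in processed_results if r == "Correct")
    return {"accuracy": correct / total if total else 0.0}
```

Note how the input names and types (`groundtruth`/`prediction` as strings, `processed_results` as an object) match the entries in `flow.tools.json`; the meta is generated from the function signatures.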
# --- file: promptflow/src/promptflow/promptflow/_cli/data/evaluation_flow/.promptflow/flow.tools.json ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
CONNECTION_NAME_PROPERTY = "__connection_name"
CONNECTION_SECRET_KEYS = "__secret_keys"
PROMPTFLOW_CONNECTIONS = "PROMPTFLOW_CONNECTIONS"
PROMPTFLOW_SECRETS_FILE = "PROMPTFLOW_SECRETS_FILE"
PF_NO_INTERACTIVE_LOGIN = "PF_NO_INTERACTIVE_LOGIN"
PF_LOGGING_LEVEL = "PF_LOGGING_LEVEL"
OPENAI_API_KEY = "openai-api-key"
BING_API_KEY = "bing-api-key"
AOAI_API_KEY = "aoai-api-key"
SERPAPI_API_KEY = "serpapi-api-key"
CONTENT_SAFETY_API_KEY = "content-safety-api-key"
ERROR_RESPONSE_COMPONENT_NAME = "promptflow"
EXTENSION_UA = "prompt-flow-extension"
LANGUAGE_KEY = "language"
DEFAULT_ENCODING = "utf-8"
# Constants related to execution
LINE_NUMBER_KEY = "line_number" # Using the same key with portal.
LINE_TIMEOUT_SEC = 600
class FlowLanguage:
    """The enum of flow languages."""
Python = "python"
CSharp = "csharp"
class AvailableIDE:
VS = "vs"
VS_CODE = "vsc"
USER_AGENT = "USER_AGENT"
PF_USER_AGENT = "PF_USER_AGENT"
CLI_PACKAGE_NAME = "promptflow"
CURRENT_VERSION = "current_version"
LATEST_VERSION = "latest_version"
LAST_HINT_TIME = "last_hint_time"
LAST_CHECK_TIME = "last_check_time"
PF_VERSION_CHECK = "pf_version_check.json"
HINT_INTERVAL_DAY = 7
GET_PYPI_INTERVAL_DAY = 7
_ENV_PF_INSTALLER = "PF_INSTALLER"
STREAMING_ANIMATION_TIME = 0.01
# trace related
OTEL_RESOURCE_SERVICE_NAME = "promptflow"
DEFAULT_SPAN_TYPE = "default"
class TraceEnvironmentVariableName:
EXPERIMENT = "PF_TRACE_EXPERIMENT"
SESSION_ID = "PF_TRACE_SESSION_ID"
class SpanFieldName:
NAME = "name"
CONTEXT = "context"
KIND = "kind"
PARENT_ID = "parent_id"
START_TIME = "start_time"
END_TIME = "end_time"
STATUS = "status"
ATTRIBUTES = "attributes"
EVENTS = "events"
LINKS = "links"
RESOURCE = "resource"
class SpanContextFieldName:
TRACE_ID = "trace_id"
SPAN_ID = "span_id"
TRACE_STATE = "trace_state"
class SpanStatusFieldName:
STATUS_CODE = "status_code"
class SpanAttributeFieldName:
FRAMEWORK = "framework"
SPAN_TYPE = "span_type"
FUNCTION = "function"
INPUTS = "inputs"
OUTPUT = "output"
SESSION_ID = "session_id"
PATH = "path"
FLOW_ID = "flow_id"
RUN = "run"
EXPERIMENT = "experiment"
LINE_RUN_ID = "line_run_id"
REFERENCED_LINE_RUN_ID = "referenced.line_run_id"
COMPLETION_TOKEN_COUNT = "__computed__.cumulative_token_count.completion"
PROMPT_TOKEN_COUNT = "__computed__.cumulative_token_count.prompt"
TOTAL_TOKEN_COUNT = "__computed__.cumulative_token_count.total"
class SpanResourceAttributesFieldName:
SERVICE_NAME = "service.name"
class SpanResourceFieldName:
ATTRIBUTES = "attributes"
SCHEMA_URL = "schema_url"
class ResourceAttributeFieldName:
EXPERIMENT_NAME = "experiment.name"
SERVICE_NAME = "service.name"
SESSION_ID = "session.id"
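Consumers of these constants typically build span attribute dictionaries from the field-name classes instead of raw strings, which keeps producers and readers of trace data in sync. A minimal sketch (the relevant constants are copied here so the example is self-contained; `build_span_attributes` is a hypothetical helper, not part of promptflow):

```python
class SpanAttributeFieldName:
    # Copied from the constants above for a self-contained example.
    SPAN_TYPE = "span_type"
    FUNCTION = "function"
    INPUTS = "inputs"
    OUTPUT = "output"

DEFAULT_SPAN_TYPE = "default"

def build_span_attributes(function_name, inputs, output, span_type=DEFAULT_SPAN_TYPE):
    # Using the named constants instead of string literals means a renamed
    # attribute only has to change in one place.
    return {
        SpanAttributeFieldName.SPAN_TYPE: span_type,
        SpanAttributeFieldName.FUNCTION: function_name,
        SpanAttributeFieldName.INPUTS: inputs,
        SpanAttributeFieldName.OUTPUT: output,
    }
```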
# --- file: promptflow/src/promptflow/promptflow/_constants.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import importlib
import importlib.util
import inspect
import logging
import traceback
import types
from functools import partial
from pathlib import Path
from typing import Callable, Dict, List, Mapping, Optional, Tuple, Union
from promptflow._core._errors import (
InputTypeMismatch,
InvalidSource,
MissingRequiredInputs,
PackageToolNotFoundError,
ToolLoadError,
)
from promptflow._core.tool_meta_generator import (
_parse_tool_from_function,
collect_tool_function_in_module,
load_python_module_from_file,
)
from promptflow._utils.connection_utils import (
generate_custom_strong_type_connection_spec,
generate_custom_strong_type_connection_template,
)
from promptflow._utils.tool_utils import (
_DEPRECATED_TOOLS,
DynamicListError,
RetrieveToolFuncResultError,
_find_deprecated_tools,
append_workspace_triple_to_func_input_params,
function_to_tool_definition,
get_prompt_param_name_from_func,
load_function_from_function_path,
validate_dynamic_list_func_response_type,
validate_tool_func_result,
assign_tool_input_index_for_ux_order_if_needed,
)
from promptflow._utils.yaml_utils import load_yaml
from promptflow.contracts.flow import InputAssignment, InputValueType, Node, ToolSourceType
from promptflow.contracts.tool import ConnectionType, Tool, ToolType
from promptflow.exceptions import ErrorTarget, SystemErrorException, UserErrorException, ValidationException
module_logger = logging.getLogger(__name__)
PACKAGE_TOOLS_ENTRY = "package_tools"
def collect_tools_from_directory(base_dir) -> dict:
tools = {}
for f in Path(base_dir).glob("**/*.yaml"):
with open(f, "r") as f:
            # Inputs are automatically assigned an index based on their order in the tool YAML.
            # This relies on ruamel.yaml preserving key order when loading a YAML file.
            # For more information on this ruamel.yaml behavior, please
            # visit https://yaml.readthedocs.io/en/latest/overview/#overview.
tools_in_file = load_yaml(f)
for identifier, tool in tools_in_file.items():
tools[identifier] = tool
return tools
def _get_entry_points_by_group(group):
# lazy load to improve performance for scenarios that don't need to load package tools
import importlib.metadata
    # In Python 3.10 and later, entry_points() returns a selectable view of EntryPoint objects,
    # which allows us to select entry points by group. In earlier versions, entry_points()
    # returns a dictionary-like object, so we can use the group name directly as a key.
entry_points = importlib.metadata.entry_points()
if isinstance(entry_points, list):
return entry_points.select(group=group)
else:
return entry_points.get(group, [])
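The same version compatibility can be handled with duck typing instead of a type check — probing for the `.select` method that newer `importlib.metadata` versions provide. A sketch of that variant (an alternative, not the module's actual implementation):

```python
import importlib.metadata

def get_entry_points_by_group(group):
    # Python 3.10+: entry_points() supports .select(group=...).
    # Older versions return a dict keyed by group name.
    entry_points = importlib.metadata.entry_points()
    if hasattr(entry_points, "select"):
        return list(entry_points.select(group=group))
    return list(entry_points.get(group, []))
```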
def collect_package_tools(keys: Optional[List[str]] = None) -> dict:
"""Collect all tools from all installed packages."""
all_package_tools = {}
if keys is not None:
keys = set(keys)
entry_points = _get_entry_points_by_group(PACKAGE_TOOLS_ENTRY)
for entry_point in entry_points:
try:
list_tool_func = entry_point.load()
package_tools = list_tool_func()
for identifier, tool in package_tools.items():
# Only load required tools to avoid unnecessary loading when keys is provided
if isinstance(keys, set) and identifier not in keys:
# Support to collect new tool id if node source tool is a deprecated tool.
deprecated_tool_ids = tool.get(_DEPRECATED_TOOLS, [])
if not set(deprecated_tool_ids).intersection(keys):
continue
m = tool["module"]
importlib.import_module(m) # Import the module to make sure it is valid
tool["package"] = entry_point.dist.metadata["Name"]
tool["package_version"] = entry_point.dist.version
assign_tool_input_index_for_ux_order_if_needed(tool)
all_package_tools[identifier] = tool
except Exception as e:
msg = (
f"Failed to load tools from package {entry_point.dist.metadata['Name']}: {e},"
+ f" traceback: {traceback.format_exc()}"
)
module_logger.warning(msg)
return all_package_tools
def collect_package_tools_and_connections(keys: Optional[List[str]] = None) -> dict:
"""Collect all tools and custom strong type connections from all installed packages."""
all_package_tools = {}
all_package_connection_specs = {}
all_package_connection_templates = {}
if keys is not None:
keys = set(keys)
entry_points = _get_entry_points_by_group(PACKAGE_TOOLS_ENTRY)
for entry_point in entry_points:
try:
list_tool_func = entry_point.load()
package_tools = list_tool_func()
for identifier, tool in package_tools.items():
# Only load required tools to avoid unnecessary loading when keys is provided
if isinstance(keys, set) and identifier not in keys:
continue
m = tool["module"]
module = importlib.import_module(m) # Import the module to make sure it is valid
tool["package"] = entry_point.dist.metadata["Name"]
tool["package_version"] = entry_point.dist.version
assign_tool_input_index_for_ux_order_if_needed(tool)
all_package_tools[identifier] = tool
# Get custom strong type connection definition
custom_strong_type_connections_classes = [
obj
for name, obj in inspect.getmembers(module)
if inspect.isclass(obj)
and ConnectionType.is_custom_strong_type(obj)
and (not ConnectionType.is_connection_class_name(name))
]
if custom_strong_type_connections_classes:
for cls in custom_strong_type_connections_classes:
identifier = f"{cls.__module__}.{cls.__name__}"
connection_spec = generate_custom_strong_type_connection_spec(
cls, entry_point.dist.metadata["Name"], entry_point.dist.version
)
all_package_connection_specs[identifier] = connection_spec
all_package_connection_templates[identifier] = generate_custom_strong_type_connection_template(
cls, connection_spec, entry_point.dist.metadata["Name"], entry_point.dist.version
)
except Exception as e:
msg = (
f"Failed to load tools from package {entry_point.dist.metadata['Name']}: {e},"
+ f" traceback: {traceback.format_exc()}"
)
module_logger.warning(msg)
return all_package_tools, all_package_connection_specs, all_package_connection_templates
def retrieve_tool_func_result(
func_call_scenario: str, func_path: str, func_input_params_dict: Dict, ws_triple_dict: Dict[str, str] = {}
):
func = load_function_from_function_path(func_path)
# get param names from func signature.
func_sig_params = inspect.signature(func).parameters
module_logger.warning(f"func_sig_params of func_path is: '{func_sig_params}'")
module_logger.warning(f"func_input_params_dict is: '{func_input_params_dict}'")
# Append workspace triple to func input params if func signature has kwargs param.
# Or append ws_triple_dict params that are in func signature.
combined_func_input_params = append_workspace_triple_to_func_input_params(
func_sig_params, func_input_params_dict, ws_triple_dict
)
try:
result = func(**combined_func_input_params)
except Exception as e:
raise RetrieveToolFuncResultError(f"Error when calling function {func_path}: {e}")
validate_tool_func_result(func_call_scenario, result)
return result
def gen_dynamic_list(func_path: str, func_input_params_dict: Dict, ws_triple_dict: Dict[str, str] = {}):
func = load_function_from_function_path(func_path)
# get param names from func signature.
func_sig_params = inspect.signature(func).parameters
module_logger.warning(f"func_sig_params of func_path is: '{func_sig_params}'")
module_logger.warning(f"func_input_params_dict is: '{func_input_params_dict}'")
combined_func_input_params = append_workspace_triple_to_func_input_params(
func_sig_params, func_input_params_dict, ws_triple_dict
)
try:
result = func(**combined_func_input_params)
except Exception as e:
raise DynamicListError(f"Error when calling function {func_path}: {e}")
# validate response is of required format. Throw correct message if response is empty.
validate_dynamic_list_func_response_type(result, func.__name__)
return result
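A dynamic list function — the kind referenced by `func_path` in a tool YAML and invoked by `gen_dynamic_list` — returns a list of option dicts. The schema sketched here (`"value"` required, `"display_value"` optional) is hedged from the tool documentation, and the endpoint names are made up:

```python
def list_endpoint_names(prefix: str = "", **kwargs):
    # Each option is a dict; "value" is what gets filled into the input,
    # "display_value" is an optional friendly label shown in the UX.
    endpoints = ["gpt2-endpoint", "llama-endpoint"]  # illustrative names
    return [
        {"value": name, "display_value": name}
        for name in endpoints
        if name.startswith(prefix)
    ]
```

The `**kwargs` catch-all is what lets `append_workspace_triple_to_func_input_params` pass subscription/resource-group/workspace parameters through without the function having to declare them.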
class BuiltinsManager:
def __init__(self) -> None:
pass
@staticmethod
def _load_llm_api(api_name: str) -> Tool:
result = apis.get(api_name)
if result is None:
raise APINotFound(
message=f"The API '{api_name}' is not found.",
target=ErrorTarget.EXECUTOR,
)
return result
def load_builtin(
self,
tool: Tool,
node_inputs: Optional[dict] = None,
) -> Tuple[Callable, dict]:
return BuiltinsManager._load_package_tool(tool.name, tool.module, tool.class_name, tool.function, node_inputs)
@staticmethod
def _load_package_tool(tool_name, module_name, class_name, method_name, node_inputs):
module = importlib.import_module(module_name)
return BuiltinsManager._load_tool_from_module(
module, tool_name, module_name, class_name, method_name, node_inputs
)
@staticmethod
def _load_tool_from_module(
module, tool_name, module_name, class_name, method_name, node_inputs: Mapping[str, InputAssignment]
):
"""Load tool from given module with node inputs."""
if class_name is None:
return getattr(module, method_name), {}
provider_class = getattr(module, class_name)
        # Note: each value v in node_inputs is of type InputAssignment.
init_inputs = provider_class.get_initialize_inputs()
init_inputs_values = {}
for k, v in node_inputs.items():
if k not in init_inputs:
continue
if v.value_type != InputValueType.LITERAL:
raise InputTypeMismatch(
message_format=(
"Invalid input for '{tool_name}': Initialization input '{input_name}' requires a literal "
"value, but {input_value} was received."
),
tool_name=tool_name,
input_name=k,
input_value=v.serialize(),
target=ErrorTarget.EXECUTOR,
)
init_inputs_values[k] = v.value
missing_inputs = set(provider_class.get_required_initialize_inputs()) - set(init_inputs_values)
if missing_inputs:
raise MissingRequiredInputs(
message=f"Required inputs {list(missing_inputs)} are not provided for tool '{tool_name}'.",
target=ErrorTarget.EXECUTOR,
)
try:
api = getattr(provider_class(**init_inputs_values), method_name)
except Exception as ex:
error_type_and_message = f"({ex.__class__.__name__}) {ex}"
raise ToolLoadError(
module=module_name,
message_format="Failed to load package tool '{tool_name}': {error_type_and_message}",
tool_name=tool_name,
error_type_and_message=error_type_and_message,
) from ex
        # Return the init_inputs to update node inputs in subsequent steps.
return api, init_inputs
@staticmethod
def load_tool_by_api_name(api_name: str) -> Tool:
if api_name is None:
return None
return BuiltinsManager._load_llm_api(api_name)
def load_prompt_with_api(self, tool: Tool, api: Tool, node_inputs: Optional[dict] = None) -> Tuple[Callable, dict]:
"""Load a prompt template tool with action."""
# Load provider action function
api_func, init_inputs = self.load_builtin(api, node_inputs)
        # Find the prompt template parameter name and pass the tool code to it.
prompt_tpl_param_name = get_prompt_param_name_from_func(api_func)
api_func = partial(api_func, **{prompt_tpl_param_name: tool.code}) if prompt_tpl_param_name else api_func
        # Return the init_inputs to update node inputs in subsequent steps.
return api_func, init_inputs
def load_prompt_rendering(self, tool: Tool):
if not tool.code:
tool.code = ""
from promptflow.tools.template_rendering import render_template_jinja2
return partial(render_template_jinja2, template=tool.code)
@staticmethod
def parse_builtin_tool_method(tool: Tool) -> tuple:
module_name = tool.module
class_name = tool.class_name
method_name = tool.function
return module_name, class_name, method_name
@staticmethod
def is_builtin(tool: Tool) -> bool:
"""Check if the tool is a builtin tool."""
return tool.type == ToolType.PYTHON and tool.code is None and tool.source is None
@staticmethod
def is_llm(tool: Tool) -> bool:
        """Check if the tool is an LLM tool."""
return tool.type == ToolType.LLM
@staticmethod
def is_custom_python(tool: Tool) -> bool:
"""Check if the tool is a custom python tool."""
return tool.type == ToolType.PYTHON and not BuiltinsManager.is_builtin(tool)
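`load_prompt_rendering` above binds the prompt template into the rendering function with `functools.partial`. The same binding pattern, using `string.Template` as a stdlib-only stand-in for `render_template_jinja2` (which is not reproduced here):

```python
from functools import partial
from string import Template

def render_template(template: str, **kwargs) -> str:
    # Stand-in for render_template_jinja2; string.Template keeps this stdlib-only.
    return Template(template).safe_substitute(**kwargs)

# Bind the template up front, as load_prompt_rendering does with partial().
render = partial(render_template, template="Hello, $name!")
```

The caller then only supplies the template variables, never the template itself — the node's prompt is frozen into the callable at load time.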
class ToolsManager:
"""Manage all builtins and user-defined tools."""
def __init__(
self,
loaded_tools: Optional[Mapping[str, Callable]] = None,
) -> None:
loaded_tools = loaded_tools or {}
self._tools = {k: v for k, v in loaded_tools.items()}
def load_tools(self, tools: Mapping[str, Callable]) -> None:
"""Load new tools to the manager."""
self._tools.update(tools)
def loaded(self, tool: str) -> bool:
return tool in self._tools
def get_tool(self, key: str) -> Callable:
if key not in self._tools:
raise ValueError(f"Tool for {key} is not loaded")
return self._tools[key]
def wrap_tool(self, key: str, wrapper: Callable):
"""Wraps the tool with specific name by a given wrapper.
Sometimes we may want to wrap the tool with a decorator, but we don't want to modify the original tool.
i.e. We may want to pass additional arguments to the tool by wrapping it with a decorator,
such as turning on the stream response for AzureOpenAI.chat() by adding a "stream=True" argument.
"""
tool = self.get_tool(key)
self._tools.update({key: wrapper(tool)})
def assert_loaded(self, tool: str):
if tool not in self._tools:
raise ValueError(f"Tool {tool} is not loaded")
# TODO: Remove this method. The code path will not be used in code-first experience.
# Customers are familiar with the term "node", so we use it in error message.
@staticmethod
def _load_custom_tool(tool: Tool, node_name: str) -> Callable:
func_name = tool.function or tool.name
if tool.source and Path(tool.source).exists(): # If source file is provided, load the function from the file
m = load_python_module_from_file(tool.source)
if m is None:
raise CustomToolSourceLoadError(f"Cannot load module from source {tool.source} for node {node_name}.")
return getattr(m, func_name)
if not tool.code:
raise EmptyCodeInCustomTool(f"Missing code in node {node_name}.")
func_code = tool.code
try:
f_globals = {}
exec(func_code, f_globals)
except Exception as e:
raise CustomPythonToolLoadError(f"Error when loading code of node {node_name}: {e}") from e
if func_name not in f_globals:
raise MissingTargetFunction(f"Cannot find function {func_name} in the code of node {node_name}.")
return f_globals[func_name]
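The exec-based loading in `_load_custom_tool` can be illustrated in isolation. The `greet` function and its code string below are illustrative stand-ins for `tool.code` and `tool.function`, not part of any real flow:

```python
# Minimal sketch of the exec-based custom tool loading above.
# func_code / func_name stand in for tool.code / tool.function.
func_code = "def greet(name):\n    return f'Hello, {name}!'"
func_name = "greet"

f_globals = {}
exec(func_code, f_globals)  # defines greet inside f_globals
if func_name not in f_globals:
    raise ValueError(f"Cannot find function {func_name} in the code.")
greet = f_globals[func_name]
result = greet("world")
# → "Hello, world!"
```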
class ToolLoader:
    def __init__(self, working_dir: Path, package_tool_keys: Optional[List[str]] = None) -> None:
self._working_dir = working_dir
self._package_tools = collect_package_tools(package_tool_keys) if package_tool_keys else {}
# Used to handle backward compatibility of tool ID changes.
self._deprecated_tools = _find_deprecated_tools(self._package_tools)
# TODO: Replace NotImplementedError with NotSupported in the future.
def load_tool_for_node(self, node: Node) -> Tool:
if node.source is None:
raise UserErrorException(f"Node {node.name} does not have source defined.")
if node.type == ToolType.PYTHON:
if node.source.type == ToolSourceType.Package:
return self.load_tool_for_package_node(node)
elif node.source.type == ToolSourceType.Code:
_, tool = self.load_tool_for_script_node(node)
return tool
raise NotImplementedError(f"Tool source type {node.source.type} for python tool is not supported yet.")
elif node.type == ToolType.CUSTOM_LLM:
if node.source.type == ToolSourceType.PackageWithPrompt:
return self.load_tool_for_package_node(node)
raise NotImplementedError(f"Tool source type {node.source.type} for custom_llm tool is not supported yet.")
else:
raise NotImplementedError(f"Tool type {node.type} is not supported yet.")
def load_tool_for_package_node(self, node: Node) -> Tool:
if node.source.tool in self._package_tools:
return Tool.deserialize(self._package_tools[node.source.tool])
# If node source tool is not in package tools, try to find the tool ID in deprecated tools.
# If found, load the tool with the new tool ID for backward compatibility.
if node.source.tool in self._deprecated_tools:
new_tool_id = self._deprecated_tools[node.source.tool]
# Used to collect deprecated tool usage and warn user to replace the deprecated tool with the new one.
module_logger.warning(f"Tool ID '{node.source.tool}' is deprecated. Please use '{new_tool_id}' instead.")
return Tool.deserialize(self._package_tools[new_tool_id])
raise PackageToolNotFoundError(
f"Package tool '{node.source.tool}' is not found in the current environment. "
f"All available package tools are: {list(self._package_tools.keys())}.",
target=ErrorTarget.EXECUTOR,
)
def load_tool_for_script_node(self, node: Node) -> Tuple[types.ModuleType, Tool]:
if node.source.path is None:
raise InvalidSource(
target=ErrorTarget.EXECUTOR,
message_format="Load tool failed for node '{node_name}'. The source path is 'None'.",
node_name=node.name,
)
path = node.source.path
if not (self._working_dir / path).is_file():
raise InvalidSource(
target=ErrorTarget.EXECUTOR,
message_format="Load tool failed for node '{node_name}'. Tool file '{source_path}' can not be found.",
source_path=path,
node_name=node.name,
)
m = load_python_module_from_file(self._working_dir / path)
if m is None:
raise CustomToolSourceLoadError(f"Cannot load module from {path}.")
f, init_inputs = collect_tool_function_in_module(m)
return m, _parse_tool_from_function(f, init_inputs, gen_custom_type_conn=True)
def load_tool_for_llm_node(self, node: Node) -> Tool:
api_name = f"{node.provider}.{node.api}"
return BuiltinsManager._load_llm_api(api_name)
builtins = {}
apis = {}
connections = {}
connection_type_to_api_mapping = {}
def _register(provider_cls, collection, type):
from promptflow._core.tool import ToolProvider
if not issubclass(provider_cls, ToolProvider):
raise Exception(f"Class {provider_cls.__name__!r} must be a subclass of promptflow.ToolProvider.")
initialize_inputs = provider_cls.get_initialize_inputs()
# Build tool/provider definition
for name, value in provider_cls.__dict__.items():
if hasattr(value, "__original_function"):
name = value.__original_function.__qualname__
value.__tool = function_to_tool_definition(value, type=type, initialize_inputs=initialize_inputs)
collection[name] = value.__tool
module_logger.debug(f"Registered {name} as a builtin function")
# Get the connection type - provider name mapping for execution use
# Tools/Providers related connection must have been imported
for param in initialize_inputs.values():
if not param.annotation:
continue
annotation_type_name = param.annotation.__name__
if annotation_type_name in connections:
api_name = provider_cls.__name__
module_logger.debug(f"Add connection type {annotation_type_name} to api {api_name} mapping")
connection_type_to_api_mapping[annotation_type_name] = api_name
break
def _register_method(provider_method, collection, type):
name = provider_method.__qualname__
provider_method.__tool = function_to_tool_definition(provider_method, type=type)
collection[name] = provider_method.__tool
module_logger.debug(f"Registered {name} as {type} function")
def register_builtins(provider_cls):
_register(provider_cls, builtins, ToolType.PYTHON)
def register_apis(provider_cls):
_register(provider_cls, apis, ToolType._ACTION)
def register_builtin_method(provider_method):
_register_method(provider_method, builtins, ToolType.PYTHON)
def register_api_method(provider_method):
_register_method(provider_method, apis, ToolType._ACTION)
def register_connections(connection_classes: Union[type, List[type]]):
connection_classes = [connection_classes] if not isinstance(connection_classes, list) else connection_classes
connections.update({cls.__name__: cls for cls in connection_classes})
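The registration pattern above can be sketched standalone; `AzureOpenAIConnection` here is an illustrative stand-in class, not the real promptflow connection type:

```python
# Sketch of register_connections above: connection classes are indexed by
# class name so parameter annotations can later be matched against them.
connections = {}

def register_connections(connection_classes):
    connection_classes = (
        [connection_classes] if not isinstance(connection_classes, list) else connection_classes
    )
    connections.update({cls.__name__: cls for cls in connection_classes})

class AzureOpenAIConnection:  # illustrative stand-in for the real class
    pass

register_connections(AzureOpenAIConnection)
```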
class CustomToolSourceLoadError(SystemErrorException):
pass
class CustomToolError(UserErrorException):
"""Base exception raised when failed to validate tool."""
def __init__(self, message):
super().__init__(message, target=ErrorTarget.TOOL)
class EmptyCodeInCustomTool(CustomToolError):
pass
class CustomPythonToolLoadError(CustomToolError):
pass
class MissingTargetFunction(CustomToolError):
pass
class APINotFound(ValidationException):
pass
| promptflow/src/promptflow/promptflow/_core/tools_manager.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_core/tools_manager.py",
"repo_id": "promptflow",
"token_count": 9491
} | 11 |
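A trimmed, self-contained re-implementation of the `ToolsManager` wrap pattern from the file above can clarify how `wrap_tool` decorates a tool in place; the `echo` tool and upper-casing wrapper are illustrative:

```python
from typing import Callable, Mapping, Optional

class ToolsManager:
    """Trimmed re-implementation of the ToolsManager above, for illustration."""

    def __init__(self, loaded_tools: Optional[Mapping[str, Callable]] = None) -> None:
        self._tools = dict(loaded_tools or {})

    def get_tool(self, key: str) -> Callable:
        if key not in self._tools:
            raise ValueError(f"Tool for {key} is not loaded")
        return self._tools[key]

    def wrap_tool(self, key: str, wrapper: Callable) -> None:
        # replace the tool with its wrapped version without modifying the original
        self._tools[key] = wrapper(self.get_tool(key))

manager = ToolsManager({"echo": lambda text: text})
manager.wrap_tool("echo", lambda tool: lambda text: tool(text).upper())
result = manager.get_tool("echo")("hello")
# → "HELLO"
```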
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import datetime
import os
from contextlib import contextmanager
from pathlib import Path
from typing import List, Union
from filelock import FileLock
from sqlalchemy import create_engine, event, inspect, text
from sqlalchemy.exc import OperationalError
from sqlalchemy.orm import Session, sessionmaker
from sqlalchemy.schema import CreateTable
from promptflow._sdk._configuration import Configuration
from promptflow._sdk._constants import (
CONNECTION_TABLE_NAME,
EXP_NODE_RUN_TABLE_NAME,
EXPERIMENT_CREATED_ON_INDEX_NAME,
EXPERIMENT_TABLE_NAME,
LOCAL_MGMT_DB_PATH,
LOCAL_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH,
ORCHESTRATOR_TABLE_NAME,
RUN_INFO_CREATED_ON_INDEX_NAME,
RUN_INFO_TABLENAME,
SCHEMA_INFO_TABLENAME,
SPAN_TABLENAME,
TRACE_MGMT_DB_PATH,
TRACE_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH,
)
from promptflow._sdk._utils import (
get_promptflow_sdk_version,
print_red_error,
print_yellow_warning,
use_customized_encryption_key,
)
# though we have removed the upper bound of SQLAlchemy version in setup.py
# still silence RemovedIn20Warning to avoid unexpected warning message printed to users
# for those who still use SQLAlchemy<2.0.0
os.environ["SQLALCHEMY_SILENCE_UBER_WARNING"] = "1"
session_maker = None
trace_session_maker = None
lock = FileLock(LOCAL_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH)
trace_lock = FileLock(TRACE_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH)
def support_transaction(engine):
# workaround to make SQLite support transaction; reference to SQLAlchemy doc:
# https://docs.sqlalchemy.org/en/20/dialects/sqlite.html#serializable-isolation-savepoints-transactional-ddl
@event.listens_for(engine, "connect")
def do_connect(db_api_connection, connection_record):
# disable pysqlite emitting of the BEGIN statement entirely.
# also stops it from emitting COMMIT before any DDL.
db_api_connection.isolation_level = None
@event.listens_for(engine, "begin")
def do_begin(conn):
# emit our own BEGIN
conn.exec_driver_sql("BEGIN")
return engine
def update_current_schema(engine, orm_class, tablename: str) -> None:
sql = f"REPLACE INTO {SCHEMA_INFO_TABLENAME} (tablename, version) VALUES (:tablename, :version);"
with engine.begin() as connection:
connection.execute(text(sql), {"tablename": tablename, "version": orm_class.__pf_schema_version__})
return
def mgmt_db_session() -> Session:
global session_maker
global lock
if session_maker is not None:
return session_maker()
lock.acquire()
try: # try-finally to always release lock
if session_maker is not None:
return session_maker()
if not LOCAL_MGMT_DB_PATH.parent.is_dir():
LOCAL_MGMT_DB_PATH.parent.mkdir(parents=True, exist_ok=True)
engine = create_engine(f"sqlite:///{str(LOCAL_MGMT_DB_PATH)}?check_same_thread=False", future=True)
engine = support_transaction(engine)
from promptflow._sdk._orm import Connection, Experiment, ExperimentNodeRun, Orchestrator, RunInfo
create_or_update_table(engine, orm_class=RunInfo, tablename=RUN_INFO_TABLENAME)
create_table_if_not_exists(engine, CONNECTION_TABLE_NAME, Connection)
create_table_if_not_exists(engine, ORCHESTRATOR_TABLE_NAME, Orchestrator)
create_table_if_not_exists(engine, EXP_NODE_RUN_TABLE_NAME, ExperimentNodeRun)
create_index_if_not_exists(engine, RUN_INFO_CREATED_ON_INDEX_NAME, RUN_INFO_TABLENAME, "created_on")
if Configuration.get_instance().is_internal_features_enabled():
create_or_update_table(engine, orm_class=Experiment, tablename=EXPERIMENT_TABLE_NAME)
create_index_if_not_exists(engine, EXPERIMENT_CREATED_ON_INDEX_NAME, EXPERIMENT_TABLE_NAME, "created_on")
session_maker = sessionmaker(bind=engine)
except Exception as e: # pylint: disable=broad-except
# if we cannot manage to create the connection to the management database
# we can barely do nothing but raise the exception with printing the error message
error_message = f"Failed to create management database: {str(e)}"
print_red_error(error_message)
raise
finally:
lock.release()
return session_maker()
def build_copy_sql(old_name: str, new_name: str, old_columns: List[str], new_columns: List[str]) -> str:
insert_stmt = f"INSERT INTO {new_name}"
# append some NULLs for new columns
columns = old_columns.copy() + ["NULL"] * (len(new_columns) - len(old_columns))
select_stmt = "SELECT " + ", ".join(columns) + f" FROM {old_name}"
sql = f"{insert_stmt} {select_stmt};"
return sql
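The column-padding copy SQL that `build_copy_sql` generates during schema migration can be seen with a small standalone run; the table and column names below are illustrative:

```python
# Standalone copy of build_copy_sql above, so the generated SQL can be inspected.
from typing import List

def build_copy_sql(old_name: str, new_name: str, old_columns: List[str], new_columns: List[str]) -> str:
    insert_stmt = f"INSERT INTO {new_name}"
    # pad with NULLs so rows from the narrower old schema fit the new one
    columns = old_columns.copy() + ["NULL"] * (len(new_columns) - len(old_columns))
    select_stmt = "SELECT " + ", ".join(columns) + f" FROM {old_name}"
    return f"{insert_stmt} {select_stmt};"

sql = build_copy_sql("run_info_v1", "run_info", ["id", "name"], ["id", "name", "tags"])
# → "INSERT INTO run_info SELECT id, name, NULL FROM run_info_v1;"
```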
def generate_legacy_tablename(engine, tablename: str) -> str:
try:
with engine.connect() as connection:
result = connection.execute(
text(f"SELECT version FROM {SCHEMA_INFO_TABLENAME} where tablename=(:tablename)"),
{"tablename": tablename},
).first()
current_schema_version = result[0]
except (OperationalError, TypeError):
# schema info table not exists(OperationalError) or no version for the table(TypeError)
# legacy tablename fallbacks to "v0_{timestamp}" - use timestamp to avoid duplication
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S_%f")
current_schema_version = f"0_{timestamp}"
return f"{tablename}_v{current_schema_version}"
def get_db_schema_version(engine, tablename: str) -> int:
try:
with engine.connect() as connection:
result = connection.execute(
text(f"SELECT version FROM {SCHEMA_INFO_TABLENAME} where tablename=(:tablename)"),
{"tablename": tablename},
).first()
return int(result[0])
except (OperationalError, TypeError):
# schema info table not exists(OperationalError) or no version for the table(TypeError)
# version fallbacks to 0
return 0
def create_or_update_table(engine, orm_class, tablename: str) -> None:
# create schema_info table if not exists
sql = f"CREATE TABLE IF NOT EXISTS {SCHEMA_INFO_TABLENAME} (tablename TEXT PRIMARY KEY, version TEXT NOT NULL);"
with engine.begin() as connection:
connection.execute(text(sql))
# no table in database yet
# create table via ORM class and update schema info
if not inspect(engine).has_table(tablename):
orm_class.metadata.create_all(engine)
update_current_schema(engine, orm_class, tablename)
return
db_schema_version = get_db_schema_version(engine, tablename)
sdk_schema_version = int(orm_class.__pf_schema_version__)
# same schema, no action needed
if db_schema_version == sdk_schema_version:
return
elif db_schema_version > sdk_schema_version:
# schema in database is later than SDK schema
# though different, we have design principal that later schema will always have no less columns
# while new columns should be nullable or with default value
# so that older schema can always use existing schema
# we print warning message but not do other action
warning_message = (
f"We have noticed that you are using an older SDK version: {get_promptflow_sdk_version()!r}.\n"
"While we will do our best to ensure compatibility, "
"we highly recommend upgrading to the latest version of SDK for the best experience."
)
print_yellow_warning(warning_message)
return
else:
# schema in database is older than SDK schema
# so we have to create table with new schema
# in this case, we need to:
# 1. rename existing table name
# 2. create table with current schema
# 3. copy data from renamed table to new table
legacy_tablename = generate_legacy_tablename(engine, tablename)
rename_sql = f"ALTER TABLE {tablename} RENAME TO {legacy_tablename};"
create_table_sql = str(CreateTable(orm_class.__table__).compile(engine))
copy_sql = build_copy_sql(
old_name=legacy_tablename,
new_name=tablename,
old_columns=[column["name"] for column in inspect(engine).get_columns(tablename)],
new_columns=[column.name for column in orm_class.__table__.columns],
)
# note that we should do above in one transaction
with engine.begin() as connection:
connection.execute(text(rename_sql))
connection.execute(text(create_table_sql))
connection.execute(text(copy_sql))
# update schema info finally
update_current_schema(engine, orm_class, tablename)
return
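The rename-create-copy migration that `create_or_update_table` performs can be sketched with the stdlib `sqlite3` driver; the table and column names are illustrative, not the real promptflow schema:

```python
# Minimal sqlite3 sketch of the rename-create-copy migration above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE run_info (id TEXT PRIMARY KEY, name TEXT);
    INSERT INTO run_info VALUES ('r1', 'first run');
""")
with conn:  # one transaction for rename + create + copy, as the code above requires
    conn.execute("ALTER TABLE run_info RENAME TO run_info_v0;")
    conn.execute("CREATE TABLE run_info (id TEXT PRIMARY KEY, name TEXT, tags TEXT);")
    conn.execute("INSERT INTO run_info SELECT id, name, NULL FROM run_info_v0;")
rows = conn.execute("SELECT id, name, tags FROM run_info").fetchall()
# rows == [('r1', 'first run', None)]
```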
def create_table_if_not_exists(engine, table_name, orm_class) -> None:
if inspect(engine).has_table(table_name):
return
    try:
orm_class.metadata.create_all(engine)
except OperationalError as e:
# only ignore error if table already exists
expected_error_message = f"table {table_name} already exists"
if expected_error_message not in str(e):
raise
def create_index_if_not_exists(engine, index_name, table_name, col_name) -> None:
# created_on
sql = f"CREATE INDEX IF NOT EXISTS {index_name} ON {table_name} (f{col_name});"
with engine.begin() as connection:
connection.execute(text(sql))
return
@contextmanager
def mgmt_db_rebase(mgmt_db_path: Union[Path, os.PathLike, str], customized_encryption_key: str = None) -> Session:
"""
This function will change the constant LOCAL_MGMT_DB_PATH to the new path so very dangerous.
It is created for pf flow export only and need to be removed in further version.
"""
global session_maker
global LOCAL_MGMT_DB_PATH
origin_local_db_path = LOCAL_MGMT_DB_PATH
LOCAL_MGMT_DB_PATH = mgmt_db_path
session_maker = None
    try:
        if customized_encryption_key:
            with use_customized_encryption_key(customized_encryption_key):
                yield
        else:
            yield
    finally:
        # always restore the original DB path, even if the caller raised
        LOCAL_MGMT_DB_PATH = origin_local_db_path
        session_maker = None
def trace_mgmt_db_session() -> Session:
global trace_session_maker
global trace_lock
if trace_session_maker is not None:
return trace_session_maker()
trace_lock.acquire()
try:
if trace_session_maker is not None:
return trace_session_maker()
if not TRACE_MGMT_DB_PATH.parent.is_dir():
TRACE_MGMT_DB_PATH.parent.mkdir(parents=True, exist_ok=True)
engine = create_engine(f"sqlite:///{str(TRACE_MGMT_DB_PATH)}", future=True)
engine = support_transaction(engine)
if not inspect(engine).has_table(SPAN_TABLENAME):
from promptflow._sdk._orm.trace import Span
Span.metadata.create_all(engine)
trace_session_maker = sessionmaker(bind=engine)
except Exception as e: # pylint: disable=broad-except
# if we cannot manage to create the connection to the OpenTelemetry management database
# we can barely do nothing but raise the exception with printing the error message
error_message = f"Failed to create OpenTelemetry management database: {str(e)}"
print_red_error(error_message)
raise
finally:
trace_lock.release()
return trace_session_maker()
| promptflow/src/promptflow/promptflow/_sdk/_orm/session.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_orm/session.py",
"repo_id": "promptflow",
"token_count": 4506
} | 12 |
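The SQLite transaction workaround in `support_transaction` above (disable pysqlite's implicit BEGIN, emit our own) can be sketched with the stdlib driver instead of SQLAlchemy events; the table name is illustrative:

```python
# Sketch of the SQLite transaction workaround, using sqlite3 directly.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # stop pysqlite from emitting implicit BEGIN/COMMIT
cur = conn.cursor()
cur.execute("CREATE TABLE schema_info (tablename TEXT PRIMARY KEY, version TEXT NOT NULL)")
cur.execute("BEGIN")  # emit our own BEGIN, as the do_begin hook above does
cur.execute("INSERT INTO schema_info VALUES ('run_info', '1')")
cur.execute("ROLLBACK")  # the insert is undone because we control the transaction
count = cur.execute("SELECT COUNT(*) FROM schema_info").fetchone()[0]
# count == 0
```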
packageName: Promptflow.Core.PfsClient
packageVersion: 0.0.1
targetFramework: netstandard2.0
optionalProjectFile: false
| promptflow/src/promptflow/promptflow/_sdk/_service/generator_configs/csharp.yaml/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_service/generator_configs/csharp.yaml",
"repo_id": "promptflow",
"token_count": 37
} | 13 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from pathlib import Path
from typing import Callable, Union
from promptflow import PFClient
from promptflow._constants import LINE_NUMBER_KEY
from promptflow._sdk._load_functions import load_flow
from promptflow._sdk._serving._errors import UnexpectedConnectionProviderReturn, UnsupportedConnectionProvider
from promptflow._sdk._serving.flow_result import FlowResult
from promptflow._sdk._serving.utils import validate_request_data
from promptflow._sdk._utils import (
dump_flow_result,
get_local_connections_from_executable,
override_connection_config_with_environment_variable,
resolve_connections_environment_variable_reference,
update_environment_variables_with_connections,
)
from promptflow._sdk.entities._connection import _Connection
from promptflow._sdk.entities._flow import Flow
from promptflow._sdk.operations._flow_operations import FlowOperations
from promptflow._utils.logger_utils import LoggerFactory
from promptflow._utils.multimedia_utils import convert_multimedia_data_to_base64, persist_multimedia_data
from promptflow.contracts.flow import Flow as ExecutableFlow
from promptflow.executor import FlowExecutor
from promptflow.storage._run_storage import DefaultRunStorage
class FlowInvoker:
"""
The invoker of a flow.
:param flow: The path of the flow, or the flow loaded by load_flow().
    :type flow: Union[str, ~promptflow._sdk.entities._flow.Flow]
:param connection_provider: The connection provider, defaults to None
    :type connection_provider: Union[str, Callable], optional
    :param streaming: A bool, or a zero-argument callable returning a bool, that controls whether streaming is enabled; defaults to False
:type streaming: Union[Callable[[], bool], bool], optional
:param connections: Pre-resolved connections used when executing, defaults to None
:type connections: dict, optional
:param connections_name_overrides: The connection name overrides, defaults to None
Example: ``{"aoai_connection": "azure_open_ai_connection"}``
The node with reference to connection 'aoai_connection' will be resolved to the actual connection 'azure_open_ai_connection'. # noqa: E501
:type connections_name_overrides: dict, optional
:param raise_ex: Whether to raise exception when executing flow, defaults to True
:type raise_ex: bool, optional
"""
def __init__(
self,
        flow: Union[str, Flow],
        connection_provider: Union[str, Callable] = None,
streaming: Union[Callable[[], bool], bool] = False,
connections: dict = None,
connections_name_overrides: dict = None,
raise_ex: bool = True,
**kwargs,
):
self.logger = kwargs.get("logger", LoggerFactory.get_logger("flowinvoker"))
self.flow_entity = flow if isinstance(flow, Flow) else load_flow(source=flow)
self._executable_flow = ExecutableFlow._from_dict(
flow_dag=self.flow_entity._data, working_dir=self.flow_entity.code
)
self.connections = connections or {}
self.connections_name_overrides = connections_name_overrides or {}
self.raise_ex = raise_ex
self.storage = kwargs.get("storage", None)
self.streaming = streaming if isinstance(streaming, Callable) else lambda: streaming
# Pass dump_to path to dump flow result for extension.
self._dump_to = kwargs.get("dump_to", None)
# The credential is used as an option to override
# DefaultAzureCredential when using workspace connection provider
self._credential = kwargs.get("credential", None)
self._init_connections(connection_provider)
self._init_executor()
self.flow = self.executor._flow
self._dump_file_prefix = "chat" if self._is_chat_flow else "flow"
def _init_connections(self, connection_provider):
self._is_chat_flow, _, _ = FlowOperations._is_chat_flow(self._executable_flow)
connection_provider = "local" if connection_provider is None else connection_provider
if isinstance(connection_provider, str):
self.logger.info(f"Getting connections from pf client with provider {connection_provider}...")
connections_to_ignore = list(self.connections.keys())
connections_to_ignore.extend(self.connections_name_overrides.keys())
# Note: The connection here could be local or workspace, depends on the connection.provider in pf.yaml.
connections = get_local_connections_from_executable(
executable=self._executable_flow,
client=PFClient(config={"connection.provider": connection_provider}, credential=self._credential),
connections_to_ignore=connections_to_ignore,
# fetch connections with name override
connections_to_add=list(self.connections_name_overrides.values()),
)
# use original name for connection with name override
override_name_to_original_name_mapping = {v: k for k, v in self.connections_name_overrides.items()}
for name, conn in connections.items():
if name in override_name_to_original_name_mapping:
self.connections[override_name_to_original_name_mapping[name]] = conn
else:
self.connections[name] = conn
elif isinstance(connection_provider, Callable):
self.logger.info("Getting connections from custom connection provider...")
connection_list = connection_provider()
if not isinstance(connection_list, list):
raise UnexpectedConnectionProviderReturn(
f"Connection provider {connection_provider} should return a list of connections."
)
if any(not isinstance(item, _Connection) for item in connection_list):
raise UnexpectedConnectionProviderReturn(
f"All items returned by {connection_provider} should be connection type, got {connection_list}."
)
# TODO(2824058): support connection provider when executing function
connections = {item.name: item.to_execution_connection_dict() for item in connection_list}
self.connections.update(connections)
else:
raise UnsupportedConnectionProvider(connection_provider)
override_connection_config_with_environment_variable(self.connections)
resolve_connections_environment_variable_reference(self.connections)
update_environment_variables_with_connections(self.connections)
self.logger.info(f"Promptflow get connections successfully. keys: {self.connections.keys()}")
def _init_executor(self):
self.logger.info("Promptflow executor starts initializing...")
storage = None
if self._dump_to:
storage = DefaultRunStorage(base_dir=self._dump_to, sub_dir=Path(".promptflow/intermediate"))
else:
storage = self.storage
self.executor = FlowExecutor._create_from_flow(
flow=self._executable_flow,
working_dir=self.flow_entity.code,
connections=self.connections,
raise_ex=self.raise_ex,
storage=storage,
)
self.executor.enable_streaming_for_llm_flow(self.streaming)
self.logger.info("Promptflow executor initiated successfully.")
def _invoke(self, data: dict, run_id=None, disable_input_output_logging=False):
"""
Process a flow request in the runtime.
:param data: The request data dict with flow input as keys, for example: {"question": "What is ChatGPT?"}.
:type data: dict
:param run_id: The run id of the flow request, defaults to None
:type run_id: str, optional
:return: The result of executor.
:rtype: ~promptflow.executor._result.LineResult
"""
log_data = "<REDACTED>" if disable_input_output_logging else data
self.logger.info(f"Validating flow input with data {log_data!r}")
validate_request_data(self.flow, data)
self.logger.info(f"Execute flow with data {log_data!r}")
# Pass index 0 as extension require for dumped result.
# TODO: Remove this index after extension remove this requirement.
result = self.executor.exec_line(data, index=0, run_id=run_id, allow_generator_output=self.streaming())
if LINE_NUMBER_KEY in result.output:
# Remove line number from output
del result.output[LINE_NUMBER_KEY]
return result
def invoke(self, data: dict, run_id=None, disable_input_output_logging=False):
"""
Process a flow request in the runtime and return the output of the executor.
:param data: The request data dict with flow input as keys, for example: {"question": "What is ChatGPT?"}.
:type data: dict
:return: The flow output dict, for example: {"answer": "ChatGPT is a chatbot."}.
:rtype: dict
"""
result = self._invoke(data, run_id=run_id, disable_input_output_logging=disable_input_output_logging)
# Get base64 for multi modal object
resolved_outputs = self._convert_multimedia_data_to_base64(result)
self._dump_invoke_result(result)
log_outputs = "<REDACTED>" if disable_input_output_logging else result.output
self.logger.info(f"Flow run result: {log_outputs}")
if not self.raise_ex:
# If raise_ex is False, we will return the trace flow & node run info.
return FlowResult(
output=resolved_outputs or {},
run_info=result.run_info,
node_run_infos=result.node_run_infos,
)
return resolved_outputs
def _convert_multimedia_data_to_base64(self, invoke_result):
resolved_outputs = {
k: convert_multimedia_data_to_base64(v, with_type=True, dict_type=True)
for k, v in invoke_result.output.items()
}
return resolved_outputs
def _dump_invoke_result(self, invoke_result):
if self._dump_to:
invoke_result.output = persist_multimedia_data(
invoke_result.output, base_dir=self._dump_to, sub_dir=Path(".promptflow/output")
)
dump_flow_result(flow_folder=self._dump_to, flow_result=invoke_result, prefix=self._dump_file_prefix)
| promptflow/src/promptflow/promptflow/_sdk/_serving/flow_invoker.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/flow_invoker.py",
"repo_id": "promptflow",
"token_count": 3999
} | 14 |
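The `streaming` handling in `FlowInvoker.__init__` above normalizes a bool into a zero-argument callable so the executor can always call it. A minimal standalone sketch of that normalization:

```python
# Sketch of FlowInvoker's streaming normalization, extracted for illustration.
from typing import Callable, Union

def normalize_streaming(streaming: Union[Callable[[], bool], bool]) -> Callable[[], bool]:
    # a bool is wrapped into a callable; a callable is passed through unchanged
    return streaming if isinstance(streaming, Callable) else lambda: streaming

always = normalize_streaming(True)
toggle = normalize_streaming(lambda: False)
```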
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# this file is a middle layer between the local SDK and executor, it'll have some similar logic with cloud PFS.
import contextlib
import hashlib
import json
import os
import platform
import re
import subprocess
import tempfile
import time
from collections import defaultdict
from os import PathLike
from pathlib import Path
from time import sleep
from types import GeneratorType
from typing import Any, Dict, List
import psutil
import pydash
from dotenv import load_dotenv
from pydash import objects
from promptflow._constants import STREAMING_ANIMATION_TIME
from promptflow._sdk._constants import (
ALL_CONNECTION_TYPES,
DEFAULT_VAR_ID,
INPUTS,
NODE,
NODE_VARIANTS,
NODES,
SUPPORTED_CONNECTION_FIELDS,
USE_VARIANTS,
VARIANTS,
ConnectionFields,
)
from promptflow._sdk._errors import InvalidFlowError, RunOperationError
from promptflow._sdk._load_functions import load_flow
from promptflow._sdk._utils import (
_merge_local_code_and_additional_includes,
get_local_connections_from_executable,
get_used_connection_names_from_dict,
update_dict_value_with_connections,
)
from promptflow._sdk.entities._eager_flow import EagerFlow
from promptflow._sdk.entities._flow import Flow, ProtectedFlow
from promptflow._utils.context_utils import _change_working_dir
from promptflow._utils.flow_utils import dump_flow_dag, load_flow_dag
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow.contracts.flow import Flow as ExecutableFlow
logger = get_cli_sdk_logger()
def overwrite_variant(flow_dag: dict, tuning_node: str = None, variant: str = None, drop_node_variants: bool = False):
# need to overwrite default variant if tuning node and variant not specified.
# check tuning_node & variant
node_name_2_node = {node["name"]: node for node in flow_dag[NODES]}
if tuning_node and tuning_node not in node_name_2_node:
raise InvalidFlowError(f"Node {tuning_node} not found in flow")
if tuning_node and variant:
try:
flow_dag[NODE_VARIANTS][tuning_node][VARIANTS][variant]
except KeyError as e:
raise InvalidFlowError(f"Variant {variant} not found for node {tuning_node}") from e
try:
node_variants = flow_dag.pop(NODE_VARIANTS, {}) if drop_node_variants else flow_dag.get(NODE_VARIANTS, {})
updated_nodes = []
for node in flow_dag.get(NODES, []):
if not node.get(USE_VARIANTS, False):
updated_nodes.append(node)
continue
# update variant
node_name = node["name"]
if node_name not in node_variants:
raise InvalidFlowError(f"No variant for the node {node_name}.")
variants_cfg = node_variants[node_name]
variant_id = variant if node_name == tuning_node else None
if not variant_id:
if DEFAULT_VAR_ID not in variants_cfg:
raise InvalidFlowError(f"Default variant id is not specified for {node_name}.")
variant_id = variants_cfg[DEFAULT_VAR_ID]
if variant_id not in variants_cfg.get(VARIANTS, {}):
raise InvalidFlowError(f"Cannot find the variant {variant_id} for {node_name}.")
variant_cfg = variants_cfg[VARIANTS][variant_id][NODE]
updated_nodes.append({"name": node_name, **variant_cfg})
flow_dag[NODES] = updated_nodes
except KeyError as e:
raise InvalidFlowError("Failed to overwrite tuning node with variant") from e
def overwrite_connections(flow_dag: dict, connections: dict, working_dir: PathLike):
if not connections:
return
if not isinstance(connections, dict):
raise InvalidFlowError(f"Invalid connections overwrite format: {connections}, only list is supported.")
# Load executable flow to check if connection is LLM connection
executable_flow = ExecutableFlow._from_dict(flow_dag=flow_dag, working_dir=Path(working_dir))
node_name_2_node = {node["name"]: node for node in flow_dag[NODES]}
for node_name, connection_dict in connections.items():
if node_name not in node_name_2_node:
raise InvalidFlowError(f"Node {node_name} not found in flow")
if not isinstance(connection_dict, dict):
raise InvalidFlowError(f"Invalid connection overwrite format: {connection_dict}, only dict is supported.")
node = node_name_2_node[node_name]
executable_node = executable_flow.get_node(node_name=node_name)
if executable_flow.is_llm_node(executable_node):
unsupported_keys = connection_dict.keys() - SUPPORTED_CONNECTION_FIELDS
if unsupported_keys:
raise InvalidFlowError(
f"Unsupported llm connection overwrite keys: {unsupported_keys},"
f" only {SUPPORTED_CONNECTION_FIELDS} are supported."
)
try:
connection = connection_dict.get(ConnectionFields.CONNECTION)
if connection:
node[ConnectionFields.CONNECTION] = connection
deploy_name = connection_dict.get(ConnectionFields.DEPLOYMENT_NAME)
if deploy_name:
node[INPUTS][ConnectionFields.DEPLOYMENT_NAME] = deploy_name
except KeyError as e:
raise InvalidFlowError(
f"Failed to overwrite llm node {node_name} with connections {connections}"
) from e
else:
connection_inputs = executable_flow.get_connection_input_names_for_node(node_name=node_name)
for c, v in connection_dict.items():
if c not in connection_inputs:
raise InvalidFlowError(f"Connection with name {c} not found in node {node_name}'s inputs")
node[INPUTS][c] = v
def overwrite_flow(flow_dag: dict, params_overrides: dict):
if not params_overrides:
return
# update flow dag & change nodes list to name: obj dict
flow_dag[NODES] = {node["name"]: node for node in flow_dag[NODES]}
# apply overrides on flow dag
for param, val in params_overrides.items():
objects.set_(flow_dag, param, val)
# revert nodes to list
flow_dag[NODES] = list(flow_dag[NODES].values())
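The re-keying trick in `overwrite_flow` above (nodes list to name-keyed dict, apply dotted-path overrides, restore to list) can be sketched standalone. `set_by_path` below is a minimal stand-in for `pydash.objects.set_`, and the node names are illustrative:

```python
# Standalone sketch of the overwrite_flow pattern above.
def set_by_path(obj: dict, path: str, value):
    # minimal stand-in for pydash.objects.set_ with dot-separated keys
    keys = path.split(".")
    for key in keys[:-1]:
        obj = obj.setdefault(key, {})
    obj[keys[-1]] = value

flow_dag = {"nodes": [{"name": "joke", "inputs": {"topic": "cats"}}]}
# re-key nodes by name so an override path can address a specific node
flow_dag["nodes"] = {node["name"]: node for node in flow_dag["nodes"]}
set_by_path(flow_dag, "nodes.joke.inputs.topic", "dogs")
# revert nodes to a list, as the DAG format expects
flow_dag["nodes"] = list(flow_dag["nodes"].values())
```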
def remove_additional_includes(flow_path: Path):
flow_path, flow_dag = load_flow_dag(flow_path=flow_path)
flow_dag.pop("additional_includes", None)
dump_flow_dag(flow_dag, flow_path)
@contextlib.contextmanager
def variant_overwrite_context(
flow: Flow,
tuning_node: str = None,
variant: str = None,
connections: dict = None,
*,
overrides: dict = None,
drop_node_variants: bool = False,
):
"""Override variant and connections in the flow."""
flow_dag = flow._data
flow_dir_path = Path(flow.code)
if flow.additional_includes:
# Merge the flow folder and additional includes to temp folder for both eager flow & dag flow.
with _merge_local_code_and_additional_includes(code_path=flow_dir_path) as temp_dir:
if not isinstance(flow, EagerFlow):
# always overwrite variant since we need to overwrite default variant if not specified.
overwrite_variant(flow_dag, tuning_node, variant, drop_node_variants=drop_node_variants)
overwrite_connections(flow_dag, connections, working_dir=flow_dir_path)
overwrite_flow(flow_dag, overrides)
flow_dag.pop("additional_includes", None)
dump_flow_dag(flow_dag, Path(temp_dir))
flow = load_flow(temp_dir)
yield flow
elif isinstance(flow, EagerFlow):
        # eager flow doesn't support variant overwrite
yield flow
else:
# Generate a flow, the code path points to the original flow folder,
# the dag path points to the temp dag file after overwriting variant.
with tempfile.TemporaryDirectory() as temp_dir:
overwrite_variant(flow_dag, tuning_node, variant, drop_node_variants=drop_node_variants)
overwrite_connections(flow_dag, connections, working_dir=flow_dir_path)
overwrite_flow(flow_dag, overrides)
flow_path = dump_flow_dag(flow_dag, Path(temp_dir))
flow = ProtectedFlow(code=flow_dir_path, path=flow_path, dag=flow_dag)
yield flow
class SubmitterHelper:
@classmethod
def init_env(cls, environment_variables):
# TODO: remove when executor supports env vars in request
if isinstance(environment_variables, dict):
os.environ.update(environment_variables)
elif isinstance(environment_variables, (str, PathLike, Path)):
load_dotenv(environment_variables)
@staticmethod
def resolve_connections(flow: Flow, client=None, connections_to_ignore=None) -> dict:
# TODO 2856400: use resolve_used_connections instead of this function to avoid using executable in control-plane
from promptflow._sdk.entities._eager_flow import EagerFlow
from .._pf_client import PFClient
if isinstance(flow, EagerFlow):
# TODO(2898247): support prompt flow management connection for eager flow
return {}
client = client or PFClient()
with _change_working_dir(flow.code):
executable = ExecutableFlow.from_yaml(flow_file=flow.path, working_dir=flow.code)
executable.name = str(Path(flow.code).stem)
return get_local_connections_from_executable(
executable=executable, client=client, connections_to_ignore=connections_to_ignore
)
@staticmethod
def resolve_used_connections(flow: ProtectedFlow, tools_meta: dict, client, connections_to_ignore=None) -> dict:
from .._pf_client import PFClient
client = client or PFClient()
connection_names = SubmitterHelper.get_used_connection_names(tools_meta=tools_meta, flow_dag=flow._data)
connections_to_ignore = connections_to_ignore or []
result = {}
for n in connection_names:
if n not in connections_to_ignore:
conn = client.connections.get(name=n, with_secrets=True)
result[n] = conn._to_execution_connection_dict()
return result
@staticmethod
def get_used_connection_names(tools_meta: dict, flow_dag: dict):
# TODO: handle code tool meta for python
connection_inputs = defaultdict(set)
for package_id, package_meta in tools_meta.get("package", {}).items():
for tool_input_key, tool_input_meta in package_meta.get("inputs", {}).items():
                if ALL_CONNECTION_TYPES.intersection(set(tool_input_meta.get("type", []))):
connection_inputs[package_id].add(tool_input_key)
connection_names = set()
# TODO: we assume that all variants are resolved here
# TODO: only literal connection inputs are supported
        # TODO: check whether we should put this logic in the executor, as it seems impossible to avoid
        # touching executable information here
for node in flow_dag.get("nodes", []):
package_id = pydash.get(node, "source.tool")
if package_id in connection_inputs:
for connection_input in connection_inputs[package_id]:
connection_name = pydash.get(node, f"inputs.{connection_input}")
if connection_name and not re.match(r"\${.*}", connection_name):
connection_names.add(connection_name)
return list(connection_names)
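# A simplified, self-contained sketch of what get_used_connection_names does:
# collect connection-typed inputs per tool from the tools meta, then pull
# literal (non-"${...}") values for those inputs from the dag nodes. The tool
# and input names below are made up for illustration.

```python
import re

def used_connection_names(tools_meta: dict, flow_dag: dict) -> set:
    # Map each package tool id to its connection-typed input names.
    conn_inputs = {
        tool_id: {
            name for name, meta in tool.get("inputs", {}).items()
            if "AzureOpenAIConnection" in meta.get("type", [])
        }
        for tool_id, tool in tools_meta.get("package", {}).items()
    }
    names = set()
    for node in flow_dag.get("nodes", []):
        tool_id = node.get("source", {}).get("tool")
        for inp in conn_inputs.get(tool_id, ()):
            value = node.get("inputs", {}).get(inp)
            if value and not re.match(r"\$\{.*\}", value):  # skip references
                names.add(value)
    return names

tools_meta = {"package": {"llm_tool": {"inputs": {"connection": {"type": ["AzureOpenAIConnection"]}}}}}
flow_dag = {"nodes": [
    {"source": {"tool": "llm_tool"}, "inputs": {"connection": "my_aoai_conn"}},
    {"source": {"tool": "llm_tool"}, "inputs": {"connection": "${inputs.conn}"}},
]}
```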
@classmethod
def load_and_resolve_environment_variables(cls, flow: Flow, environment_variable_overrides: dict, client=None):
environment_variable_overrides = ExecutableFlow.load_env_variables(
flow_file=flow.path, working_dir=flow.code, environment_variables_overrides=environment_variable_overrides
)
cls.resolve_environment_variables(environment_variable_overrides, client)
return environment_variable_overrides
@classmethod
def resolve_environment_variables(cls, environment_variables: dict, client=None):
from .._pf_client import PFClient
client = client or PFClient()
if not environment_variables:
return None
connection_names = get_used_connection_names_from_dict(environment_variables)
logger.debug("Used connection names: %s", connection_names)
connections = cls.resolve_connection_names(connection_names=connection_names, client=client)
update_dict_value_with_connections(built_connections=connections, connection_dict=environment_variables)
@staticmethod
def resolve_connection_names(connection_names, client, raise_error=False):
result = {}
for n in connection_names:
try:
conn = client.connections.get(name=n, with_secrets=True)
result[n] = conn._to_execution_connection_dict()
except Exception as e:
if raise_error:
raise e
return result
def show_node_log_and_output(node_run_infos, show_node_output, generator_record):
"""Show stdout and output of nodes."""
from colorama import Fore
for node_name, node_result in node_run_infos.items():
# Prefix of node stdout is "%Y-%m-%dT%H:%M:%S%z"
pattern = r"\[\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\+\d{4}\] "
if node_result.logs:
node_logs = re.sub(pattern, "", node_result.logs["stdout"])
if node_logs:
for log in node_logs.rstrip("\n").split("\n"):
print(f"{Fore.LIGHTBLUE_EX}[{node_name}]:", end=" ")
print(log)
if show_node_output:
print(f"{Fore.CYAN}{node_name}: ", end="")
# TODO executor return a type string of generator
node_output = node_result.output
if isinstance(node_result.output, GeneratorType):
node_output = "".join(
resolve_generator_output_with_cache(
node_output, generator_record, generator_key=f"nodes.{node_name}.output"
)
)
print(f"{Fore.LIGHTWHITE_EX}{node_output}")
def print_chat_output(output, generator_record, *, generator_key: str):
if isinstance(output, GeneratorType):
for event in resolve_generator_output_with_cache(output, generator_record, generator_key=generator_key):
print(event, end="")
# For better animation effects
time.sleep(STREAMING_ANIMATION_TIME)
# Print a new line at the end of the response
print()
else:
print(output)
def resolve_generator_output_with_cache(
output: GeneratorType, generator_record: Dict[str, Any], *, generator_key: str
) -> List[str]:
"""Get the output of a generator. If the generator has been recorded, return the recorded result. Otherwise, record
the result and return it.
We use a separate generator_key instead of the output itself as the key in the generator_record in case the output
is not a valid dict key in some cases.
:param output: The generator to get the output from.
:type output: GeneratorType
:param generator_record: The record of the generator.
:type generator_record: dict
:param generator_key: The key of the generator in the record, need to be unique.
:type generator_key: str
:return: The output of the generator.
:rtype: str
"""
if isinstance(output, GeneratorType):
if generator_key in generator_record:
if hasattr(generator_record[generator_key], "items"):
output = iter(generator_record[generator_key].items)
else:
output = iter(generator_record[generator_key])
else:
            if "proxy" in output.gi_frame.f_locals:
proxy = output.gi_frame.f_locals["proxy"]
generator_record[generator_key] = proxy
else:
generator_record[generator_key] = list(output)
output = generator_record[generator_key]
return output
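# A minimal sketch of the record-and-replay idea behind
# resolve_generator_output_with_cache: the first consumer exhausts the
# generator into the record under a unique key; later calls with the same key
# replay the cached items instead of re-consuming a spent generator. The key
# string below follows the "run.outputs.<name>" convention used above.

```python
from types import GeneratorType

def replay_generator(output, record: dict, *, key: str):
    if not isinstance(output, GeneratorType):
        return output
    if key not in record:
        record[key] = list(output)  # exhaust once and cache
    return list(record[key])        # always serve from the cache

def tokens():
    yield "hello"
    yield " world"

record = {}
first = "".join(replay_generator(tokens(), record, key="run.outputs.answer"))
second = "".join(replay_generator(tokens(), record, key="run.outputs.answer"))
```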
def resolve_generator(flow_result, generator_record):
# resolve generator in flow result
for k, v in flow_result.run_info.output.items():
if isinstance(v, GeneratorType):
flow_output = "".join(
resolve_generator_output_with_cache(v, generator_record, generator_key=f"run.outputs.{k}")
)
flow_result.run_info.output[k] = flow_output
flow_result.run_info.result[k] = flow_output
flow_result.output[k] = flow_output
# resolve generator in node outputs
for node_name, node in flow_result.node_run_infos.items():
if isinstance(node.output, GeneratorType):
node_output = "".join(
resolve_generator_output_with_cache(
node.output, generator_record, generator_key=f"nodes.{node_name}.output"
)
)
node.output = node_output
node.result = node_output
return flow_result
# region start experiment utils
def _start_process_in_background(args, executable_path=None):
if platform.system() == "Windows":
os.spawnve(os.P_DETACH, executable_path, args, os.environ)
else:
subprocess.Popen(" ".join(["nohup"] + args + ["&"]), shell=True, env=os.environ)
def _windows_stop_handler(experiment_name, post_process):
import win32pipe
# Create a named pipe to receive the cancel signal.
pipe_name = r"\\.\pipe\{}".format(experiment_name)
pipe = win32pipe.CreateNamedPipe(
pipe_name,
win32pipe.PIPE_ACCESS_DUPLEX,
win32pipe.PIPE_TYPE_MESSAGE | win32pipe.PIPE_WAIT,
1,
65536,
65536,
0,
None,
)
# Wait for connection to stop orchestrator
win32pipe.ConnectNamedPipe(pipe, None)
post_process()
def _calculate_snapshot(column_mapping, input_data, flow_path):
def calculate_files_content_hash(file_path):
file_content = {}
if not isinstance(file_path, (str, PathLike)) or not Path(file_path).exists():
return file_path
if Path(file_path).is_file():
with open(file_path, "r") as f:
file_content[file_path] = hashlib.md5(f.read().encode("utf8")).hexdigest()
else:
for root, dirs, files in os.walk(file_path):
for ignore_item in ["__pycache__"]:
if ignore_item in dirs:
dirs.remove(ignore_item)
                for file in files:
                    with open(os.path.join(root, file), "r") as f:
                        relative_path = (Path(root) / file).relative_to(Path(file_path)).as_posix()
                        file_content[relative_path] = hashlib.md5(f.read().encode("utf8")).hexdigest()
return hashlib.md5(json.dumps(file_content, sort_keys=True).encode("utf-8")).hexdigest()
snapshot_content = {
"column_mapping": column_mapping,
"inputs": {key: calculate_files_content_hash(value) for key, value in input_data.items()},
"code": calculate_files_content_hash(flow_path),
}
return hashlib.md5(json.dumps(snapshot_content, sort_keys=True).encode("utf-8")).hexdigest()
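# A hedged sketch of the snapshot-hash trick used by _calculate_snapshot:
# dumping with sort_keys=True makes the md5 digest deterministic regardless of
# dict insertion order, so identical inputs/code map to the same snapshot id.

```python
import hashlib
import json

def dict_digest(snapshot: dict) -> str:
    payload = json.dumps(snapshot, sort_keys=True).encode("utf-8")
    return hashlib.md5(payload).hexdigest()

a = dict_digest({"column_mapping": {"q": "${data.q}"}, "code": "deadbeef"})
b = dict_digest({"code": "deadbeef", "column_mapping": {"q": "${data.q}"}})
```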
def _stop_orchestrator_process(orchestrator):
try:
if platform.system() == "Windows":
import win32file
# Connect to named pipe to stop the orchestrator process.
win32file.CreateFile(
r"\\.\pipe\{}".format(orchestrator.experiment_name),
win32file.GENERIC_READ | win32file.GENERIC_WRITE,
0,
None,
win32file.OPEN_EXISTING,
0,
None,
)
else:
# Send terminate signal to orchestrator process.
process = psutil.Process(orchestrator.pid)
process.terminate()
except psutil.NoSuchProcess:
        logger.debug("Experiment orchestrator process terminated abnormally.")
return
except Exception as e:
        raise RunOperationError(
            message=f"Failed to stop experiment: {e}",
        )
# Wait for status updated
try:
while True:
psutil.Process(orchestrator.pid)
sleep(1)
except psutil.NoSuchProcess:
logger.debug("Experiment status has been updated.")
# endregion
| file: promptflow/src/promptflow/promptflow/_sdk/_submitter/utils.py |
#!/bin/bash
# stop services created by runsv and propagate SIGINT, SIGTERM to child jobs
sv_stop() {
echo "$(date -uIns) - Stopping all runsv services"
    for s in /var/runit/*; do
        sv stop "$s"
    done
}
# register SIGINT, SIGTERM handler
trap sv_stop SIGINT SIGTERM
# start services in background and wait all child jobs
runsvdir /var/runit &
wait
| file: promptflow/src/promptflow/promptflow/_sdk/data/docker/start.sh.jinja2 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import abc
import importlib
import json
import types
from os import PathLike
from pathlib import Path
from typing import Dict, List, Union
from promptflow._core.token_provider import AzureTokenProvider, TokenProviderABC
from promptflow._sdk._constants import (
BASE_PATH_CONTEXT_KEY,
PARAMS_OVERRIDE_KEY,
SCHEMA_KEYS_CONTEXT_CONFIG_KEY,
SCHEMA_KEYS_CONTEXT_SECRET_KEY,
SCRUBBED_VALUE,
SCRUBBED_VALUE_NO_CHANGE,
SCRUBBED_VALUE_USER_INPUT,
ConfigValueType,
ConnectionType,
CustomStrongTypeConnectionConfigs,
)
from promptflow._sdk._errors import UnsecureConnectionError, SDKError
from promptflow._sdk._orm.connection import Connection as ORMConnection
from promptflow._sdk._utils import (
decrypt_secret_value,
encrypt_secret_value,
find_type_in_override,
in_jupyter_notebook,
print_yellow_warning,
snake_to_camel,
)
from promptflow._sdk.entities._yaml_translatable import YAMLTranslatableMixin
from promptflow._sdk.schemas._connection import (
AzureContentSafetyConnectionSchema,
AzureOpenAIConnectionSchema,
CognitiveSearchConnectionSchema,
CustomConnectionSchema,
CustomStrongTypeConnectionSchema,
FormRecognizerConnectionSchema,
OpenAIConnectionSchema,
QdrantConnectionSchema,
SerpConnectionSchema,
WeaviateConnectionSchema,
)
from promptflow._utils.logger_utils import LoggerFactory
from promptflow.contracts.types import Secret
from promptflow.exceptions import ValidationException, UserErrorException
logger = LoggerFactory.get_logger(name=__name__)
PROMPTFLOW_CONNECTIONS = "promptflow.connections"
class _Connection(YAMLTranslatableMixin):
"""A connection entity that stores the connection information.
:param name: Connection name
:type name: str
:param type: Possible values include: "OpenAI", "AzureOpenAI", "Custom".
:type type: str
:param module: The module of connection class, used for execution.
:type module: str
:param configs: The configs kv pairs.
:type configs: Dict[str, str]
:param secrets: The secrets kv pairs.
:type secrets: Dict[str, str]
"""
TYPE = ConnectionType._NOT_SET
def __init__(
self,
name: str = "default_connection",
module: str = "promptflow.connections",
configs: Dict[str, str] = None,
secrets: Dict[str, str] = None,
**kwargs,
):
self.name = name
self.type = self.TYPE
self.class_name = f"{self.TYPE.value}Connection" # The type in executor connection dict
self.configs = configs or {}
self.module = module
# Note the connection secrets value behaviors:
# --------------------------------------------------------------------------------
# | secret value | CLI create | CLI update | SDK create_or_update |
# --------------------------------------------------------------------------------
# | empty or all "*" | prompt input | use existing values | use existing values |
# | <no-change> | prompt input | use existing values | use existing values |
# | <user-input> | prompt input | prompt input | raise error |
# --------------------------------------------------------------------------------
self.secrets = secrets or {}
self._secrets = {**self.secrets} # Un-scrubbed secrets
self.expiry_time = kwargs.get("expiry_time", None)
self.created_date = kwargs.get("created_date", None)
self.last_modified_date = kwargs.get("last_modified_date", None)
# Conditional assignment to prevent entity bloat when unused.
print_as_yaml = kwargs.pop("print_as_yaml", in_jupyter_notebook())
if print_as_yaml:
self.print_as_yaml = True
@classmethod
def _casting_type(cls, typ):
type_dict = {
"azure_open_ai": ConnectionType.AZURE_OPEN_AI.value,
"open_ai": ConnectionType.OPEN_AI.value,
}
if typ in type_dict:
return type_dict.get(typ)
return snake_to_camel(typ)
def keys(self) -> List:
"""Return keys of the connection properties."""
return list(self.configs.keys()) + list(self.secrets.keys())
def __getitem__(self, item):
# Note: This is added to allow usage **connection().
if item in self.secrets:
return self.secrets[item]
if item in self.configs:
return self.configs[item]
# raise UserErrorException(error=KeyError(f"Key {item!r} not found in connection {self.name!r}."))
        # Can't raise UserErrorException due to the code exit(1) of promptflow._cli._utils.py line 368.
raise KeyError(f"Key {item!r} not found in connection {self.name!r}.")
@classmethod
def _is_scrubbed_value(cls, value):
"""For scrubbed value, cli will get original for update, and prompt user to input for create."""
        if not value:
            return True
        if all(v == "*" for v in value):
return True
return value == SCRUBBED_VALUE_NO_CHANGE
@classmethod
def _is_user_input_value(cls, value):
"""The value will prompt user to input in cli for both create and update."""
return value == SCRUBBED_VALUE_USER_INPUT
def _validate_and_encrypt_secrets(self):
encrypt_secrets = {}
invalid_secrets = []
for k, v in self.secrets.items():
# In sdk experience, if v is not scrubbed, use it.
# If v is scrubbed, try to use the value in _secrets.
# If v is <user-input>, raise error.
if self._is_scrubbed_value(v):
# Try to get the value not scrubbed.
v = self._secrets.get(k)
if self._is_scrubbed_value(v) or self._is_user_input_value(v):
# Can't find the original value or is <user-input>, raise error.
invalid_secrets.append(k)
continue
encrypt_secrets[k] = encrypt_secret_value(v)
if invalid_secrets:
            raise ValidationException(
                f"Connection {self.name!r} secrets {invalid_secrets} have invalid values, please fill them in."
            )
return encrypt_secrets
@classmethod
def _load_from_dict(cls, data: Dict, context: Dict, additional_message: str = None, **kwargs):
schema_cls = cls._get_schema_cls()
try:
loaded_data = schema_cls(context=context).load(data, **kwargs)
except Exception as e:
            raise SDKError(f"Load connection failed with {str(e)}. {additional_message or ''}")
return cls(base_path=context[BASE_PATH_CONTEXT_KEY], **loaded_data)
def _to_dict(self) -> Dict:
schema_cls = self._get_schema_cls()
return schema_cls(context={BASE_PATH_CONTEXT_KEY: "./"}).dump(self)
@classmethod
# pylint: disable=unused-argument
def _resolve_cls_and_type(cls, data, params_override=None):
type_in_override = find_type_in_override(params_override)
type_str = type_in_override or data.get("type")
if type_str is None:
raise ValidationException("type is required for connection.")
type_str = cls._casting_type(type_str)
type_cls = _supported_types.get(type_str)
if type_cls is None:
raise ValidationException(
f"connection_type {type_str!r} is not supported. Supported types are: {list(_supported_types.keys())}"
)
return type_cls, type_str
@abc.abstractmethod
def _to_orm_object(self) -> ORMConnection:
pass
@classmethod
def _from_mt_rest_object(cls, mt_rest_obj) -> "_Connection":
type_cls, _ = cls._resolve_cls_and_type(data={"type": mt_rest_obj.connection_type})
obj = type_cls._from_mt_rest_object(mt_rest_obj)
return obj
@classmethod
def _from_orm_object_with_secrets(cls, orm_object: ORMConnection):
# !!! Attention !!!: Do not use this function to user facing api, use _from_orm_object to remove secrets.
type_cls, _ = cls._resolve_cls_and_type(data={"type": orm_object.connectionType})
obj = type_cls._from_orm_object_with_secrets(orm_object)
return obj
@classmethod
def _from_orm_object(cls, orm_object: ORMConnection):
"""This function will create a connection object then scrub secrets."""
type_cls, _ = cls._resolve_cls_and_type(data={"type": orm_object.connectionType})
obj = type_cls._from_orm_object_with_secrets(orm_object)
        # Note: we may not be able to get secret keys for custom connection from MT
obj.secrets = {k: SCRUBBED_VALUE for k in obj.secrets}
return obj
@classmethod
def _load(
cls,
data: Dict = None,
yaml_path: Union[PathLike, str] = None,
params_override: list = None,
**kwargs,
) -> "_Connection":
        """Load a connection object from a yaml file.
:param cls: Indicates that this is a class method.
:type cls: class
:param data: Data Dictionary, defaults to None
:type data: Dict, optional
:param yaml_path: YAML Path, defaults to None
:type yaml_path: Union[PathLike, str], optional
:param params_override: Fields to overwrite on top of the yaml file.
Format is [{"field1": "value1"}, {"field2": "value2"}], defaults to None
:type params_override: List[Dict], optional
:param kwargs: A dictionary of additional configuration parameters.
:type kwargs: dict
:raises Exception: An exception
        :return: Loaded connection object.
        :rtype: _Connection
"""
data = data or {}
params_override = params_override or []
context = {
BASE_PATH_CONTEXT_KEY: Path(yaml_path).parent if yaml_path else Path("../../azure/_entities/"),
PARAMS_OVERRIDE_KEY: params_override,
}
connection_type, type_str = cls._resolve_cls_and_type(data, params_override)
connection = connection_type._load_from_dict(
data=data,
context=context,
            additional_message=f"If you are trying to configure a connection that is not of type {type_str}, please "
            f"specify the correct connection type in the 'type' property.",
**kwargs,
)
return connection
def _to_execution_connection_dict(self) -> dict:
value = {**self.configs, **self.secrets}
secret_keys = list(self.secrets.keys())
return {
"type": self.class_name, # Required class name for connection in executor
"module": self.module,
"value": {k: v for k, v in value.items() if v is not None}, # Filter None value out
"secret_keys": secret_keys,
}
@classmethod
def _from_execution_connection_dict(cls, name, data) -> "_Connection":
type_cls, _ = cls._resolve_cls_and_type(data={"type": data.get("type")[: -len("Connection")]})
value_dict = data.get("value", {})
if type_cls == CustomConnection:
secrets = {k: v for k, v in value_dict.items() if k in data.get("secret_keys", [])}
configs = {k: v for k, v in value_dict.items() if k not in secrets}
return CustomConnection(name=name, configs=configs, secrets=secrets)
return type_cls(name=name, **value_dict)
def _get_scrubbed_secrets(self):
"""Return the scrubbed secrets of connection."""
return {key: val for key, val in self.secrets.items() if self._is_scrubbed_value(val)}
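# A standalone sketch of the scrubbed-value rules implemented by
# _Connection._is_scrubbed_value: empty values, all-asterisk strings, and the
# "<no-change>" sentinel are all treated as scrubbed. The sentinel literal is
# an assumption here, matching the table in _Connection.__init__ rather than
# importing the SDK's SCRUBBED_VALUE_NO_CHANGE constant.

```python
SCRUBBED_VALUE_NO_CHANGE_SENTINEL = "<no-change>"  # assumed to match the SDK constant

def is_scrubbed(value) -> bool:
    if not value:  # None or empty string
        return True
    if all(ch == "*" for ch in value):
        return True
    return value == SCRUBBED_VALUE_NO_CHANGE_SENTINEL
```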
class _StrongTypeConnection(_Connection):
def _to_orm_object(self):
# Both keys & secrets will be stored in configs for strong type connection.
secrets = self._validate_and_encrypt_secrets()
return ORMConnection(
connectionName=self.name,
connectionType=self.type.value,
configs=json.dumps({**self.configs, **secrets}),
customConfigs="{}",
expiryTime=self.expiry_time,
createdDate=self.created_date,
lastModifiedDate=self.last_modified_date,
)
@classmethod
def _from_orm_object_with_secrets(cls, orm_object: ORMConnection):
# !!! Attention !!!: Do not use this function to user facing api, use _from_orm_object to remove secrets.
# Both keys & secrets will be stored in configs for strong type connection.
type_cls, _ = cls._resolve_cls_and_type(data={"type": orm_object.connectionType})
obj = type_cls(
name=orm_object.connectionName,
expiry_time=orm_object.expiryTime,
created_date=orm_object.createdDate,
last_modified_date=orm_object.lastModifiedDate,
**json.loads(orm_object.configs),
)
obj.secrets = {k: decrypt_secret_value(obj.name, v) for k, v in obj.secrets.items()}
obj._secrets = {**obj.secrets}
return obj
@classmethod
def _from_mt_rest_object(cls, mt_rest_obj):
type_cls, _ = cls._resolve_cls_and_type(data={"type": mt_rest_obj.connection_type})
configs = mt_rest_obj.configs or {}
        # For non-ARM strong type connections, e.g. OpenAI, api_key will not be returned, but it is a required argument.
        # For ARM strong type connections, api_key will be None and missing when conn._to_dict(), so set a scrubbed one.
configs.update({"api_key": SCRUBBED_VALUE})
obj = type_cls(
name=mt_rest_obj.connection_name,
expiry_time=mt_rest_obj.expiry_time,
created_date=mt_rest_obj.created_date,
last_modified_date=mt_rest_obj.last_modified_date,
**configs,
)
return obj
@property
def api_key(self):
"""Return the api key."""
return self.secrets.get("api_key", SCRUBBED_VALUE)
@api_key.setter
def api_key(self, value):
"""Set the api key."""
self.secrets["api_key"] = value
class AzureOpenAIConnection(_StrongTypeConnection):
"""Azure Open AI connection.
:param api_key: The api key.
:type api_key: str
:param api_base: The api base.
:type api_base: str
:param api_type: The api type, default "azure".
:type api_type: str
:param api_version: The api version, default "2023-07-01-preview".
:type api_version: str
:param token_provider: The token provider.
:type token_provider: promptflow._core.token_provider.TokenProviderABC
:param name: Connection name.
:type name: str
"""
TYPE = ConnectionType.AZURE_OPEN_AI
def __init__(
self,
api_key: str,
api_base: str,
api_type: str = "azure",
api_version: str = "2023-07-01-preview",
token_provider: TokenProviderABC = None,
**kwargs,
):
configs = {"api_base": api_base, "api_type": api_type, "api_version": api_version}
secrets = {"api_key": api_key}
self._token_provider = token_provider
super().__init__(configs=configs, secrets=secrets, **kwargs)
@classmethod
def _get_schema_cls(cls):
return AzureOpenAIConnectionSchema
@property
def api_base(self):
"""Return the connection api base."""
return self.configs.get("api_base")
@api_base.setter
def api_base(self, value):
"""Set the connection api base."""
self.configs["api_base"] = value
@property
def api_type(self):
"""Return the connection api type."""
return self.configs.get("api_type")
@api_type.setter
def api_type(self, value):
"""Set the connection api type."""
self.configs["api_type"] = value
@property
def api_version(self):
"""Return the connection api version."""
return self.configs.get("api_version")
@api_version.setter
def api_version(self, value):
"""Set the connection api version."""
self.configs["api_version"] = value
def get_token(self):
"""Return the connection token."""
if not self._token_provider:
self._token_provider = AzureTokenProvider()
return self._token_provider.get_token()
class OpenAIConnection(_StrongTypeConnection):
"""Open AI connection.
:param api_key: The api key.
:type api_key: str
:param organization: Optional. The unique identifier for your organization which can be used in API requests.
:type organization: str
    :param base_url: Optional. Specify when using a customized api base; leave as None to use the default OpenAI api base.
:type base_url: str
:param name: Connection name.
:type name: str
"""
TYPE = ConnectionType.OPEN_AI
def __init__(self, api_key: str, organization: str = None, base_url=None, **kwargs):
if base_url == "":
            # Normalize empty string to None so the openai client falls back to its default api base.
base_url = None
configs = {"organization": organization, "base_url": base_url}
secrets = {"api_key": api_key}
super().__init__(configs=configs, secrets=secrets, **kwargs)
@classmethod
def _get_schema_cls(cls):
return OpenAIConnectionSchema
@property
def organization(self):
"""Return the connection organization."""
return self.configs.get("organization")
@organization.setter
def organization(self, value):
"""Set the connection organization."""
self.configs["organization"] = value
@property
def base_url(self):
"""Return the connection api base."""
return self.configs.get("base_url")
@base_url.setter
def base_url(self, value):
"""Set the connection api base."""
self.configs["base_url"] = value
class SerpConnection(_StrongTypeConnection):
"""Serp connection.
:param api_key: The api key.
:type api_key: str
:param name: Connection name.
:type name: str
"""
TYPE = ConnectionType.SERP
def __init__(self, api_key: str, **kwargs):
secrets = {"api_key": api_key}
super().__init__(secrets=secrets, **kwargs)
@classmethod
def _get_schema_cls(cls):
return SerpConnectionSchema
class _EmbeddingStoreConnection(_StrongTypeConnection):
TYPE = ConnectionType._NOT_SET
def __init__(self, api_key: str, api_base: str, **kwargs):
configs = {"api_base": api_base}
secrets = {"api_key": api_key}
super().__init__(module="promptflow_vectordb.connections", configs=configs, secrets=secrets, **kwargs)
@property
def api_base(self):
return self.configs.get("api_base")
@api_base.setter
def api_base(self, value):
self.configs["api_base"] = value
class QdrantConnection(_EmbeddingStoreConnection):
"""Qdrant connection.
:param api_key: The api key.
:type api_key: str
:param api_base: The api base.
:type api_base: str
:param name: Connection name.
:type name: str
"""
TYPE = ConnectionType.QDRANT
@classmethod
def _get_schema_cls(cls):
return QdrantConnectionSchema
class WeaviateConnection(_EmbeddingStoreConnection):
"""Weaviate connection.
:param api_key: The api key.
:type api_key: str
:param api_base: The api base.
:type api_base: str
:param name: Connection name.
:type name: str
"""
TYPE = ConnectionType.WEAVIATE
@classmethod
def _get_schema_cls(cls):
return WeaviateConnectionSchema
class CognitiveSearchConnection(_StrongTypeConnection):
"""Cognitive Search connection.
:param api_key: The api key.
:type api_key: str
:param api_base: The api base.
:type api_base: str
:param api_version: The api version, default "2023-07-01-Preview".
:type api_version: str
:param name: Connection name.
:type name: str
"""
TYPE = ConnectionType.COGNITIVE_SEARCH
def __init__(self, api_key: str, api_base: str, api_version: str = "2023-07-01-Preview", **kwargs):
configs = {"api_base": api_base, "api_version": api_version}
secrets = {"api_key": api_key}
super().__init__(configs=configs, secrets=secrets, **kwargs)
@classmethod
def _get_schema_cls(cls):
return CognitiveSearchConnectionSchema
@property
def api_base(self):
"""Return the connection api base."""
return self.configs.get("api_base")
@api_base.setter
def api_base(self, value):
"""Set the connection api base."""
self.configs["api_base"] = value
@property
def api_version(self):
"""Return the connection api version."""
return self.configs.get("api_version")
@api_version.setter
def api_version(self, value):
"""Set the connection api version."""
self.configs["api_version"] = value
class AzureContentSafetyConnection(_StrongTypeConnection):
"""Azure Content Safety connection.
:param api_key: The api key.
:type api_key: str
:param endpoint: The api endpoint.
:type endpoint: str
    :param api_version: The api version, default "2023-10-01".
:type api_version: str
:param api_type: The api type, default "Content Safety".
:type api_type: str
:param name: Connection name.
:type name: str
"""
TYPE = ConnectionType.AZURE_CONTENT_SAFETY
def __init__(
self,
api_key: str,
endpoint: str,
api_version: str = "2023-10-01",
api_type: str = "Content Safety",
**kwargs,
):
configs = {"endpoint": endpoint, "api_version": api_version, "api_type": api_type}
secrets = {"api_key": api_key}
super().__init__(configs=configs, secrets=secrets, **kwargs)
@classmethod
def _get_schema_cls(cls):
return AzureContentSafetyConnectionSchema
@property
def endpoint(self):
"""Return the connection endpoint."""
return self.configs.get("endpoint")
@endpoint.setter
def endpoint(self, value):
"""Set the connection endpoint."""
self.configs["endpoint"] = value
@property
def api_version(self):
"""Return the connection api version."""
return self.configs.get("api_version")
@api_version.setter
def api_version(self, value):
"""Set the connection api version."""
self.configs["api_version"] = value
@property
def api_type(self):
"""Return the connection api type."""
return self.configs.get("api_type")
@api_type.setter
def api_type(self, value):
"""Set the connection api type."""
self.configs["api_type"] = value
class FormRecognizerConnection(AzureContentSafetyConnection):
"""Form Recognizer connection.
:param api_key: The api key.
:type api_key: str
:param endpoint: The api endpoint.
:type endpoint: str
:param api_version: The api version, default "2023-07-31".
:type api_version: str
:param api_type: The api type, default "Form Recognizer".
:type api_type: str
:param name: Connection name.
:type name: str
"""
# Note: FormRecognizer and ContentSafety are using CognitiveService type in ARM, so keys are the same.
TYPE = ConnectionType.FORM_RECOGNIZER
def __init__(
self, api_key: str, endpoint: str, api_version: str = "2023-07-31", api_type: str = "Form Recognizer", **kwargs
):
super().__init__(api_key=api_key, endpoint=endpoint, api_version=api_version, api_type=api_type, **kwargs)
@classmethod
def _get_schema_cls(cls):
return FormRecognizerConnectionSchema
class CustomStrongTypeConnection(_Connection):
"""Custom strong type connection.
.. note::
This connection type should not be used directly. Below is an example of how to use CustomStrongTypeConnection:
.. code-block:: python
class MyCustomConnection(CustomStrongTypeConnection):
api_key: Secret
api_base: str
:param configs: The configs kv pairs.
:type configs: Dict[str, str]
:param secrets: The secrets kv pairs.
:type secrets: Dict[str, str]
:param name: Connection name
:type name: str
"""
def __init__(
self,
secrets: Dict[str, str],
configs: Dict[str, str] = None,
**kwargs,
):
# There are two cases to init a Custom strong type connection:
# 1. The connection is created through SDK PFClient, custom_type and custom_module are not in the kwargs.
# 2. The connection is loaded from template file, custom_type and custom_module are in the kwargs.
custom_type = kwargs.get(CustomStrongTypeConnectionConfigs.TYPE, None)
custom_module = kwargs.get(CustomStrongTypeConnectionConfigs.MODULE, None)
# configs may be None when only secrets are provided; ensure a dict before updating.
configs = configs if configs is not None else {}
if custom_type:
configs.update({CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY: custom_type})
if custom_module:
configs.update({CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY: custom_module})
self.kwargs = kwargs
super().__init__(configs=configs, secrets=secrets, **kwargs)
self.module = kwargs.get("module", self.__class__.__module__)
self.custom_type = custom_type or self.__class__.__name__
self.package = kwargs.get(CustomStrongTypeConnectionConfigs.PACKAGE, None)
self.package_version = kwargs.get(CustomStrongTypeConnectionConfigs.PACKAGE_VERSION, None)
def __getattribute__(self, item):
# Note: The reason to overwrite __getattribute__ instead of __getattr__ is as follows:
# Custom strong type connection is written this way:
# class MyCustomConnection(CustomStrongTypeConnection):
# api_key: Secret
# api_base: str = "This is a default value"
# api_base has a default value, my_custom_connection_instance.api_base would not trigger __getattr__.
# The default value will be returned directly instead of the real value in configs.
annotations = getattr(super().__getattribute__("__class__"), "__annotations__", {})
if item in annotations:
if annotations[item] == Secret:
return self.secrets[item]
else:
return self.configs[item]
return super().__getattribute__(item)
def __setattr__(self, key, value):
annotations = getattr(super().__getattribute__("__class__"), "__annotations__", {})
if key in annotations:
if annotations[key] == Secret:
self.secrets[key] = value
else:
self.configs[key] = value
return super().__setattr__(key, value)
def _to_orm_object(self) -> ORMConnection:
custom_connection = self._convert_to_custom()
return custom_connection._to_orm_object()
def _convert_to_custom(self):
# update configs
self.configs.update({CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY: self.custom_type})
self.configs.update({CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY: self.module})
if self.package and self.package_version:
self.configs.update({CustomStrongTypeConnectionConfigs.PROMPTFLOW_PACKAGE_KEY: self.package})
self.configs.update(
{CustomStrongTypeConnectionConfigs.PROMPTFLOW_PACKAGE_VERSION_KEY: self.package_version}
)
custom_connection = CustomConnection(configs=self.configs, secrets=self.secrets, **self.kwargs)
return custom_connection
@classmethod
def _get_custom_keys(cls, data: Dict):
# The data could be either from yaml or from DB.
# If from yaml, 'custom_type' and 'module' are outside the configs of data.
# If from DB, 'custom_type' and 'module' are within the configs of data.
if not data.get(CustomStrongTypeConnectionConfigs.TYPE) or not data.get(
CustomStrongTypeConnectionConfigs.MODULE
):
if (
not data["configs"][CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY]
or not data["configs"][CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY]
):
error = ValueError("custom_type and module are required for custom strong type connections.")
raise UserErrorException(message=str(error), error=error)
else:
m = data["configs"][CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY]
custom_cls = data["configs"][CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY]
else:
m = data[CustomStrongTypeConnectionConfigs.MODULE]
custom_cls = data[CustomStrongTypeConnectionConfigs.TYPE]
try:
module = importlib.import_module(m)
cls = getattr(module, custom_cls)
except ImportError:
error = ValueError(
f"Can't find module {m} in current environment. Please check the module is correctly configured."
)
raise UserErrorException(message=str(error), error=error)
except AttributeError:
error = ValueError(
f"Can't find class {custom_cls} in module {m}. "
f"Please check the custom_type is correctly configured."
)
raise UserErrorException(message=str(error), error=error)
schema_configs = {}
schema_secrets = {}
for k, v in cls.__annotations__.items():
if v == Secret:
schema_secrets[k] = v
else:
schema_configs[k] = v
return schema_configs, schema_secrets
@classmethod
def _get_schema_cls(cls):
return CustomStrongTypeConnectionSchema
@classmethod
def _load_from_dict(cls, data: Dict, context: Dict, additional_message: str = None, **kwargs):
schema_config_keys, schema_secret_keys = cls._get_custom_keys(data)
context[SCHEMA_KEYS_CONTEXT_CONFIG_KEY] = schema_config_keys
context[SCHEMA_KEYS_CONTEXT_SECRET_KEY] = schema_secret_keys
return (super()._load_from_dict(data, context, additional_message, **kwargs))._convert_to_custom()
class CustomConnection(_Connection):
"""Custom connection.
:param configs: The configs kv pairs.
:type configs: Dict[str, str]
:param secrets: The secrets kv pairs.
:type secrets: Dict[str, str]
:param name: Connection name
:type name: str
"""
TYPE = ConnectionType.CUSTOM
def __init__(
self,
secrets: Dict[str, str],
configs: Dict[str, str] = None,
**kwargs,
):
super().__init__(secrets=secrets, configs=configs, **kwargs)
@classmethod
def _get_schema_cls(cls):
return CustomConnectionSchema
@classmethod
def _load_from_dict(cls, data: Dict, context: Dict, additional_message: str = None, **kwargs):
# If context has params_override, it means the data would be updated by overridden values.
# Provide CustomStrongTypeConnectionSchema if 'custom_type' in params_override, else CustomConnectionSchema.
# For example:
# If a user updates an existing connection by re-upserting a connection file,
# the 'data' from DB is CustomConnection,
# but 'params_override' would actually contain custom strong type connection data.
is_custom_strong_type = data.get(CustomStrongTypeConnectionConfigs.TYPE) or any(
CustomStrongTypeConnectionConfigs.TYPE in d for d in context.get(PARAMS_OVERRIDE_KEY, [])
)
if is_custom_strong_type:
return CustomStrongTypeConnection._load_from_dict(data, context, additional_message, **kwargs)
return super()._load_from_dict(data, context, additional_message, **kwargs)
def __getattr__(self, item):
# Note: This is added for compatibility with promptflow.connections custom connection usage.
if item == "secrets":
# Usually obj.secrets will not reach here
# This is added to handle copy.deepcopy loop issue
return super().__getattribute__("secrets")
if item == "configs":
# Usually obj.configs will not reach here
# This is added to handle copy.deepcopy loop issue
return super().__getattribute__("configs")
if item in self.secrets:
logger.warning("Please use connection.secrets[key] to access secrets.")
return self.secrets[item]
if item in self.configs:
logger.warning("Please use connection.configs[key] to access configs.")
return self.configs[item]
return super().__getattribute__(item)
def is_secret(self, item):
"""Check if item is a secret."""
# Note: This is added for compatibility with promptflow.connections custom connection usage.
return item in self.secrets
def _to_orm_object(self):
# Both keys & secrets will be set in custom configs with value type specified for custom connection.
if not self.secrets:
error = ValueError(
"Secrets is required for custom connection, "
"please use CustomConnection(configs={key1: val1}, secrets={key2: val2}) "
"to initialize custom connection."
)
raise UserErrorException(message=str(error), error=error)
custom_configs = {
k: {"configValueType": ConfigValueType.STRING.value, "value": v} for k, v in self.configs.items()
}
encrypted_secrets = self._validate_and_encrypt_secrets()
custom_configs.update(
{k: {"configValueType": ConfigValueType.SECRET.value, "value": v} for k, v in encrypted_secrets.items()}
)
return ORMConnection(
connectionName=self.name,
connectionType=self.type.value,
configs="{}",
customConfigs=json.dumps(custom_configs),
expiryTime=self.expiry_time,
createdDate=self.created_date,
lastModifiedDate=self.last_modified_date,
)
@classmethod
def _from_orm_object_with_secrets(cls, orm_object: ORMConnection):
# !!! Attention !!!: Do not use this function to user facing api, use _from_orm_object to remove secrets.
# Both keys & secrets will be set in custom configs with value type specified for custom connection.
configs = {
k: v["value"]
for k, v in json.loads(orm_object.customConfigs).items()
if v["configValueType"] == ConfigValueType.STRING.value
}
secrets = {}
unsecure_connection = False
custom_type = None
for k, v in json.loads(orm_object.customConfigs).items():
if k == CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY:
custom_type = v["value"]
continue
if not v["configValueType"] == ConfigValueType.SECRET.value:
continue
try:
secrets[k] = decrypt_secret_value(orm_object.connectionName, v["value"])
except UnsecureConnectionError:
# This is to workaround old custom secrets that are not encrypted with Fernet.
unsecure_connection = True
secrets[k] = v["value"]
if unsecure_connection:
print_yellow_warning(
f"Warning: Please delete and re-create connection {orm_object.connectionName} "
"due to a security issue in the old sdk version."
)
return cls(
name=orm_object.connectionName,
configs=configs,
secrets=secrets,
custom_type=custom_type,
expiry_time=orm_object.expiryTime,
created_date=orm_object.createdDate,
last_modified_date=orm_object.lastModifiedDate,
)
@classmethod
def _from_mt_rest_object(cls, mt_rest_obj):
type_cls, _ = cls._resolve_cls_and_type(data={"type": mt_rest_obj.connection_type})
if not mt_rest_obj.custom_configs:
mt_rest_obj.custom_configs = {}
configs = {
k: v.value
for k, v in mt_rest_obj.custom_configs.items()
if v.config_value_type == ConfigValueType.STRING.value
}
secrets = {
k: v.value
for k, v in mt_rest_obj.custom_configs.items()
if v.config_value_type == ConfigValueType.SECRET.value
}
return cls(
name=mt_rest_obj.connection_name,
configs=configs,
secrets=secrets,
expiry_time=mt_rest_obj.expiry_time,
created_date=mt_rest_obj.created_date,
last_modified_date=mt_rest_obj.last_modified_date,
)
def _is_custom_strong_type(self):
return (
CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY in self.configs
and self.configs[CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY]
)
def _convert_to_custom_strong_type(self, module=None, to_class=None) -> CustomStrongTypeConnection:
# There are two scenarios to convert a custom connection to custom strong type connection:
# 1. The connection is created from a custom strong type connection template file.
# Custom type and module name are present in the configs.
# 2. The connection is created through SDK PFClient or a custom connection template file.
# Custom type and module name are not present in the configs. Module and class must be passed for conversion.
if to_class == self.__class__.__name__:
# No need to convert.
return self
import importlib
if (
CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY in self.configs
and CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY in self.configs
):
module_name = self.configs.get(CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY)
module = importlib.import_module(module_name)
custom_conn_name = self.configs.get(CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY)
elif isinstance(module, str) and isinstance(to_class, str):
module_name = module
module = importlib.import_module(module_name)
custom_conn_name = to_class
elif isinstance(module, types.ModuleType) and isinstance(to_class, str):
custom_conn_name = to_class
else:
error = ValueError(
f"Failed to convert to custom strong type connection because of "
f"invalid module or class: {module}, {to_class}"
)
raise UserErrorException(message=str(error), error=error)
custom_defined_connection_class = getattr(module, custom_conn_name)
connection_instance = custom_defined_connection_class(configs=self.configs, secrets=self.secrets)
return connection_instance
_supported_types = {
v.TYPE.value: v
for v in globals().values()
if isinstance(v, type) and issubclass(v, _Connection) and not v.__name__.startswith("_")
}
# ==== file: promptflow/src/promptflow/promptflow/_sdk/entities/_connection.py (repo: promptflow) ====
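The `__getattribute__`/`__setattr__` dispatch above is the subtle part of `CustomStrongTypeConnection`: a class-level default on an annotated field must not shadow the value stored in `configs`/`secrets`. A minimal standalone sketch of that mechanism (class names below are illustrative, not the promptflow API):

```python
class Secret(str):
    """Marker type: fields annotated as Secret are stored in `secrets`."""


class StrongTypeBase:
    def __init__(self, configs=None, secrets=None):
        # Bypass our own __setattr__ while wiring up the two stores.
        object.__setattr__(self, "configs", dict(configs or {}))
        object.__setattr__(self, "secrets", dict(secrets or {}))

    def __getattribute__(self, item):
        # Annotated fields are routed to configs/secrets, so a class-level
        # default never shadows the stored value (the reason the real class
        # overrides __getattribute__ rather than __getattr__).
        annotations = getattr(object.__getattribute__(self, "__class__"), "__annotations__", {})
        if item in annotations:
            store = "secrets" if annotations[item] is Secret else "configs"
            return object.__getattribute__(self, store)[item]
        return object.__getattribute__(self, item)

    def __setattr__(self, key, value):
        annotations = getattr(type(self), "__annotations__", {})
        if key in annotations:
            target = self.secrets if annotations[key] is Secret else self.configs
            target[key] = value
            return
        object.__setattr__(self, key, value)


class MyConnection(StrongTypeBase):
    api_key: Secret
    api_base: str = "default-base"  # would shadow the config without __getattribute__


conn = MyConnection(configs={"api_base": "https://example.invalid"}, secrets={"api_key": "sk-123"})
first_base = conn.api_base  # stored config value, not the class default
conn.api_key = "sk-456"     # lands in conn.secrets, not in __dict__
```

Accessing `conn.api_base` returns the stored config value even though the class declares a default, because annotated names are intercepted before normal attribute lookup.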
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import re
from typing import List
from promptflow._sdk._constants import AZURE_WORKSPACE_REGEX_FORMAT, MAX_LIST_CLI_RESULTS
from promptflow._sdk._telemetry import ActivityType, WorkspaceTelemetryMixin, monitor_operation
from promptflow._sdk._utils import interactive_credential_disabled, is_from_cli, is_github_codespaces, print_red_error
from promptflow._sdk.entities._connection import _Connection
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow.azure._utils.gerneral import get_arm_token
logger = get_cli_sdk_logger()
class LocalAzureConnectionOperations(WorkspaceTelemetryMixin):
def __init__(self, connection_provider, **kwargs):
self._subscription_id, self._resource_group, self._workspace_name = self._extract_workspace(connection_provider)
self._credential = kwargs.pop("credential", None) or self._get_credential()
super().__init__(
subscription_id=self._subscription_id,
resource_group_name=self._resource_group,
workspace_name=self._workspace_name,
**kwargs,
)
# Lazy init client as ml_client initialization requires workspace read permission
self._pfazure_client = None
self._user_agent = kwargs.pop("user_agent", None)
@property
def _client(self):
if self._pfazure_client is None:
from promptflow.azure._pf_client import PFClient as PFAzureClient
self._pfazure_client = PFAzureClient(
# TODO: disable interactive credential when starting as a service
credential=self._credential,
subscription_id=self._subscription_id,
resource_group_name=self._resource_group,
workspace_name=self._workspace_name,
user_agent=self._user_agent,
)
return self._pfazure_client
@classmethod
def _get_credential(cls):
from azure.ai.ml._azure_environments import AzureEnvironments, EndpointURLS, _get_cloud, _get_default_cloud_name
from azure.identity import DefaultAzureCredential, DeviceCodeCredential
if is_from_cli():
try:
# Try getting token for cli without interactive login
cloud_name = _get_default_cloud_name()
if cloud_name != AzureEnvironments.ENV_DEFAULT:
cloud = _get_cloud(cloud=cloud_name)
authority = cloud.get(EndpointURLS.ACTIVE_DIRECTORY_ENDPOINT)
credential = DefaultAzureCredential(authority=authority, exclude_shared_token_cache_credential=True)
else:
credential = DefaultAzureCredential()
get_arm_token(credential=credential)
except Exception:
print_red_error(
"Please run 'az login' or 'az login --use-device-code' to set up account. "
"See https://docs.microsoft.com/cli/azure/authenticate-azure-cli for more details."
)
exit(1)
if interactive_credential_disabled():
return DefaultAzureCredential(exclude_interactive_browser_credential=True)
if is_github_codespaces():
# For code spaces, append device code credential as the fallback option.
credential = DefaultAzureCredential()
credential.credentials = (*credential.credentials, DeviceCodeCredential())
return credential
return DefaultAzureCredential(exclude_interactive_browser_credential=False)
@classmethod
def _extract_workspace(cls, connection_provider):
match = re.match(AZURE_WORKSPACE_REGEX_FORMAT, connection_provider)
if not match or len(match.groups()) != 5:
raise ValueError(
"Malformed connection provider string, expected azureml:/subscriptions/<subscription_id>/"
"resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/"
f"workspaces/<workspace_name>, got {connection_provider}"
)
subscription_id = match.group(1)
resource_group = match.group(3)
workspace_name = match.group(5)
return subscription_id, resource_group, workspace_name
@monitor_operation(activity_name="pf.connections.azure.list", activity_type=ActivityType.PUBLICAPI)
def list(
self,
max_results: int = MAX_LIST_CLI_RESULTS,
all_results: bool = False,
) -> List[_Connection]:
"""List connections.
:return: List of connection objects.
:rtype: List[~promptflow.sdk.entities._connection._Connection]
"""
if max_results != MAX_LIST_CLI_RESULTS or all_results:
logger.warning(
"max_results and all_results are not supported for workspace connection and will be ignored."
)
return self._client._connections.list()
@monitor_operation(activity_name="pf.connections.azure.get", activity_type=ActivityType.PUBLICAPI)
def get(self, name: str, **kwargs) -> _Connection:
"""Get a connection entity.
:param name: Name of the connection.
:type name: str
:return: connection object retrieved from the database.
:rtype: ~promptflow.sdk.entities._connection._Connection
"""
with_secrets = kwargs.get("with_secrets", False)
if with_secrets:
# Do not use pfazure_client here as it requires workspace read permission
# Get secrets from arm only requires workspace listsecrets permission
from promptflow.azure.operations._arm_connection_operations import ArmConnectionOperations
return ArmConnectionOperations._direct_get(
name, self._subscription_id, self._resource_group, self._workspace_name, self._credential
)
return self._client._connections.get(name)
@monitor_operation(activity_name="pf.connections.azure.delete", activity_type=ActivityType.PUBLICAPI)
def delete(self, name: str) -> None:
"""Delete a connection entity.
:param name: Name of the connection.
:type name: str
"""
raise NotImplementedError(
"Delete workspace connection is not supported in promptflow, "
"please manage it in workspace portal, az ml cli or AzureML SDK."
)
@monitor_operation(activity_name="pf.connections.azure.create_or_update", activity_type=ActivityType.PUBLICAPI)
def create_or_update(self, connection: _Connection, **kwargs):
"""Create or update a connection.
:param connection: Connection object to create or update.
:type connection: ~promptflow.sdk.entities._connection._Connection
"""
raise NotImplementedError(
"Create or update workspace connection is not supported in promptflow, "
"please manage it in workspace portal, az ml cli or AzureML SDK."
)
# ==== file: promptflow/src/promptflow/promptflow/_sdk/operations/_local_azure_connection_operations.py (repo: promptflow) ====
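`_extract_workspace` relies on `AZURE_WORKSPACE_REGEX_FORMAT` to pull the subscription/resource-group/workspace triplet out of the provider string. A standalone sketch with an assumed pattern (the real constant may differ in its grouping, which is why the original indexes groups 1, 3, and 5):

```python
import re

# Illustrative pattern; promptflow keeps the real one in AZURE_WORKSPACE_REGEX_FORMAT.
WORKSPACE_PROVIDER_PATTERN = (
    r"^azureml:/+subscriptions/([^/]+)/resourceGroups/([^/]+)"
    r"/providers/Microsoft\.MachineLearningServices/workspaces/([^/]+)$"
)


def extract_workspace(connection_provider: str):
    match = re.match(WORKSPACE_PROVIDER_PATTERN, connection_provider)
    if not match:
        raise ValueError(f"Malformed connection provider string: {connection_provider!r}")
    # (subscription_id, resource_group, workspace_name)
    return match.groups()


triplet = extract_workspace(
    "azureml://subscriptions/0000/resourceGroups/my-rg"
    "/providers/Microsoft.MachineLearningServices/workspaces/my-ws"
)
```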
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import asyncio
from concurrent.futures import ThreadPoolExecutor
def _has_running_loop() -> bool:
"""Check if the current thread has a running event loop."""
# When using asyncio.get_running_loop(), a RuntimeError is raised if there is no running event loop.
# So, we use a try-catch block to determine whether there is currently an event loop in place.
#
# Note that this is the only way to check whether there is a running loop now, see:
# https://docs.python.org/3/library/asyncio-eventloop.html?highlight=get_running_loop#asyncio.get_running_loop
try:
asyncio.get_running_loop()
return True
except RuntimeError:
return False
def async_run_allowing_running_loop(async_func, *args, **kwargs):
"""Run an async function in a new thread, allowing the current thread to have a running event loop.
When run in an async environment (e.g., in a notebook), because each thread allows only one event
loop, using asyncio.run directly leads to a RuntimeError ("asyncio.run() cannot be called from a
running event loop").
To address this issue, we add a check for the event loop here. If the current thread already has an
event loop, we run the async function in a new thread; otherwise, we run it in the current thread.
"""
if _has_running_loop():
with ThreadPoolExecutor(1) as executor:
return executor.submit(lambda: asyncio.run(async_func(*args, **kwargs))).result()
else:
return asyncio.run(async_func(*args, **kwargs))
# ==== file: promptflow/src/promptflow/promptflow/_utils/async_utils.py (repo: promptflow) ====
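The two call paths of `async_run_allowing_running_loop` can be exercised side by side; the helper is re-declared below only to keep the snippet self-contained:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def run_allowing_running_loop(async_func, *args, **kwargs):
    # Same shape as async_run_allowing_running_loop: if this thread already
    # hosts a loop, asyncio.run() would raise, so delegate to a fresh thread.
    try:
        asyncio.get_running_loop()
        has_loop = True
    except RuntimeError:
        has_loop = False
    if has_loop:
        with ThreadPoolExecutor(1) as executor:
            return executor.submit(lambda: asyncio.run(async_func(*args, **kwargs))).result()
    return asyncio.run(async_func(*args, **kwargs))


async def double(x):
    await asyncio.sleep(0)
    return x * 2


result_sync = run_allowing_running_loop(double, 21)  # plain synchronous caller


async def nested_caller():
    # Calling asyncio.run(double(5)) directly here would raise RuntimeError.
    return run_allowing_running_loop(double, 5)


result_nested = asyncio.run(nested_caller())
```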
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from copy import deepcopy
from promptflow._core.generator_proxy import GeneratorProxy
def _deep_copy_and_extract_items_from_generator_proxy(value: object) -> object:
"""Deep copy value, and if there is a GeneratorProxy, deepcopy the items from it.
:param value: Any object.
:type value: Object
:return: Deep copied value.
:rtype: Object
"""
if isinstance(value, list):
return [_deep_copy_and_extract_items_from_generator_proxy(v) for v in value]
elif isinstance(value, dict):
return {k: _deep_copy_and_extract_items_from_generator_proxy(v) for k, v in value.items()}
elif isinstance(value, GeneratorProxy):
return deepcopy(value.items)
return deepcopy(value)
# ==== file: promptflow/src/promptflow/promptflow/_utils/run_tracker_utils.py (repo: promptflow) ====
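A sketch of the recursion with a stand-in for `GeneratorProxy` (the real class lives in `promptflow._core.generator_proxy`; `FakeGeneratorProxy` below is illustrative only):

```python
from copy import deepcopy


class FakeGeneratorProxy:
    """Stand-in for promptflow's GeneratorProxy: holds items already
    consumed from a generator so the trace can be serialized."""

    def __init__(self, items):
        self.items = items


def deep_copy_extract(value):
    # Mirrors _deep_copy_and_extract_items_from_generator_proxy: recurse
    # through lists/dicts and replace proxies with a copy of their items.
    if isinstance(value, list):
        return [deep_copy_extract(v) for v in value]
    if isinstance(value, dict):
        return {k: deep_copy_extract(v) for k, v in value.items()}
    if isinstance(value, FakeGeneratorProxy):
        return deepcopy(value.items)
    return deepcopy(value)


data = {"answer": FakeGeneratorProxy(["Hel", "lo"]), "meta": [1, {"n": 2}]}
copied = deep_copy_extract(data)
```

The copy shares no mutable state with the original, so mutating the tracked run afterwards cannot corrupt the recorded trace.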
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import os
from os import PathLike
from typing import Dict, List, Optional, Union
from azure.ai.ml import MLClient
from azure.core.credentials import TokenCredential
from promptflow._sdk._constants import MAX_SHOW_DETAILS_RESULTS
from promptflow._sdk._errors import RunOperationParameterError
from promptflow._sdk._user_agent import USER_AGENT
from promptflow._sdk._utils import ClientUserAgentUtil, setup_user_agent_to_operation_context
from promptflow._sdk.entities import Run
from promptflow.azure._restclient.service_caller_factory import _FlowServiceCallerFactory
from promptflow.azure.operations import RunOperations
from promptflow.azure.operations._arm_connection_operations import ArmConnectionOperations
from promptflow.azure.operations._connection_operations import ConnectionOperations
from promptflow.azure.operations._flow_operations import FlowOperations
from promptflow.azure.operations._trace_operations import TraceOperations
from promptflow.exceptions import UserErrorException
class PFClient:
"""A client class to interact with Promptflow service.
Use this client to manage promptflow resources, e.g. runs.
:param credential: Credential to use for authentication, optional
:type credential: ~azure.core.credentials.TokenCredential
:param subscription_id: Azure subscription ID, optional for registry assets only, optional
:type subscription_id: typing.Optional[str]
:param resource_group_name: Azure resource group, optional for registry assets only, optional
:type resource_group_name: typing.Optional[str]
:param workspace_name: Workspace to use in the client, optional for non workspace dependent operations only,
optional.
:type workspace_name: typing.Optional[str]
:param kwargs: A dictionary of additional configuration parameters.
:type kwargs: dict
"""
def __init__(
self,
credential: TokenCredential = None,
subscription_id: Optional[str] = None,
resource_group_name: Optional[str] = None,
workspace_name: Optional[str] = None,
**kwargs,
):
self._validate_config_information(subscription_id, resource_group_name, workspace_name, kwargs)
# add user agent from kwargs if any
if isinstance(kwargs.get("user_agent", None), str):
ClientUserAgentUtil.append_user_agent(kwargs["user_agent"])
# append SDK ua to context
user_agent = setup_user_agent_to_operation_context(USER_AGENT)
kwargs.setdefault("user_agent", user_agent)
self._ml_client = kwargs.pop("ml_client", None) or MLClient(
credential=credential,
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
**kwargs,
)
try:
workspace = self._ml_client.workspaces.get(name=self._ml_client._operation_scope.workspace_name)
except Exception as e:
raise UserErrorException(message=str(e), error=e)
self._service_caller = _FlowServiceCallerFactory.get_instance(
workspace=workspace,
credential=self._ml_client._credential,
operation_scope=self._ml_client._operation_scope,
**kwargs,
)
self._flows = FlowOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
workspace=workspace,
**kwargs,
)
self._runs = RunOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
flow_operations=self._flows,
service_caller=self._service_caller,
workspace=workspace,
**kwargs,
)
self._connections = ConnectionOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
**kwargs,
)
self._arm_connections = ArmConnectionOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
**kwargs,
)
self._traces = TraceOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
service_caller=self._service_caller,
**kwargs,
)
@staticmethod
def _validate_config_information(subscription_id, resource_group_name, workspace_name, kwargs):
"""Validate the config information in case wrong parameter name is passed into the constructor."""
sub_name, wrong_sub_name = "subscription_id", "subscription"
rg_name, wrong_rg_name = "resource_group_name", "resource_group"
ws_name, wrong_ws_name = "workspace_name", "workspace"
error_message = (
"You have passed in the wrong parameter name to initialize the PFClient, please use {0!r} instead of {1!r}."
)
if not subscription_id and kwargs.get(wrong_sub_name, None) is not None:
raise RunOperationParameterError(error_message.format(sub_name, wrong_sub_name))
if not resource_group_name and kwargs.get(wrong_rg_name, None) is not None:
raise RunOperationParameterError(error_message.format(rg_name, wrong_rg_name))
if not workspace_name and kwargs.get(wrong_ws_name, None) is not None:
raise RunOperationParameterError(error_message.format(ws_name, wrong_ws_name))
@property
def ml_client(self):
"""Return a client to interact with Azure ML services."""
return self._ml_client
@property
def runs(self):
"""Return the run operation object that can manage runs."""
return self._runs
@property
def flows(self):
"""Return the flow operation object that can manage flows."""
return self._flows
@classmethod
def from_config(
cls,
credential: TokenCredential,
*,
path: Optional[Union[os.PathLike, str]] = None,
file_name=None,
**kwargs,
) -> "PFClient":
"""Return a PFClient object connected to Azure Machine Learning workspace.
Reads workspace configuration from a file. Throws an exception if the config file can't be found.
The method provides a simple way to reuse the same workspace across multiple Python notebooks or projects.
Users can save the workspace Azure Resource Manager (ARM) properties using the
[workspace.write_config](https://aka.ms/ml-workspace-class) method,
and use this method to load the same workspace in different Python notebooks or projects without
retyping the workspace ARM properties.
:param credential: The credential object for the workspace.
:type credential: ~azure.core.credentials.TokenCredential
:param path: The path to the config file or starting directory to search.
The parameter defaults to starting the search in the current directory.
optional
:type path: typing.Union[os.PathLike, str]
:param file_name: Allows overriding the config file name to search for when path is a directory path.
(Default value = None)
:type file_name: str
"""
ml_client = MLClient.from_config(credential=credential, path=path, file_name=file_name, **kwargs)
return PFClient(
ml_client=ml_client,
**kwargs,
)
def run(
self,
flow: Union[str, PathLike],
*,
data: Union[str, PathLike] = None,
run: Union[str, Run] = None,
column_mapping: dict = None,
variant: str = None,
connections: dict = None,
environment_variables: dict = None,
name: str = None,
display_name: str = None,
tags: Dict[str, str] = None,
**kwargs,
) -> Run:
"""Run flow against provided data or run.
.. note:: at least one of data or run must be provided.
.. admonition::
Data can be local file or remote path.
- Example:
- `data = "path/to/local/file"`
- `data = "azureml:data_name:data_version"`
- `data = "azureml://datastores/datastore_name/path/to/file"`
- `data = "https://example.com/data.jsonl"`
Column mapping is a mapping from flow input name to specified values.
If specified, the flow will be executed with provided value for specified inputs.
The value can be:
- from data:
- ``data.col1``
- from run:
- ``run.inputs.col1``: if need reference run's inputs
- ``run.output.col1``: if need reference run's outputs
- Example:
- ``{"ground_truth": "${data.answer}", "prediction": "${run.outputs.answer}"}``
:param flow: path to flow directory to run evaluation
:type flow: Union[str, PathLike]
:param data: pointer to test data (for variant bulk runs) for eval runs
:type data: Union[str, PathLike]
:param run: flow run id or flow run, keep lineage between current run and variant runs,
batch outputs can be referenced as ${run.outputs.col_name} in inputs_mapping
:type run: Union[str, ~promptflow.entities.Run]
:param column_mapping: define a data flow logic to map input data.
:type column_mapping: dict
:param variant: Node & variant name in format of ${node_name.variant_name}, will use default variant
if not specified.
:type variant: str
:param connections: Overwrite node level connections with provided value.
Example: ``{"node1": {"connection": "new_connection", "deployment_name": "gpt-35-turbo"}}``
:type connections: dict
:param environment_variables: Environment variables to set by specifying a property path and value.
Example: ``{"key1": "${my_connection.api_key}", "key2": "value2"}``
The value reference to connection keys will be resolved to the actual value,
and all environment variables specified will be set into os.environ.
:type environment_variables: dict
:param name: Name of the run.
:type name: str
:param display_name: Display name of the run.
:type display_name: str
:param tags: Tags of the run.
:type tags: Dict[str, str]
:return: flow run info.
:rtype: ~promptflow.entities.Run
"""
# TODO(2887134): support cloud eager Run CRUD
run = Run(
name=name,
display_name=display_name,
tags=tags,
data=data,
column_mapping=column_mapping,
run=run,
variant=variant,
flow=flow,
connections=connections,
environment_variables=environment_variables,
)
return self.runs.create_or_update(run=run, **kwargs)
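# As a hedged illustration of the mapping shape described in the docstring
# above (all column names and paths below are hypothetical), an eval run maps
# its inputs from the test data and from a referenced variant run's outputs:
#
#     column_mapping = {
#         "ground_truth": "${data.answer}",       # taken from the data file
#         "prediction": "${run.outputs.answer}",  # taken from the referenced run
#     }
#
# The same dict is passed straight through to ``Run(...)`` / ``run(...)``:

```python
# Hypothetical sketch of the column_mapping / kwargs shape for an eval run.
# "${data.*}" pulls from the data file; "${run.outputs.*}" pulls from the
# outputs of the referenced variant run. Names are illustrative only.
column_mapping = {
    "ground_truth": "${data.answer}",       # taken from the data file
    "prediction": "${run.outputs.answer}",  # taken from the referenced run
}

eval_run_kwargs = {
    "flow": "flows/evaluation/eval-basic",  # hypothetical flow directory
    "data": "data.jsonl",                   # hypothetical test data file
    "run": "my_variant_run",                # name of the run being evaluated
    "column_mapping": column_mapping,
}
```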
def stream(self, run: Union[str, Run], raise_on_error: bool = True) -> Run:
"""Stream run logs to the console.
:param run: Run object or name of the run.
:type run: Union[str, ~promptflow.sdk.entities.Run]
:param raise_on_error: Raises an exception if a run fails or is canceled.
:type raise_on_error: bool
:return: flow run info.
"""
if isinstance(run, Run):
run = run.name
return self.runs.stream(run, raise_on_error)
def get_details(
self, run: Union[str, Run], max_results: int = MAX_SHOW_DETAILS_RESULTS, all_results: bool = False
) -> "DataFrame":
"""Get the details from the run including inputs and outputs.
.. note::
If `all_results` is set to True, `max_results` will be overwritten to sys.maxsize.
:param run: The run name or run object
:type run: Union[str, ~promptflow.sdk.entities.Run]
:param max_results: The max number of runs to return, defaults to 100
:type max_results: int
:param all_results: Whether to return all results, defaults to False
:type all_results: bool
:raises RunOperationParameterError: If `max_results` is not a positive integer.
:return: The details data frame.
:rtype: pandas.DataFrame
"""
return self.runs.get_details(run=run, max_results=max_results, all_results=all_results)
def get_metrics(self, run: Union[str, Run]) -> dict:
"""Get the metrics of the run.
:param run: Run object or name of the run.
:type run: Union[str, ~promptflow.sdk.entities.Run]
:return: The run's metrics
:rtype: dict
"""
if isinstance(run, Run):
run = run.name
return self.runs.get_metrics(run=run)
def visualize(self, runs: Union[List[str], List[Run]]) -> None:
"""Visualize run(s).
:param runs: Run object(s) or name(s) of the run(s) to visualize.
:type runs: Union[List[str], List[Run]]
"""
self.runs.visualize(runs)
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import datetime
import functools
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator_async import distributed_trace_async
from ... import models as _models
from ..._vendor import _convert_request
from ...operations._flow_runs_admin_operations import build_batch_update_service_logs_request, build_check_policy_validation_async_request, build_get_storage_info_request, build_log_flow_run_event_request, build_log_flow_run_event_v2_request, build_log_flow_run_terminated_event_request, build_log_result_for_bulk_run_request, build_send_policy_validation_async_request, build_submit_bulk_run_async_request, build_update_service_logs_request
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class FlowRunsAdminOperations:
"""FlowRunsAdminOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace_async
async def submit_bulk_run_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
error_handling_mode: Optional[Union[str, "_models.ErrorHandlingMode"]] = None,
**kwargs: Any
) -> "_models.SubmitBulkRunResponse":
"""submit_bulk_run_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param error_handling_mode:
:type error_handling_mode: str or ~flow.models.ErrorHandlingMode
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SubmitBulkRunResponse, or the result of cls(response)
:rtype: ~flow.models.SubmitBulkRunResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SubmitBulkRunResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_submit_bulk_run_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
error_handling_mode=error_handling_mode,
template_url=self.submit_bulk_run_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('SubmitBulkRunResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
submit_bulk_run_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/submit'} # type: ignore
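# Each operation in this class builds its error map the same way: a default
# mapping of HTTP status codes to exception types, optionally overridden per
# call through an ``error_map`` keyword argument. A small self-contained
# sketch of that merge pattern (using stand-in exception classes rather than
# the real ``azure.core.exceptions`` types):

```python
# Stand-in exception classes; the real code maps to azure.core.exceptions types.
class AuthError(Exception): ...
class NotFoundError(Exception): ...
class ConflictError(Exception): ...
class CustomTeapotError(Exception): ...

def build_error_map(**kwargs):
    # Same pattern as in the operations above: defaults first, then
    # caller-supplied overrides via the 'error_map' keyword argument.
    error_map = {401: AuthError, 404: NotFoundError, 409: ConflictError}
    error_map.update(kwargs.pop("error_map", {}))
    return error_map

default_map = build_error_map()
custom_map = build_error_map(error_map={418: CustomTeapotError, 404: ConflictError})
```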
@distributed_trace_async
async def send_policy_validation_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> "_models.PolicyValidationResponse":
"""send_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_send_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.send_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
send_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace_async
async def check_policy_validation_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> "_models.PolicyValidationResponse":
"""check_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_check_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.check_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
check_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace_async
async def log_result_for_bulk_run(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> List["_models.KeyValuePairStringObject"]:
"""log_result_for_bulk_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of KeyValuePairStringObject, or the result of cls(response)
:rtype: list[~flow.models.KeyValuePairStringObject]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.KeyValuePairStringObject"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_result_for_bulk_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.log_result_for_bulk_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[KeyValuePairStringObject]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_result_for_bulk_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/LogResult'} # type: ignore
@distributed_trace_async
async def get_storage_info(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> "_models.StorageInfo":
"""get_storage_info.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: StorageInfo, or the result of cls(response)
:rtype: ~flow.models.StorageInfo
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.StorageInfo"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_storage_info_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_storage_info.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('StorageInfo', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_storage_info.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/storageInfo'} # type: ignore
@distributed_trace_async
async def log_flow_run_event(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
runtime_version: str,
**kwargs: Any
) -> str:
"""log_flow_run_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/runtime/{runtimeVersion}/logEvent'} # type: ignore
@distributed_trace_async
async def log_flow_run_event_v2(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
runtime_version: Optional[str] = None,
**kwargs: Any
) -> str:
"""log_flow_run_event_v2.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_v2_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event_v2.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event_v2.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logEvent'} # type: ignore
@distributed_trace_async
async def log_flow_run_terminated_event(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
last_checked_time: Optional[datetime.datetime] = None,
**kwargs: Any
) -> "_models.LogRunTerminatedEventDto":
"""log_flow_run_terminated_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param last_checked_time:
:type last_checked_time: ~datetime.datetime
:keyword callable cls: A custom type or function that will be passed the direct response
:return: LogRunTerminatedEventDto, or the result of cls(response)
:rtype: ~flow.models.LogRunTerminatedEventDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.LogRunTerminatedEventDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_terminated_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
last_checked_time=last_checked_time,
template_url=self.log_flow_run_terminated_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('LogRunTerminatedEventDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_terminated_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logTerminatedEvent'} # type: ignore
@distributed_trace_async
async def update_service_logs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
body: Optional["_models.ServiceLogRequest"] = None,
**kwargs: Any
) -> "_models.Task":
"""update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: ~flow.models.ServiceLogRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'ServiceLogRequest')
else:
_json = None
request = build_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs'} # type: ignore
@distributed_trace_async
async def batch_update_service_logs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
body: Optional[List["_models.ServiceLogRequest"]] = None,
**kwargs: Any
) -> "_models.Task":
"""batch_update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: list[~flow.models.ServiceLogRequest]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, '[ServiceLogRequest]')
else:
_json = None
request = build_batch_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.batch_update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
batch_update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs/batch'} # type: ignore
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from msrest import Serializer
from .. import models as _models
from .._vendor import _convert_request, _format_url_section
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
# fmt: off
def build_create_connection_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"connectionName": _SERIALIZER.url("connection_name", connection_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
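# The request builders in this module all fill the URL template the same way:
# each path argument is serialized, percent-quoted, and substituted into the
# ``{placeholder}`` slots. A rough, self-contained approximation of what the
# vendored ``_format_url_section`` helper does (the real implementation lives
# in ``.._vendor`` and also handles skip-quote markers):

```python
from urllib.parse import quote

def format_url_section(template: str, **path_args: str) -> str:
    # Approximate sketch of the vendored _format_url_section helper:
    # percent-quote each path value, then substitute into {camelCase} slots.
    quoted = {name: quote(str(value), safe="") for name, value in path_args.items()}
    return template.format(**quoted)

url = format_url_section(
    "/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}"
    "/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}"
    "/Connections/{connectionName}",
    subscriptionId="0000-1111",            # hypothetical values throughout
    resourceGroupName="my rg",             # space must be percent-encoded
    workspaceName="ws",
    connectionName="azure_open_ai_connection",
)
```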
def build_update_connection_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"connectionName": _SERIALIZER.url("connection_name", connection_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="PUT",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_connection_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"connectionName": _SERIALIZER.url("connection_name", connection_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_delete_connection_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"connectionName": _SERIALIZER.url("connection_name", connection_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="DELETE",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_connection_with_secrets_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}/listsecrets')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"connectionName": _SERIALIZER.url("connection_name", connection_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_list_connections_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_list_connection_specs_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/specs')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_list_azure_open_ai_deployments_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}/AzureOpenAIDeployments')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"connectionName": _SERIALIZER.url("connection_name", connection_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
# fmt: on
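The request builders above all follow the same shape: pop a `template_url`, substitute URL-quoted path arguments into the `{placeholder}` segments, then attach headers. A minimal, self-contained sketch of the placeholder-filling step is below; `demo_format_url` is a hypothetical stand-in for illustration only, not the real `_SERIALIZER.url`/`_format_url_section`, which live in the generated client's shared helper modules.

```python
from urllib.parse import quote

def demo_format_url(template: str, **path_args: str) -> str:
    """Replace each '{name}' segment of the template with a URL-quoted value."""
    for name, value in path_args.items():
        template = template.replace("{" + name + "}", quote(str(value), safe=""))
    return template

# Example with made-up workspace coordinates; note the space in the
# resource group name gets percent-encoded.
url = demo_format_url(
    "/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}"
    "/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}"
    "/Connections/{connectionName}",
    subscriptionId="0000-1111",
    resourceGroupName="my rg",
    workspaceName="my-ws",
    connectionName="aoai-conn",
)
```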
class ConnectionsOperations(object):
"""ConnectionsOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace
def create_connection(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
body=None, # type: Optional["_models.CreateOrUpdateConnectionRequestDto"]
**kwargs # type: Any
):
# type: (...) -> "_models.ConnectionDto"
"""create_connection.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param connection_name:
:type connection_name: str
:param body:
:type body: ~flow.models.CreateOrUpdateConnectionRequestDto
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ConnectionDto, or the result of cls(response)
:rtype: ~flow.models.ConnectionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.ConnectionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateOrUpdateConnectionRequestDto')
else:
_json = None
request = build_create_connection_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
connection_name=connection_name,
content_type=content_type,
json=_json,
template_url=self.create_connection.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('ConnectionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_connection.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}'} # type: ignore
@distributed_trace
def update_connection(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
body=None, # type: Optional["_models.CreateOrUpdateConnectionRequestDto"]
**kwargs # type: Any
):
# type: (...) -> "_models.ConnectionDto"
"""update_connection.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param connection_name:
:type connection_name: str
:param body:
:type body: ~flow.models.CreateOrUpdateConnectionRequestDto
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ConnectionDto, or the result of cls(response)
:rtype: ~flow.models.ConnectionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.ConnectionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateOrUpdateConnectionRequestDto')
else:
_json = None
request = build_update_connection_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
connection_name=connection_name,
content_type=content_type,
json=_json,
template_url=self.update_connection.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('ConnectionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
update_connection.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}'} # type: ignore
@distributed_trace
def get_connection(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.ConnectionDto"
"""get_connection.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param connection_name:
:type connection_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ConnectionDto, or the result of cls(response)
:rtype: ~flow.models.ConnectionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.ConnectionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_connection_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
connection_name=connection_name,
template_url=self.get_connection.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('ConnectionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_connection.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}'} # type: ignore
@distributed_trace
def delete_connection(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.ConnectionDto"
"""delete_connection.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param connection_name:
:type connection_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ConnectionDto, or the result of cls(response)
:rtype: ~flow.models.ConnectionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.ConnectionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_delete_connection_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
connection_name=connection_name,
template_url=self.delete_connection.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('ConnectionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
delete_connection.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}'} # type: ignore
@distributed_trace
def get_connection_with_secrets(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.ConnectionDto"
"""get_connection_with_secrets.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param connection_name:
:type connection_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ConnectionDto, or the result of cls(response)
:rtype: ~flow.models.ConnectionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.ConnectionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_connection_with_secrets_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
connection_name=connection_name,
template_url=self.get_connection_with_secrets.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('ConnectionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_connection_with_secrets.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}/listsecrets'} # type: ignore
@distributed_trace
def list_connections(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> List["_models.ConnectionDto"]
"""list_connections.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of ConnectionDto, or the result of cls(response)
:rtype: list[~flow.models.ConnectionDto]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.ConnectionDto"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_connections_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.list_connections.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[ConnectionDto]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_connections.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections'} # type: ignore
@distributed_trace
def list_connection_specs(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> List["_models.WorkspaceConnectionSpec"]
"""list_connection_specs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of WorkspaceConnectionSpec, or the result of cls(response)
:rtype: list[~flow.models.WorkspaceConnectionSpec]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.WorkspaceConnectionSpec"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_connection_specs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.list_connection_specs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[WorkspaceConnectionSpec]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_connection_specs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/specs'} # type: ignore
@distributed_trace
def list_azure_open_ai_deployments(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
connection_name, # type: str
**kwargs # type: Any
):
# type: (...) -> List["_models.AzureOpenAIDeploymentDto"]
"""list_azure_open_ai_deployments.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param connection_name:
:type connection_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of AzureOpenAIDeploymentDto, or the result of cls(response)
:rtype: list[~flow.models.AzureOpenAIDeploymentDto]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.AzureOpenAIDeploymentDto"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_azure_open_ai_deployments_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
connection_name=connection_name,
template_url=self.list_azure_open_ai_deployments.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[AzureOpenAIDeploymentDto]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_azure_open_ai_deployments.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Connections/{connectionName}/AzureOpenAIDeployments'} # type: ignore
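Every operation in `ConnectionsOperations` builds its `error_map` the same way: a default mapping of HTTP status codes to exception types, which the caller may override by passing `error_map` in `kwargs`. The sketch below isolates that pattern with stand-in exception classes (the real ones come from `azure.core.exceptions`); it is an illustration of the override mechanics, not part of the generated client.

```python
# Stand-in exception types for illustration only.
class ClientAuthenticationError(Exception): ...
class ResourceNotFoundError(Exception): ...
class MyCustomNotFound(Exception): ...

def resolve_error_map(**kwargs):
    """Mirror the error_map merging done at the top of each operation."""
    error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError}
    # Caller-supplied entries win over the defaults.
    error_map.update(kwargs.pop("error_map", {}))
    return error_map

default_map = resolve_error_map()
overridden_map = resolve_error_map(error_map={404: MyCustomNotFound})
```

This is why callers of the SDK methods can swap in their own exception for a given status code without touching the generated pipeline logic.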
# --- end of promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_connections_operations.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
from .gerneral import is_arm_id
__all__ = ["is_arm_id"]
# --- end of promptflow/src/promptflow/promptflow/azure/_utils/__init__.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from pathlib import Path
from typing import Any, Dict, List, Mapping, Optional
from promptflow._constants import LINE_NUMBER_KEY
from promptflow._core._errors import UnexpectedError
from promptflow._utils.inputs_mapping_utils import apply_inputs_mapping
from promptflow._utils.load_data import load_data
from promptflow._utils.logger_utils import logger
from promptflow._utils.multimedia_utils import resolve_multimedia_data_recursively
from promptflow._utils.utils import resolve_dir_to_absolute
from promptflow.batch._errors import EmptyInputsData, InputMappingError
from promptflow.contracts.flow import FlowInputDefinition
class BatchInputsProcessor:
def __init__(
self,
working_dir: Path,
flow_inputs: Mapping[str, FlowInputDefinition],
max_lines_count: Optional[int] = None,
):
self._working_dir = working_dir
self._max_lines_count = max_lines_count
self._flow_inputs = flow_inputs
self._default_inputs_mapping = {key: f"${{data.{key}}}" for key in flow_inputs}
def process_batch_inputs(self, input_dirs: Dict[str, str], inputs_mapping: Dict[str, str]):
input_dicts = self._resolve_input_data(input_dirs)
no_input_data = all(len(data) == 0 for data in input_dicts.values())
if no_input_data:
input_dirs_str = "\n".join(f"{input}: {Path(path).as_posix()}" for input, path in input_dirs.items())
message_format = (
"Couldn't find any inputs data at the given input paths. Please review the provided path "
"and consider resubmitting.\n{input_dirs}"
)
raise EmptyInputsData(message_format=message_format, input_dirs=input_dirs_str)
return self._validate_and_apply_inputs_mapping(input_dicts, inputs_mapping)
def _resolve_input_data(self, input_dirs: Dict[str, str]):
"""Resolve input data from input dirs"""
result = {}
for input_key, input_dir in input_dirs.items():
input_dir = resolve_dir_to_absolute(self._working_dir, input_dir)
result[input_key] = self._resolve_data_from_input_path(input_dir)
return result
def _resolve_data_from_input_path(self, input_path: Path):
"""Resolve input data from directory"""
result = []
if input_path.is_file():
result.extend(
resolve_multimedia_data_recursively(
input_path.parent, load_data(local_path=input_path, max_rows_count=self._max_lines_count)
)
)
else:
for input_file in input_path.rglob("*"):
if input_file.is_file():
result.extend(
resolve_multimedia_data_recursively(
input_file.parent, load_data(local_path=input_file, max_rows_count=self._max_lines_count)
)
)
if self._max_lines_count and len(result) >= self._max_lines_count:
break
if self._max_lines_count and len(result) >= self._max_lines_count:
logger.warning(
(
"The data provided exceeds the maximum lines limit. Currently, only the first "
f"{self._max_lines_count} lines are processed."
)
)
return result[: self._max_lines_count]
return result
def _validate_and_apply_inputs_mapping(self, inputs, inputs_mapping) -> List[Dict[str, Any]]:
"""Validate and apply inputs mapping for all lines in the flow.
:param inputs: The inputs to the flow.
:type inputs: Any
:param inputs_mapping: The mapping of input names to their corresponding values.
:type inputs_mapping: Dict[str, Any]
:return: A list of dictionaries containing the resolved inputs for each line in the flow.
:rtype: List[Dict[str, Any]]
"""
if not inputs_mapping:
logger.warning(
msg=(
"Starting run without column mapping may lead to unexpected results. "
"Please consult the following documentation for more information: https://aka.ms/pf/column-mapping"
)
)
inputs_mapping = self._complete_inputs_mapping_by_default_value(inputs_mapping)
resolved_inputs = self._apply_inputs_mapping_for_all_lines(inputs, inputs_mapping)
return resolved_inputs
def _complete_inputs_mapping_by_default_value(self, inputs_mapping):
inputs_mapping = inputs_mapping or {}
result_mapping = self._default_inputs_mapping
        # For inputs that have a default value, we don't read data from the default mapping.
        # The default value takes higher priority than the default mapping.
for key, value in self._flow_inputs.items():
if value and value.default is not None:
del result_mapping[key]
result_mapping.update(inputs_mapping)
return result_mapping
def _apply_inputs_mapping_for_all_lines(
self,
input_dict: Mapping[str, List[Mapping[str, Any]]],
inputs_mapping: Mapping[str, str],
) -> List[Dict[str, Any]]:
"""Apply input mapping to all input lines.
For example:
input_dict = {
'data': [{'question': 'q1', 'answer': 'ans1'}, {'question': 'q2', 'answer': 'ans2'}],
'baseline': [{'answer': 'baseline_ans1'}, {'answer': 'baseline_ans2'}],
'output': [{'answer': 'output_ans1', 'line_number': 0}, {'answer': 'output_ans2', 'line_number': 1}],
}
inputs_mapping: {
"question": "${data.question}", # Question from the data
"groundtruth": "${data.answer}", # Answer from the data
"baseline": "${baseline.answer}", # Answer from the baseline
"deployment_name": "text-davinci-003", # literal value
"answer": "${output.answer}", # Answer from the output
"line_number": "${output.line_number}", # Answer from the output
}
Returns:
[{
"question": "q1",
"groundtruth": "ans1",
"baseline": "baseline_ans1",
"answer": "output_ans1",
"deployment_name": "text-davinci-003",
"line_number": 0,
},
{
"question": "q2",
"groundtruth": "ans2",
"baseline": "baseline_ans2",
"answer": "output_ans2",
"deployment_name": "text-davinci-003",
"line_number": 1,
}]
"""
        if inputs_mapping is None:
            # This exception should not happen since developers need to use _default_inputs_mapping for None input.
            # So this is a system error.
raise UnexpectedError(
message_format=(
"The input for batch run is incorrect. Please make sure to set up a proper input mapping before "
"proceeding. If you need additional help, feel free to contact support for further assistance."
)
)
merged_list = self._merge_input_dicts_by_line(input_dict)
if len(merged_list) == 0:
raise InputMappingError(
message_format=(
"The input for batch run is incorrect. Could not find one complete line on the provided input. "
"Please ensure that you supply data on the same line to resolve this issue."
)
)
result = [apply_inputs_mapping(item, inputs_mapping) for item in merged_list]
return result
def _merge_input_dicts_by_line(
self,
input_dict: Mapping[str, List[Mapping[str, Any]]],
) -> List[Mapping[str, Mapping[str, Any]]]:
for input_key, list_of_one_input in input_dict.items():
if not list_of_one_input:
raise InputMappingError(
message_format=(
"The input for batch run is incorrect. Input from key '{input_key}' is an empty list, "
"which means we cannot generate a single line input for the flow run. "
"Please rectify the input and try again."
),
input_key=input_key,
)
# Check if line numbers are aligned.
all_lengths_without_line_number = {
input_key: len(list_of_one_input)
for input_key, list_of_one_input in input_dict.items()
if not any(LINE_NUMBER_KEY in one_item for one_item in list_of_one_input)
}
if len(set(all_lengths_without_line_number.values())) > 1:
raise InputMappingError(
message_format=(
"The input for batch run is incorrect. Line numbers are not aligned. "
"Some lists have dictionaries missing the 'line_number' key, "
"and the lengths of these lists are different. "
"List lengths are: {all_lengths_without_line_number}. "
"Please make sure these lists have the same length or add 'line_number' key to each dictionary."
),
all_lengths_without_line_number=all_lengths_without_line_number,
)
# Collect each line item from each input.
tmp_dict = {}
for input_key, list_of_one_input in input_dict.items():
if input_key in all_lengths_without_line_number:
# Assume line_number starts from 0.
for index, one_line_item in enumerate(list_of_one_input):
if index not in tmp_dict:
tmp_dict[index] = {}
tmp_dict[index][input_key] = one_line_item
else:
for one_line_item in list_of_one_input:
if LINE_NUMBER_KEY in one_line_item:
index = one_line_item[LINE_NUMBER_KEY]
if index not in tmp_dict:
tmp_dict[index] = {}
tmp_dict[index][input_key] = one_line_item
result = []
for line, values_for_one_line in tmp_dict.items():
# A line with any missing input is not acceptable.
if len(values_for_one_line) != len(input_dict):
continue
values_for_one_line[LINE_NUMBER_KEY] = line
result.append(values_for_one_line)
return result
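The merge-by-line behavior above can be illustrated with a standalone, stdlib-only sketch (simplified and hypothetical — it omits the alignment checks and error handling of the real implementation):

```python
LINE_NUMBER_KEY = "line_number"

def merge_by_line(input_dict):
    # Collect each input's items into per-line buckets, preferring an
    # explicit line_number over the positional index when one is present.
    tmp = {}
    for key, items in input_dict.items():
        for index, item in enumerate(items):
            lineno = item.get(LINE_NUMBER_KEY, index)
            tmp.setdefault(lineno, {})[key] = item
    # Keep only lines to which every input contributed a value.
    return [
        {**values, LINE_NUMBER_KEY: lineno}
        for lineno, values in tmp.items()
        if len(values) == len(input_dict)
    ]

data = [{"question": "q1"}, {"question": "q2"}]
baseline = [{"answer": "a1", "line_number": 0}]
# Only line 0 is complete; baseline has no entry for line 1, so it is dropped.
merged = merge_by_line({"data": data, "baseline": baseline})
```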
| promptflow/src/promptflow/promptflow/batch/_batch_inputs_processor.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/batch/_batch_inputs_processor.py",
"repo_id": "promptflow",
"token_count": 4930
} | 25 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from dataclasses import dataclass
class Secret(str):
"""This class is used to hint a parameter is a secret to load."""
def set_secret_name(self, name):
"""Set the secret_name attribute for the Secret instance.
:param name: The name of the secret.
:type name: str
"""
self.secret_name = name
class PromptTemplate(str):
"""This class is used to hint a parameter is a prompt template."""
pass
class FilePath(str):
"""This class is used to hint a parameter is a file path."""
pass
@dataclass
class AssistantDefinition:
"""This class is used to define an assistant definition."""
model: str
instructions: str
tools: list
@staticmethod
def deserialize(data: dict) -> "AssistantDefinition":
return AssistantDefinition(
model=data.get("model", ""), instructions=data.get("instructions", ""), tools=data.get("tools", [])
)
def serialize(self):
return {
"model": self.model,
"instructions": self.instructions,
"tools": self.tools,
}
def init_tool_invoker(self):
from promptflow.executor._assistant_tool_invoker import AssistantToolInvoker
return AssistantToolInvoker.init(self.tools)
| promptflow/src/promptflow/promptflow/contracts/types.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/contracts/types.py",
"repo_id": "promptflow",
"token_count": 503
} | 26 |
from promptflow.exceptions import UserErrorException
class FlowFilePathInvalid(UserErrorException):
pass
| promptflow/src/promptflow/promptflow/executor/_service/_errors.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/executor/_service/_errors.py",
"repo_id": "promptflow",
"token_count": 28
} | 27 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from ._cache_storage import AbstractCacheStorage # noqa: F401
from ._run_storage import AbstractBatchRunStorage, AbstractRunStorage # noqa: F401
__all__ = ["AbstractCacheStorage", "AbstractRunStorage", "AbstractBatchRunStorage"]
| promptflow/src/promptflow/promptflow/storage/__init__.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/storage/__init__.py",
"repo_id": "promptflow",
"token_count": 87
} | 28 |
Tools are the fundamental building blocks of a [flow](./concept-flows.md).
Each tool is an executable unit, essentially a function that performs various tasks, including but not limited to:
- Accessing LLMs for various purposes
- Querying databases
- Getting information from search engines
- Pre/post processing of data
# Tools
Prompt flow provides 3 basic tools:
- [LLM](../reference/tools-reference/llm-tool.md): The LLM tool allows you to write custom prompts and leverage large language models to achieve specific goals, such as summarizing articles, generating customer support responses, and more.
- [Python](../reference/tools-reference/python-tool.md): The Python tool enables you to write custom Python functions to perform various tasks, such as fetching web pages, processing intermediate data, calling third-party APIs, and more.
- [Prompt](../reference/tools-reference/prompt-tool.md): The Prompt tool allows you to prepare a prompt as a string for more complex use cases or for use in conjunction with other prompt tools or python tools.
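For illustration, a minimal Python tool is just a decorated function (a hypothetical sketch — the `@tool` decorator comes from the `promptflow` package, with a no-op stand-in here so the snippet runs even where the package is not installed):

```python
try:
    from promptflow import tool
except ImportError:
    # Stand-in so the sketch is self-contained; the real decorator
    # registers the function as a tool the flow can invoke.
    def tool(func):
        return func

@tool
def greeting_tool(user_name: str) -> str:
    # A tool is an executable unit: typed inputs in, a value out
    # that other nodes in the flow can reference.
    return f"Hello, {user_name}!"

print(greeting_tool("World"))  # -> Hello, World!
```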
## More tools
Our partners also contribute other useful tools for advanced scenarios; here are some links:
- [Vector DB Lookup](../reference/tools-reference/vector_db_lookup_tool.md): a vector search tool that allows users to search for the top-k similar vectors in a vector database.
- [Faiss Index Lookup](../reference/tools-reference/faiss_index_lookup_tool.md): allows querying within a user-provided Faiss-based vector store.
## Custom tools
You can create your own tools and share them with your team or anyone in the world.
Learn more in [Create and Use Tool Package](../how-to-guides/develop-a-tool/create-and-use-tool-package.md).
## Next steps
For more information on the available tools and their usage, visit our [reference doc](../reference/index.md). | promptflow/docs/concepts/concept-tools.md/0 | {
"file_path": "promptflow/docs/concepts/concept-tools.md",
"repo_id": "promptflow",
"token_count": 444
} | 0 |
# Develop a flow
This section provides guides on how to develop a flow by writing a flow YAML from scratch.
```{toctree}
:maxdepth: 1
:hidden:
develop-standard-flow
develop-chat-flow
develop-evaluation-flow
referencing-external-files-or-folders-in-a-flow
``` | promptflow/docs/how-to-guides/develop-a-flow/index.md/0 | {
"file_path": "promptflow/docs/how-to-guides/develop-a-flow/index.md",
"repo_id": "promptflow",
"token_count": 87
} | 1 |
# Manage connections
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
[Connection](../../concepts/concept-connections.md) helps securely store and manage secret keys or other sensitive credentials required for interacting with LLM (Large Language Models) and other external tools, for example, Azure Content Safety.
:::{note}
To use azureml workspace connection locally, refer to [this guide](../how-to-guides/set-global-configs.md#connectionprovider).
:::
## Connection types
There are multiple types of connections supported in promptflow, which can be broadly categorized into **strong type connections** and **custom connections**. Strong type connections include AzureOpenAIConnection, OpenAIConnection, etc. The custom connection is a generic connection type that can be used to store custom-defined credentials.
We are going to use AzureOpenAIConnection as an example of a strong type connection, and CustomConnection to show how to manage connections.
## Create a connection
:::{note}
If you are using `WSL` or other OS without default keyring storage backend, you may encounter `StoreConnectionEncryptionKeyError`, please refer to [FAQ](./faq.md#connection-creation-failed-with-storeconnectionencryptionkeyerror) for the solutions.
:::
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Each strong type connection has a corresponding YAML schema; the example below shows the AzureOpenAIConnection YAML:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: azure_open_ai_connection
type: azure_open_ai
api_key: "<to-be-replaced>"
api_base: "https://<name>.openai.azure.com/"
api_type: "azure"
api_version: "2023-03-15-preview"
```
The custom connection YAML has two dict fields, for secrets and configs; the example below shows the CustomConnection YAML:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: custom_connection
type: custom
configs:
endpoint: "<your-endpoint>"
other_config: "other_value"
secrets: # required
my_key: "<your-api-key>"
```
After preparing the yaml file, use the CLI command below to create them:
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f <path-to-azure-open-ai-connection> --set api_key=<your-api-key>
# Create the custom connection
pf connection create -f <path-to-custom-connection> --set configs.endpoint=<endpoint> secrets.my_key=<your-api-key>
```
The expected result is as follows if the connection is created successfully.
![img](../media/how-to-guides/create_connection.png)
:::
:::{tab-item} SDK
:sync: SDK
Using SDK, each connection type has a corresponding class to create a connection. The following code snippet shows how to import the required class and create the connection:
```python
from promptflow import PFClient
from promptflow.entities import AzureOpenAIConnection, CustomConnection
# Get a pf client to manage connections
pf = PFClient()
# Initialize an AzureOpenAIConnection object
connection = AzureOpenAIConnection(
name="my_azure_open_ai_connection",
api_key="<your-api-key>",
api_base="<your-endpoint>",
api_version="2023-03-15-preview"
)
# Create the connection, note that api_key will be scrubbed in the returned result
result = pf.connections.create_or_update(connection)
print(result)
# Initialize a custom connection object
connection = CustomConnection(
name="my_custom_connection",
# Secrets is a required field for custom connection
secrets={"my_key": "<your-api-key>"},
configs={"endpoint": "<your-endpoint>", "other_config": "other_value"}
)
# Create the connection, note that all secret values will be scrubbed in the returned result
result = pf.connections.create_or_update(connection)
print(result)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane, you can find the connections pane to manage your local connections. Click the "+" icon on the top right of it and follow the pop-up instructions to create your new connection.
![img](../media/how-to-guides/vscode_create_connection.png)
![img](../media/how-to-guides/vscode_create_connection_1.png)
:::
::::
## Update a connection
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The commands below show how to update existing connections with new values:
```bash
# Update an azure open ai connection with a new api base
pf connection update -n my_azure_open_ai_connection --set api_base='new_value'
# Update a custom connection
pf connection update -n my_custom_connection --set configs.other_config='new_value'
```
:::
:::{tab-item} SDK
:sync: SDK
The code snippet below shows how to update existing connections with new values:
```python
# Update an azure open ai connection with a new api base
connection = pf.connections.get(name="my_azure_open_ai_connection")
connection.api_base = "new_value"
connection.api_key = "<original-key>" # secrets are required when updating connection using sdk
result = pf.connections.create_or_update(connection)
print(result)
# Update a custom connection
connection = pf.connections.get(name="my_custom_connection")
connection.configs["other_config"] = "new_value"
connection.secrets = {"key1": "val1"} # secrets are required when updating connection using sdk
result = pf.connections.create_or_update(connection)
print(result)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane, you can find the connections pane to manage your local connections. Right-click an item in the connection list to update or delete your connections.
![img](../media/how-to-guides/vscode_update_delete_connection.png)
:::
::::
## List connections
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The list connections command returns the connections in JSON list format; note that all secrets and API keys will be scrubbed:
```bash
pf connection list
```
:::
:::{tab-item} SDK
:sync: SDK
The list connections command returns a list of connection objects; note that all secrets and API keys will be scrubbed:
```python
from promptflow import PFClient
# Get a pf client to manage connections
pf = PFClient()
# List and print connections
connection_list = pf.connections.list()
for connection in connection_list:
print(connection)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
![img](../media/how-to-guides/vscode_list_connection.png)
:::
::::
## Delete a connection
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Delete a connection with the following command:
```bash
pf connection delete -n <connection_name>
```
:::
:::{tab-item} SDK
:sync: SDK
Delete a connection with the following code snippet:
```python
from promptflow import PFClient
# Get a pf client to manage connections
pf = PFClient()
# Delete the connection with specific name
pf.connections.delete(name="my_custom_connection")
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane, you can find the connections pane to manage your local connections. Right-click an item in the connection list to update or delete your connections.
![img](../media/how-to-guides/vscode_update_delete_connection.png)
:::
::::
## Next steps
- Reach more detail about [connection concepts](../../concepts/concept-connections.md).
- Try the [connection samples](https://github.com/microsoft/promptflow/blob/main/examples/connections/connection.ipynb).
- [Consume connections from Azure AI](../cloud/azureai/consume-connections-from-azure-ai.md).
| promptflow/docs/how-to-guides/manage-connections.md/0 | {
"file_path": "promptflow/docs/how-to-guides/manage-connections.md",
"repo_id": "promptflow",
"token_count": 2251
} | 2 |
from datetime import datetime
import time
import requests
import sys
import json
from azure.identity import AzureCliCredential
import logging
from azure.ai.ml import MLClient
from sseclient import SSEClient
class ColoredFormatter(logging.Formatter):
# Color code dictionary
color_codes = {
'debug': '\033[0;32m', # Green
'info': '\033[0;36m', # Cyan
'warning': '\033[0;33m', # Yellow
'error': '\033[0;31m', # Red
'critical': '\033[0;35m', # Magenta
}
def format(self, record):
# Get the original message
message = super().format(record)
# Add color codes
message = f"{self.color_codes.get(record.levelname.lower(), '')}{message}\033[0m"
return message
logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
handler.setFormatter(ColoredFormatter())
logger.setLevel(logging.INFO)
logger.addHandler(handler)
def apply_delta(base: dict, delta: dict):
for k, v in delta.items():
if k in base:
base[k] += v
else:
base[k] = v
def score(url, api_key, body, stream=True, on_event=None):
headers = {
"Content-Type": "application/json",
"Authorization": ("Bearer " + api_key),
# The azureml-model-deployment header will force the request to go to a specific deployment.
# Remove this header to have the request observe the endpoint traffic rules
"azureml-model-deployment": "blue",
"Accept": "text/event-stream, application/json" if stream else "application/json"
}
logger.info("Sending HTTP request...")
logger.debug("POST %s", url)
for name, value in headers.items():
if name == "Authorization":
value = "[REDACTED]"
logger.debug(f">>> {name}: {value}")
logger.debug(json.dumps(body, indent=4, ensure_ascii=False))
logger.debug("")
time1 = datetime.now()
response = None
try:
response = requests.post(url, json=body, headers=headers, stream=stream)
response.raise_for_status()
finally:
time2 = datetime.now()
if response is not None:
logger.info(
"Got response: %d %s (elapsed %s)",
response.status_code,
response.reason,
time2 - time1,
)
for name, value in response.headers.items():
logger.debug(f"<<< {name}: {value}")
time1 = datetime.now()
try:
content_type = response.headers.get('Content-Type')
if content_type and "text/event-stream" in content_type:
output = {}
client = SSEClient(response)
for event in client.events():
if on_event:
on_event(event)
dct = json.loads(event.data)
apply_delta(output, dct)
return output, True
else:
return response.json(), False
finally:
time2 = datetime.now()
logger.info("\nResponse reading elapsed: %s", time2 - time1)
class ChatApp:
def __init__(self, ml_client, endpoint_name, chat_input_name, chat_output_name, stream=True, debug=False):
self._chat_input_name = chat_input_name
self._chat_output_name = chat_output_name
self._chat_history = []
self._stream = stream
if debug:
logger.setLevel(logging.DEBUG)
logger.info("Getting endpoint info...")
endpoint = ml_client.online_endpoints.get(endpoint_name)
keys = ml_client.online_endpoints.get_keys(endpoint_name)
self._endpoint_url = endpoint.scoring_uri
self._endpoint_key = keys.primary_key if endpoint.auth_mode == "key" else keys.access_token
logger.info(f"Done.")
logger.debug(f"Target endpoint: {endpoint.id}")
@property
def url(self):
return self._endpoint_url
@property
def api_key(self):
return self._endpoint_key
def get_payload(self, chat_input, chat_history=[]):
return {
self._chat_input_name: chat_input,
"chat_history": chat_history,
}
def chat_once(self, chat_input):
def on_event(event):
dct = json.loads(event.data)
answer_delta = dct.get(self._chat_output_name)
if answer_delta:
print(answer_delta, end='')
# We need to flush the output
# otherwise the text does not appear on the console
# unless a new line comes.
sys.stdout.flush()
# Sleep for 20ms for better animation effects
time.sleep(0.02)
try:
payload = self.get_payload(chat_input=chat_input, chat_history=self._chat_history)
output, stream = score(self.url, self.api_key, payload, stream=self._stream, on_event=on_event)
# We don't use self._stream here since the result may not always be the same as self._stream specified.
if stream:
# Print a new line at the end of the content to make sure
# the next logger line will always starts from a new line.
pass
# print("\n")
else:
print(output.get(self._chat_output_name, "<empty>"))
self._chat_history.append({
"inputs": {
self._chat_input_name: chat_input,
},
"outputs": output,
})
logger.info("Length of chat history: %s", len(self._chat_history))
except requests.HTTPError as e:
logger.error(e.response.text)
def chat(self):
while True:
try:
question = input("Chat with Wikipedia:> ")
if question in ("exit", "bye"):
print("Bye.")
break
self.chat_once(question)
except KeyboardInterrupt:
# When pressed Ctrl_C, exit
print("\nBye.")
break
except Exception as e:
logger.exception("An error occurred: %s", e)
# Do not raise the errors out so that we can continue the chat
if __name__ == "__main__":
ml_client = MLClient(
credential=AzureCliCredential(),
# Replace with your subscription ID, resource group name, and workspace name
subscription_id="<your_sub_id>",
resource_group_name="<your_resource_group_name>",
workspace_name="<your_workspace_name>",
)
chat_app = ChatApp(
ml_client=ml_client,
# TODO: Replace with your online endpoint name
endpoint_name="chat-with-wikipedia-stream",
chat_input_name="question",
chat_output_name="answer",
stream=False,
debug=True,
)
chat_app.chat()
| promptflow/docs/media/how-to-guides/how-to-enable-streaming-mode/scripts/chat_app.py/0 | {
"file_path": "promptflow/docs/media/how-to-guides/how-to-enable-streaming-mode/scripts/chat_app.py",
"repo_id": "promptflow",
"token_count": 3156
} | 3 |
# PLACEHOLDER | promptflow/docs/reference/python-library-reference/promptflow.md/0 | {
"file_path": "promptflow/docs/reference/python-library-reference/promptflow.md",
"repo_id": "promptflow",
"token_count": 6
} | 4 |
include promptflow/tools/yamls/*.yaml | promptflow/src/promptflow-tools/MANIFEST.in/0 | {
"file_path": "promptflow/src/promptflow-tools/MANIFEST.in",
"repo_id": "promptflow",
"token_count": 13
} | 5 |
import json
import sys
from enum import Enum
import requests
# Avoid circular dependencies: Use import 'from promptflow._internal' instead of 'from promptflow'
# since the code here is in promptflow namespace as well
from promptflow._internal import ToolProvider, tool
from promptflow.connections import SerpConnection
from promptflow.exceptions import PromptflowException
from promptflow.tools.exception import SerpAPIUserError, SerpAPISystemError
class SafeMode(str, Enum):
ACTIVE = "active"
OFF = "off"
class Engine(str, Enum):
GOOGLE = "google"
BING = "bing"
class SerpAPI(ToolProvider):
def __init__(self, connection: SerpConnection):
super().__init__()
self.connection = connection
def extract_error_message_from_json(self, error_data):
error_message = ""
# For rejected requests. For example, the api_key is not valid
if "error" in error_data:
error_message = error_data["error"]
return str(error_message)
def safe_extract_error_message(self, response):
default_error_message = f"SerpAPI search request failed: {response.text}"
try:
# Keep the same style as SerpAPIClient
error_data = json.loads(response.text)
print(f"Response text json: {json.dumps(error_data)}", file=sys.stderr)
error_message = self.extract_error_message_from_json(error_data)
error_message = error_message if len(error_message) > 0 else default_error_message
return error_message
except Exception as e:
# Swallow any exception when extracting the detailed error message
print(
f"Unexpected exception occurs while extract error message "
f"from response: {type(e).__name__}: {str(e)}",
file=sys.stderr,
)
return default_error_message
# flake8: noqa: C901
@tool
def search(
self,
query: str, # this is required
location: str = None,
safe: SafeMode = SafeMode.OFF,  # Defaults to SafeMode.OFF
num: int = 10,
engine: Engine = Engine.GOOGLE, # this is required
):
from serpapi import SerpApiClient
# required parameters. https://serpapi.com/search-api.
params = {
"q": query,
"location": location,
"api_key": self.connection.api_key,
}
if isinstance(engine, Engine):
params["engine"] = engine.value
else:
params["engine"] = engine
if safe == SafeMode.ACTIVE:
# Ignore invalid values and safe="off" (the default)
# Bing and Google use different parameters
if params["engine"].lower() == "google":
params["safe"] = "Active"
else:
params["safeSearch"] = "Strict"
if int(num) > 0:
# to combine multiple engines together, we use "num" as the parameter for such purpose
if params["engine"].lower() == "google":
params["num"] = int(num)
else:
params["count"] = int(num)
search = SerpApiClient(params)
# get response
try:
response = search.get_response()
if response.status_code == requests.codes.ok:
# default output is json
return json.loads(response.text)
else:
# Step I: Try to get accurate error message at best
error_message = self.safe_extract_error_message(response)
# Step II: Construct PromptflowException
if response.status_code >= 500:
raise SerpAPISystemError(message=error_message)
else:
raise SerpAPIUserError(message=error_message)
except Exception as e:
# SerpApi is generally robust. Set up basic error handling
if not isinstance(e, PromptflowException):
print(f"Unexpected exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
error_message = f"SerpAPI search request failed: {type(e).__name__}: {str(e)}"
raise SerpAPISystemError(message=error_message)
raise
@tool
def search(
connection: SerpConnection,
query: str, # this is required
location: str = None,
safe: SafeMode = SafeMode.OFF,  # Defaults to SafeMode.OFF
num: int = 10,
engine: Engine = Engine.GOOGLE, # this is required
):
return SerpAPI(connection).search(
query=query,
location=location,
safe=safe,
num=num,
engine=engine,
)
| promptflow/src/promptflow-tools/promptflow/tools/serpapi.py/0 | {
"file_path": "promptflow/src/promptflow-tools/promptflow/tools/serpapi.py",
"repo_id": "promptflow",
"token_count": 2097
} | 6 |
{
"azure_open_ai_connection": {
"type": "AzureOpenAIConnection",
"value": {
"api_key": "aoai-api-key",
"api_base": "aoai-api-endpoint",
"api_type": "azure",
"api_version": "2023-07-01-preview"
},
"module": "promptflow.connections"
},
"bing_config": {
"type": "BingConnection",
"value": {
"api_key": "bing-api-key"
},
"module": "promptflow.connections"
},
"bing_connection": {
"type": "BingConnection",
"value": {
"api_key": "bing-api-key"
},
"module": "promptflow.connections"
},
"azure_content_safety_config": {
"type": "AzureContentSafetyConnection",
"value": {
"api_key": "content-safety-api-key",
"endpoint": "https://content-safety-canary-test.cognitiveservices.azure.com",
"api_version": "2023-04-30-preview"
},
"module": "promptflow.connections"
},
"serp_connection": {
"type": "SerpConnection",
"value": {
"api_key": "serpapi-api-key"
},
"module": "promptflow.connections"
},
"translate_connection": {
"type": "CustomConnection",
"value": {
"api_key": "<your-key>",
"api_endpoint": "https://api.cognitive.microsofttranslator.com/",
"api_region": "global"
},
"module": "promptflow.connections",
"secret_keys": [
"api_key"
]
},
"custom_connection": {
"type": "CustomConnection",
"value": {
"key1": "hey",
"key2": "val2"
},
"module": "promptflow.connections",
"secret_keys": [
"key1"
]
},
"custom_strong_type_connection": {
"type": "CustomConnection",
"value": {
"api_key": "<your-key>",
"api_base": "This is my first custom connection.",
"promptflow.connection.custom_type": "MyFirstConnection",
"promptflow.connection.module": "my_tool_package.connections"
},
"module": "promptflow.connections",
"secret_keys": [
"api_key"
]
},
"open_ai_connection": {
"type": "OpenAIConnection",
"value": {
"api_key": "<your-key>",
"organization": "<your-organization>"
},
"module": "promptflow.connections"
}
}
| promptflow/src/promptflow/dev-connections.json.example/0 | {
"file_path": "promptflow/src/promptflow/dev-connections.json.example",
"repo_id": "promptflow",
"token_count": 988
} | 7 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# pylint: disable=wrong-import-position
import json
import time
from promptflow._cli._pf._experiment import add_experiment_parser, dispatch_experiment_commands
from promptflow._cli._utils import _get_cli_activity_name
from promptflow._sdk._configuration import Configuration
from promptflow._sdk._telemetry import ActivityType, get_telemetry_logger, log_activity
from promptflow._sdk._telemetry.activity import update_activity_name
# Log the start time
start_time = time.perf_counter()
# E402 module level import not at top of file
import argparse # noqa: E402
import logging # noqa: E402
import sys # noqa: E402
from promptflow._cli._pf._config import add_config_parser, dispatch_config_commands # noqa: E402
from promptflow._cli._pf._connection import add_connection_parser, dispatch_connection_commands # noqa: E402
from promptflow._cli._pf._flow import add_flow_parser, dispatch_flow_commands # noqa: E402
from promptflow._cli._pf._run import add_run_parser, dispatch_run_commands # noqa: E402
from promptflow._cli._pf._tool import add_tool_parser, dispatch_tool_commands # noqa: E402
from promptflow._cli._pf.help import show_privacy_statement, show_welcome_message # noqa: E402
from promptflow._cli._pf._upgrade import add_upgrade_parser, upgrade_version # noqa: E402
from promptflow._cli._user_agent import USER_AGENT # noqa: E402
from promptflow._sdk._utils import ( # noqa: E402
get_promptflow_sdk_version,
print_pf_version,
setup_user_agent_to_operation_context,
)
from promptflow._utils.logger_utils import get_cli_sdk_logger # noqa: E402
# get logger for CLI
logger = get_cli_sdk_logger()
def run_command(args):
# Log the init finish time
init_finish_time = time.perf_counter()
try:
# --verbose, enable info logging
if hasattr(args, "verbose") and args.verbose:
for handler in logger.handlers:
handler.setLevel(logging.INFO)
# --debug, enable debug logging
if hasattr(args, "debug") and args.debug:
for handler in logger.handlers:
handler.setLevel(logging.DEBUG)
if args.version:
print_pf_version()
elif args.action == "flow":
dispatch_flow_commands(args)
elif args.action == "connection":
dispatch_connection_commands(args)
elif args.action == "run":
dispatch_run_commands(args)
elif args.action == "config":
dispatch_config_commands(args)
elif args.action == "tool":
dispatch_tool_commands(args)
elif args.action == "upgrade":
upgrade_version(args)
elif args.action == "experiment":
dispatch_experiment_commands(args)
except KeyboardInterrupt as ex:
logger.debug("Keyboard interrupt is captured.")
# raise UserErrorException(error=ex)
# Can't raise UserErrorException due to the code exit(1) of promptflow._cli._utils.py line 368.
raise ex
except SystemExit as ex: # some code directly call sys.exit, this is to make sure command metadata is logged
exit_code = ex.code if ex.code is not None else 1
logger.debug(f"Code directly call sys.exit with code {exit_code}")
# raise UserErrorException(error=ex)
# Can't raise UserErrorException due to the code exit(1) of promptflow._cli._utils.py line 368.
raise ex
except Exception as ex:
logger.debug(f"Command {args} execute failed. {str(ex)}")
# raise UserErrorException(error=ex)
# Can't raise UserErrorException due to the code exit(1) of promptflow._cli._utils.py line 368.
raise ex
finally:
# Log the invoke finish time
invoke_finish_time = time.perf_counter()
logger.info(
"Command ran in %.3f seconds (init: %.3f, invoke: %.3f)",
invoke_finish_time - start_time,
init_finish_time - start_time,
invoke_finish_time - init_finish_time,
)
def get_parser_args(argv):
parser = argparse.ArgumentParser(
prog="pf",
formatter_class=argparse.RawDescriptionHelpFormatter,
description="pf: manage prompt flow assets. Learn more: https://microsoft.github.io/promptflow.",
)
parser.add_argument(
"-v", "--version", dest="version", action="store_true", help="show current CLI version and exit"
)
subparsers = parser.add_subparsers()
add_upgrade_parser(subparsers)
add_flow_parser(subparsers)
add_connection_parser(subparsers)
add_run_parser(subparsers)
add_config_parser(subparsers)
add_tool_parser(subparsers)
if Configuration.get_instance().is_internal_features_enabled():
add_experiment_parser(subparsers)
return parser.prog, parser.parse_args(argv)
def entry(argv):
"""
Control plane CLI tools for promptflow.
"""
prog, args = get_parser_args(argv)
if hasattr(args, "user_agent"):
setup_user_agent_to_operation_context(args.user_agent)
logger = get_telemetry_logger()
activity_name = _get_cli_activity_name(cli=prog, args=args)
activity_name = update_activity_name(activity_name, args=args)
with log_activity(
logger,
activity_name,
activity_type=ActivityType.PUBLICAPI,
):
run_command(args)
def main():
"""Entrance of pf CLI."""
command_args = sys.argv[1:]
if len(command_args) == 1 and command_args[0] == "version":
version_dict = {"promptflow": get_promptflow_sdk_version()}
version_dict_string = (
json.dumps(version_dict, ensure_ascii=False, indent=2, sort_keys=True, separators=(",", ": ")) + "\n"
)
print(version_dict_string)
return
if len(command_args) == 0:
# print privacy statement & welcome message like azure-cli
show_privacy_statement()
show_welcome_message()
command_args.append("-h")
elif len(command_args) == 1:
# pf only has "pf --version" with 1 layer
if command_args[0] not in ["--version", "-v", "upgrade"]:
command_args.append("-h")
setup_user_agent_to_operation_context(USER_AGENT)
entry(command_args)
if __name__ == "__main__":
main()
| promptflow/src/promptflow/promptflow/_cli/_pf/entry.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/_pf/entry.py",
"repo_id": "promptflow",
"token_count": 2510
} | 8 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
{% for arg, typ in flow_inputs.items() %}
{{ arg }}:
type: {{ typ }}
{% endfor %}
outputs:
output:
type: string
reference: {% raw %}${{% endraw %}{{ main_node_name }}.output}
nodes:
{% for param_name, file in prompt_params.items() %}
- name: {{ param_name }}
type: prompt
source:
type: code
path: {{ file }}
inputs: # Please check the generated prompt inputs
{% for arg in prompt_inputs[param_name].keys() %}
{{ arg }}: ${inputs.{{ arg }}}
{% endfor %}
{% endfor %}
- name: {{ main_node_name }}
type: python
source:
type: code
path: {{ tool_file }}
inputs:
{# Below are node inputs link to flow inputs #}
{% for arg in func_params.keys() %}
{{ arg }}: ${inputs.{{ arg }}}
{% endfor %}
{# Below are node prompt template inputs from prompt nodes #}
{% for param_name, file in prompt_params.items() %}
{{ param_name }}: {% raw %}${{% endraw %}{{ param_name }}.output}
{% endfor %}
connection: custom_connection
{% if setup_sh or python_requirements_txt %}
environment:
{% if setup_sh %}
setup_sh: {{ setup_sh }}
{% endif %}
{% if python_requirements_txt %}
python_requirements_txt: {{ python_requirements_txt }}
{% endif %}
{% endif %}
# ---- promptflow/src/promptflow/promptflow/_cli/data/entry_flow/flow.dag.yaml.jinja2 ----
{"text": "Hello World!"}
# ---- promptflow/src/promptflow/promptflow/_cli/data/standard_flow/data.jsonl ----
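The `data.jsonl` sample above holds one JSON record per line, which is how batch-run inputs are fed to a flow. A small sketch of how such a file is consumed, treating each non-empty line as an independent JSON document:

```python
import io
import json


def read_jsonl(stream):
    # Each non-empty line of a JSONL stream is a standalone JSON object.
    return [json.loads(line) for line in stream if line.strip()]


# io.StringIO stands in for an open file handle here.
rows = read_jsonl(io.StringIO('{"text": "Hello World!"}\n'))
```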
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import asyncio
import json
from contextvars import ContextVar
from datetime import datetime, timezone
from types import GeneratorType
from typing import Any, Dict, List, Mapping, Optional, Union
from promptflow._core._errors import FlowOutputUnserializable, RunRecordNotFound, ToolCanceledError
from promptflow._core.log_manager import NodeLogManager
from promptflow._core.thread_local_singleton import ThreadLocalSingleton
from promptflow._utils.dataclass_serializer import serialize
from promptflow._utils.exception_utils import ExceptionPresenter
from promptflow._utils.logger_utils import flow_logger
from promptflow._utils.multimedia_utils import default_json_encoder
from promptflow._utils.openai_metrics_calculator import OpenAIMetricsCalculator
from promptflow._utils.run_tracker_utils import _deep_copy_and_extract_items_from_generator_proxy
from promptflow.contracts.run_info import FlowRunInfo, RunInfo, Status
from promptflow.contracts.run_mode import RunMode
from promptflow.contracts.tool import ConnectionType
from promptflow.exceptions import ErrorTarget
from promptflow.storage import AbstractRunStorage
from promptflow.storage._run_storage import DummyRunStorage
class RunTracker(ThreadLocalSingleton):
RUN_CONTEXT_NAME = "CurrentRun"
CONTEXT_VAR_NAME = "RunTracker"
context_var = ContextVar(CONTEXT_VAR_NAME, default=None)
@staticmethod
def init_dummy() -> "RunTracker":
return RunTracker(DummyRunStorage())
def __init__(self, run_storage: AbstractRunStorage, run_mode: RunMode = RunMode.Test, node_log_manager=None):
self._node_runs: Dict[str, RunInfo] = {}
self._flow_runs: Dict[str, FlowRunInfo] = {}
self._current_run_id = ""
self._run_context = ContextVar(self.RUN_CONTEXT_NAME, default="")
self._storage = run_storage
self._debug = True # TODO: Make this configurable
self.node_log_manager = node_log_manager or NodeLogManager()
self._has_failed_root_run = False
self._run_mode = run_mode
self._allow_generator_types = False
@property
def allow_generator_types(self):
return self._allow_generator_types
@allow_generator_types.setter
def allow_generator_types(self, value: bool):
self._allow_generator_types = value
@property
def node_run_list(self):
        # Wrap in list() so node_run_list is a new list object,
        # thereby avoiding iteration over a dictionary that might be updated by another thread.
return list(self._node_runs.values())
@property
def flow_run_list(self):
        # Wrap in list() so flow_run_list is a new list object,
        # thereby avoiding iteration over a dictionary that might be updated by another thread.
return list(self._flow_runs.values())
def set_current_run_in_context(self, run_id: str):
self._run_context.set(run_id)
def get_current_run_in_context(self) -> str:
return self._run_context.get()
def start_flow_run(
self,
flow_id,
root_run_id,
run_id,
parent_run_id="",
inputs=None,
index=None,
variant_id="",
) -> FlowRunInfo:
"""Create a flow run and save to run storage on demand."""
run_info = FlowRunInfo(
run_id=run_id,
status=Status.Running,
error=None,
inputs=inputs,
output=None,
metrics=None,
request=None,
parent_run_id=parent_run_id,
root_run_id=root_run_id,
source_run_id=None,
flow_id=flow_id,
start_time=datetime.utcnow(),
end_time=None,
index=index,
variant_id=variant_id,
)
self.persist_flow_run(run_info)
self._flow_runs[run_id] = run_info
self._current_run_id = run_id
return run_info
def start_node_run(
self,
node,
flow_run_id,
parent_run_id,
run_id,
index,
):
run_info = RunInfo(
node=node,
run_id=run_id,
flow_run_id=flow_run_id,
status=Status.Running,
inputs=None,
output=None,
metrics=None,
error=None,
parent_run_id=parent_run_id,
start_time=datetime.utcnow(),
end_time=None,
)
self._node_runs[run_id] = run_info
self._current_run_id = run_id
self.set_current_run_in_context(run_id)
self.node_log_manager.set_node_context(run_id, node, index)
return run_info
def bypass_node_run(
self,
node,
flow_run_id,
parent_run_id,
run_id,
index,
variant_id,
):
run_info = RunInfo(
node=node,
run_id=run_id,
flow_run_id=flow_run_id,
parent_run_id=parent_run_id,
status=Status.Bypassed,
inputs=None,
output=None,
metrics=None,
error=None,
start_time=datetime.utcnow(),
end_time=datetime.utcnow(),
result=None,
index=index,
variant_id=variant_id,
api_calls=[],
)
self._node_runs[run_id] = run_info
return run_info
def _flow_run_postprocess(self, run_info: FlowRunInfo, output, ex: Optional[Exception]):
if output:
try:
self._assert_flow_output_serializable(output)
except Exception as e:
output, ex = None, e
self._common_postprocess(run_info, output, ex)
def _update_flow_run_info_with_node_runs(self, run_info: FlowRunInfo):
run_id = run_info.run_id
child_run_infos = self.collect_child_node_runs(run_id)
run_info.system_metrics = run_info.system_metrics or {}
run_info.system_metrics.update(self.collect_metrics(child_run_infos, self.OPENAI_AGGREGATE_METRICS))
# TODO: Refactor Tracer to support flow level tracing,
# then we can remove the hard-coded root level api_calls here.
# It has to be a list for UI backward compatibility.
start_timestamp = run_info.start_time.astimezone(timezone.utc).timestamp() if run_info.start_time else None
end_timestamp = run_info.end_time.astimezone(timezone.utc).timestamp() if run_info.end_time else None
# This implementation deep copies the inputs and output of the flow run, and extracts items from GeneratorProxy.
# So that both image and generator will be supported.
# It's a short term solution, while the long term one will be implemented in the next generation of Tracer.
inputs = None
output = None
try:
inputs = _deep_copy_and_extract_items_from_generator_proxy(run_info.inputs)
output = _deep_copy_and_extract_items_from_generator_proxy(run_info.output)
except Exception as e:
flow_logger.warning(
                f"Failed to serialize inputs or output for flow run because of {e}. "
                "The inputs and output fields in api_calls will be None."
)
run_info.api_calls = [
{
"name": "flow",
"node_name": "flow",
"type": "Flow",
"start_time": start_timestamp,
"end_time": end_timestamp,
"children": self._collect_traces_from_nodes(run_id),
"system_metrics": run_info.system_metrics,
"inputs": inputs,
"output": output,
"error": run_info.error,
}
]
def _node_run_postprocess(self, run_info: RunInfo, output, ex: Optional[Exception]):
run_id = run_info.run_id
self.set_openai_metrics(run_id)
logs = self.node_log_manager.get_logs(run_id)
run_info.logs = logs
self.node_log_manager.clear_node_context(run_id)
if run_info.inputs:
run_info.inputs = self._ensure_inputs_is_json_serializable(run_info.inputs, run_info.node)
if output is not None:
msg = f"Output of {run_info.node} is not json serializable, use str to store it."
output = self._ensure_serializable_value(output, msg)
self._common_postprocess(run_info, output, ex)
def _common_postprocess(self, run_info, output, ex):
if output is not None:
# Duplicated fields for backward compatibility.
run_info.result = output
run_info.output = output
if ex is not None:
self._enrich_run_info_with_exception(run_info=run_info, ex=ex)
else:
run_info.status = Status.Completed
run_info.end_time = datetime.utcnow()
if not isinstance(run_info.start_time, datetime):
flow_logger.warning(
f"Run start time {run_info.start_time} for {run_info.run_id} is not a datetime object, "
f"got {run_info.start_time}, type={type(run_info.start_time)}."
)
else:
duration = (run_info.end_time - run_info.start_time).total_seconds()
run_info.system_metrics = run_info.system_metrics or {}
run_info.system_metrics["duration"] = duration
def cancel_node_runs(self, msg: str, flow_run_id):
node_runs = self.collect_node_runs(flow_run_id)
for node_run_info in node_runs:
if node_run_info.status != Status.Running:
continue
msg = msg.rstrip(".") # Avoid duplicated "." in the end of the message.
err = ToolCanceledError(
message_format="Tool execution is canceled because of the error: {msg}.",
msg=msg,
target=ErrorTarget.EXECUTOR,
)
self.end_run(node_run_info.run_id, ex=err)
node_run_info.status = Status.Canceled
self.persist_node_run(node_run_info)
def end_run(
self,
run_id: str,
*,
result: Optional[dict] = None,
ex: Optional[Exception] = None,
traces: Optional[List] = None,
):
run_info = self._flow_runs.get(run_id) or self._node_runs.get(run_id)
if run_info is None:
raise RunRecordNotFound(
message_format=(
"Run record with ID '{run_id}' was not tracked in promptflow execution. "
"Please contact support for further assistance."
),
target=ErrorTarget.RUN_TRACKER,
run_id=run_id,
)
# If the run is already canceled, do nothing.
if run_info.status == Status.Canceled:
return run_info
if isinstance(run_info, FlowRunInfo):
self._flow_run_postprocess(run_info, result, ex)
if traces:
run_info.api_calls = traces
elif isinstance(run_info, RunInfo):
run_info.api_calls = traces
self._node_run_postprocess(run_info, result, ex)
return run_info
def _ensure_serializable_value(self, val, warning_msg: Optional[str] = None):
if ConnectionType.is_connection_value(val):
return ConnectionType.serialize_conn(val)
if self.allow_generator_types and isinstance(val, GeneratorType):
return str(val)
try:
json.dumps(val, default=default_json_encoder)
return val
except Exception:
if not warning_msg:
raise
flow_logger.warning(warning_msg)
return repr(val)
def _ensure_inputs_is_json_serializable(self, inputs: dict, node_name: str) -> dict:
return {
k: self._ensure_serializable_value(
v, f"Input '{k}' of {node_name} is not json serializable, use str to store it."
)
for k, v in inputs.items()
}
def _assert_flow_output_serializable(self, output: Any) -> Any:
def _wrap_serializable_error(value):
try:
return self._ensure_serializable_value(value)
except Exception as e:
# If a specific key-value pair is not serializable, raise an exception with the key.
error_type_and_message = f"({e.__class__.__name__}) {e}"
message_format = (
"The output '{output_name}' for flow is incorrect. The output value is not JSON serializable. "
"JSON dump failed: {error_type_and_message}. Please verify your flow output and "
                    "make sure the value is serializable."
)
raise FlowOutputUnserializable(
message_format=message_format,
target=ErrorTarget.FLOW_EXECUTOR,
output_name=k,
error_type_and_message=error_type_and_message,
) from e
# support primitive outputs in eager mode
if not isinstance(output, dict):
return _wrap_serializable_error(output)
serializable_output = {}
for k, v in output.items():
serializable_output[k] = _wrap_serializable_error(v)
return serializable_output
def _enrich_run_info_with_exception(self, run_info: Union[RunInfo, FlowRunInfo], ex: Exception):
"""Update exception details into run info."""
        # Update the status to Canceled when the run terminates because of KeyboardInterrupt or CancelledError.
        if isinstance(ex, (KeyboardInterrupt, asyncio.CancelledError)):
run_info.status = Status.Canceled
else:
run_info.error = ExceptionPresenter.create(ex).to_dict(include_debug_info=self._debug)
run_info.status = Status.Failed
def collect_all_run_infos_as_dicts(self) -> Mapping[str, List[Mapping[str, Any]]]:
flow_runs = self.flow_run_list
node_runs = self.node_run_list
return {
"flow_runs": [serialize(run) for run in flow_runs],
"node_runs": [serialize(run) for run in node_runs],
}
def collect_node_runs(self, flow_run_id: Optional[str] = None) -> List[RunInfo]:
"""If flow_run_id is None, return all node runs."""
if flow_run_id:
return [run_info for run_info in self.node_run_list if run_info.flow_run_id == flow_run_id]
return [run_info for run_info in self.node_run_list]
def collect_child_node_runs(self, parent_run_id: str) -> List[RunInfo]:
return [run_info for run_info in self.node_run_list if run_info.parent_run_id == parent_run_id]
def ensure_run_info(self, run_id: str) -> Union[RunInfo, FlowRunInfo]:
run_info = self._node_runs.get(run_id) or self._flow_runs.get(run_id)
if run_info is None:
raise RunRecordNotFound(
message_format=(
"Run record with ID '{run_id}' was not tracked in promptflow execution. "
"Please contact support for further assistance."
),
target=ErrorTarget.RUN_TRACKER,
run_id=run_id,
)
return run_info
def set_inputs(self, run_id: str, inputs: Mapping[str, Any]):
run_info = self.ensure_run_info(run_id)
run_info.inputs = inputs
def set_openai_metrics(self, run_id: str):
# TODO: Provide a common implementation for different internal metrics
run_info = self.ensure_run_info(run_id)
calls = run_info.api_calls or []
total_metrics = {}
calculator = OpenAIMetricsCalculator(flow_logger)
for call in calls:
metrics = calculator.get_openai_metrics_from_api_call(call)
calculator.merge_metrics_dict(total_metrics, metrics)
run_info.system_metrics = run_info.system_metrics or {}
run_info.system_metrics.update(total_metrics)
def _collect_traces_from_nodes(self, run_id):
child_run_infos = self.collect_child_node_runs(run_id)
traces = []
for node_run_info in child_run_infos:
traces.extend(node_run_info.api_calls or [])
return traces
OPENAI_AGGREGATE_METRICS = ["prompt_tokens", "completion_tokens", "total_tokens"]
    def collect_metrics(self, run_infos: List[RunInfo], aggregate_metrics: Optional[List[str]] = None):
        if not aggregate_metrics:
return {}
total_metrics = {}
for run_info in run_infos:
if not run_info.system_metrics:
continue
for metric in aggregate_metrics:
total_metrics[metric] = total_metrics.get(metric, 0) + run_info.system_metrics.get(metric, 0)
return total_metrics
def get_run(self, run_id):
return self._node_runs.get(run_id) or self._flow_runs.get(run_id)
def persist_node_run(self, run_info: RunInfo):
self._storage.persist_node_run(run_info)
def persist_selected_node_runs(self, run_info: FlowRunInfo, node_names: List[str]):
"""
Persists the node runs for the specified node names.
:param run_info: The flow run information.
:type run_info: FlowRunInfo
:param node_names: The names of the nodes to persist.
:type node_names: List[str]
:returns: None
"""
run_id = run_info.run_id
selected_node_run_info = (
run_info for run_info in self.collect_child_node_runs(run_id) if run_info.node in node_names
)
for node_run_info in selected_node_run_info:
self.persist_node_run(node_run_info)
def persist_flow_run(self, run_info: FlowRunInfo):
self._storage.persist_flow_run(run_info)
def get_status_summary(self, run_id: str):
node_run_infos = self.collect_node_runs(run_id)
status_summary = {}
for run_info in node_run_infos:
node_name = run_info.node
if run_info.index is not None:
                # Only consider Completed, Bypassed and Failed, because the UX only supports these three statuses.
if run_info.status in (Status.Completed, Status.Bypassed, Status.Failed):
node_status_key = f"__pf__.nodes.{node_name}.{run_info.status.value.lower()}"
status_summary[node_status_key] = status_summary.setdefault(node_status_key, 0) + 1
# For reduce node, the index is None.
else:
status_summary[f"__pf__.nodes.{node_name}.completed"] = 1 if run_info.status == Status.Completed else 0
# Runtime will start root flow run with run_id == root_run_id,
# line flow run will have run id f"{root_run_id}_{line_number}"
# We filter out root flow run accordingly.
line_flow_run_infos = [
flow_run_info
for flow_run_info in self.flow_run_list
if flow_run_info.root_run_id == run_id and flow_run_info.run_id != run_id
]
total_lines = len(line_flow_run_infos)
completed_lines = len(
[flow_run_info for flow_run_info in line_flow_run_infos if flow_run_info.status == Status.Completed]
)
status_summary["__pf__.lines.completed"] = completed_lines
status_summary["__pf__.lines.failed"] = total_lines - completed_lines
return status_summary
def persist_status_summary(self, status_summary: Dict[str, int], run_id: str):
self._storage.persist_status_summary(status_summary, run_id)
# ---- promptflow/src/promptflow/promptflow/_core/run_tracker.py ----
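`RunTracker.collect_metrics` above sums OpenAI token counters across child node runs. The aggregation can be sketched standalone, with plain dicts standing in for each run's `system_metrics`:

```python
OPENAI_AGGREGATE_METRICS = ["prompt_tokens", "completion_tokens", "total_tokens"]


def collect_metrics(system_metrics_list, aggregate_metrics):
    # Sum each requested metric across runs, skipping runs without metrics,
    # mirroring RunTracker.collect_metrics above.
    if not aggregate_metrics:
        return {}
    total = {}
    for metrics in system_metrics_list:
        if not metrics:
            continue
        for name in aggregate_metrics:
            total[name] = total.get(name, 0) + metrics.get(name, 0)
    return total


totals = collect_metrics(
    [
        {"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15},
        None,  # a run with no system_metrics is skipped
        {"prompt_tokens": 3, "total_tokens": 3},
    ],
    OPENAI_AGGREGATE_METRICS,
)
```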
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import datetime
from enum import Enum
from typing import List, Optional, Union
from sqlalchemy import TEXT, Boolean, Column, Index
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base
from promptflow._sdk._constants import EXPERIMENT_CREATED_ON_INDEX_NAME, EXPERIMENT_TABLE_NAME, ListViewType
from promptflow._sdk._errors import ExperimentExistsError, ExperimentNotFoundError
from .retry import sqlite_retry
from .session import mgmt_db_session
from ...exceptions import UserErrorException, ErrorTarget
Base = declarative_base()
class Experiment(Base):
__tablename__ = EXPERIMENT_TABLE_NAME
name = Column(TEXT, primary_key=True)
created_on = Column(TEXT, nullable=False) # ISO8601("YYYY-MM-DD HH:MM:SS.SSS"), string
status = Column(TEXT, nullable=False)
description = Column(TEXT) # updated by users
properties = Column(TEXT)
archived = Column(Boolean, default=False)
nodes = Column(TEXT) # json(list of json) string
node_runs = Column(TEXT) # json(list of json) string
# NOTE: please always add columns to the tail, so that we can easily handle schema changes;
# also don't forget to update `__pf_schema_version__` when you change the schema
    # NOTE: keep in mind that we need to handle runs with legacy schemas well;
    # normally new fields will be `None`, remember to handle them properly
last_start_time = Column(TEXT) # ISO8601("YYYY-MM-DD HH:MM:SS.SSS"), string
last_end_time = Column(TEXT) # ISO8601("YYYY-MM-DD HH:MM:SS.SSS"), string
data = Column(TEXT) # json string of data (list of dict)
inputs = Column(TEXT) # json string of inputs (list of dict)
__table_args__ = (Index(EXPERIMENT_CREATED_ON_INDEX_NAME, "created_on"),)
# schema version, increase the version number when you change the schema
__pf_schema_version__ = "1"
@sqlite_retry
def dump(self) -> None:
with mgmt_db_session() as session:
try:
session.add(self)
session.commit()
except IntegrityError as e:
                # catch "sqlite3.IntegrityError: UNIQUE constraint failed" to raise ExperimentExistsError,
                # otherwise raise the original error
if "UNIQUE constraint failed" not in str(e):
raise
raise ExperimentExistsError(f"Experiment name {self.name!r} already exists.")
except Exception as e:
raise UserErrorException(target=ErrorTarget.CONTROL_PLANE_SDK, message=str(e), error=e)
@sqlite_retry
def archive(self) -> None:
if self.archived is True:
return
self.archived = True
with mgmt_db_session() as session:
session.query(Experiment).filter(Experiment.name == self.name).update({"archived": self.archived})
session.commit()
@sqlite_retry
def restore(self) -> None:
if self.archived is False:
return
self.archived = False
with mgmt_db_session() as session:
session.query(Experiment).filter(Experiment.name == self.name).update({"archived": self.archived})
session.commit()
@sqlite_retry
def update(
self,
*,
status: Optional[str] = None,
description: Optional[str] = None,
last_start_time: Optional[Union[str, datetime.datetime]] = None,
last_end_time: Optional[Union[str, datetime.datetime]] = None,
node_runs: Optional[str] = None,
) -> None:
update_dict = {}
if status is not None:
self.status = status
update_dict["status"] = self.status
if description is not None:
self.description = description
update_dict["description"] = self.description
if last_start_time is not None:
self.last_start_time = last_start_time if isinstance(last_start_time, str) else last_start_time.isoformat()
update_dict["last_start_time"] = self.last_start_time
if last_end_time is not None:
self.last_end_time = last_end_time if isinstance(last_end_time, str) else last_end_time.isoformat()
update_dict["last_end_time"] = self.last_end_time
if node_runs is not None:
self.node_runs = node_runs
update_dict["node_runs"] = self.node_runs
with mgmt_db_session() as session:
session.query(Experiment).filter(Experiment.name == self.name).update(update_dict)
session.commit()
@staticmethod
@sqlite_retry
def get(name: str) -> "Experiment":
with mgmt_db_session() as session:
run_info = session.query(Experiment).filter(Experiment.name == name).first()
if run_info is None:
raise ExperimentNotFoundError(f"Experiment {name!r} cannot be found.")
return run_info
@staticmethod
@sqlite_retry
def list(max_results: Optional[int], list_view_type: ListViewType) -> List["Experiment"]:
with mgmt_db_session() as session:
basic_statement = session.query(Experiment)
# filter by archived
list_view_type = list_view_type.value if isinstance(list_view_type, Enum) else list_view_type
if list_view_type == ListViewType.ACTIVE_ONLY.value:
basic_statement = basic_statement.filter(Experiment.archived == False) # noqa: E712
elif list_view_type == ListViewType.ARCHIVED_ONLY.value:
basic_statement = basic_statement.filter(Experiment.archived == True) # noqa: E712
basic_statement = basic_statement.order_by(Experiment.created_on.desc())
if isinstance(max_results, int):
return [result for result in basic_statement.limit(max_results)]
else:
return [result for result in basic_statement.all()]
@staticmethod
@sqlite_retry
def delete(name: str) -> None:
with mgmt_db_session() as session:
result = session.query(Experiment).filter(Experiment.name == name).first()
if result is not None:
session.delete(result)
session.commit()
else:
raise ExperimentNotFoundError(f"Experiment {name!r} cannot be found.")
# ---- promptflow/src/promptflow/promptflow/_sdk/_orm/experiment.py ----
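`Experiment.update` above stores `last_start_time`/`last_end_time` as ISO8601 strings but accepts either a string or a `datetime`. That normalization step, isolated:

```python
import datetime
from typing import Union


def to_iso8601(value: Union[str, datetime.datetime]) -> str:
    # Strings are assumed to already be ISO8601-formatted; datetimes are converted,
    # matching the `isinstance` branch in Experiment.update above.
    return value if isinstance(value, str) else value.isoformat()


stamp = to_iso8601(datetime.datetime(2024, 1, 2, 3, 4, 5))
```

Storing text timestamps keeps the SQLite schema simple, at the cost of parsing on read.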
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import typing
from dataclasses import dataclass
from flask_restx import fields
from promptflow._constants import (
SpanAttributeFieldName,
SpanContextFieldName,
SpanFieldName,
SpanResourceAttributesFieldName,
SpanResourceFieldName,
SpanStatusFieldName,
)
from promptflow._sdk._constants import PFS_MODEL_DATETIME_FORMAT
from promptflow._sdk._service import Namespace, Resource
from promptflow._sdk._service.utils.utils import get_client_from_request
api = Namespace("spans", description="Span Management")
# parsers for query parameters
list_span_parser = api.parser()
list_span_parser.add_argument("session", type=str, required=False)
# use @dataclass for strong type
@dataclass
class ListSpanParser:
session_id: typing.Optional[str] = None
@staticmethod
def from_request() -> "ListSpanParser":
args = list_span_parser.parse_args()
return ListSpanParser(
session_id=args.session,
)
# span models, for strong type support in Swagger
context_model = api.model(
"Context",
{
SpanContextFieldName.TRACE_ID: fields.String(required=True),
SpanContextFieldName.SPAN_ID: fields.String(required=True),
SpanContextFieldName.TRACE_STATE: fields.String(required=True),
},
)
status_model = api.model(
"Status",
{
SpanStatusFieldName.STATUS_CODE: fields.String(required=True),
},
)
attributes_model = api.model(
"Attributes",
{
SpanAttributeFieldName.FRAMEWORK: fields.String(required=True, default="promptflow"),
SpanAttributeFieldName.SPAN_TYPE: fields.String(required=True, default="Function"),
SpanAttributeFieldName.FUNCTION: fields.String(required=True),
SpanAttributeFieldName.INPUTS: fields.String(required=True),
SpanAttributeFieldName.OUTPUT: fields.String(required=True),
SpanAttributeFieldName.SESSION_ID: fields.String(required=True),
SpanAttributeFieldName.PATH: fields.String,
SpanAttributeFieldName.FLOW_ID: fields.String,
SpanAttributeFieldName.RUN: fields.String,
SpanAttributeFieldName.EXPERIMENT: fields.String,
},
)
resource_attributes_model = api.model(
"ResourceAttributes",
{
SpanResourceAttributesFieldName.SERVICE_NAME: fields.String(default="promptflow"),
},
)
resource_model = api.model(
"Resource",
{
SpanResourceFieldName.ATTRIBUTES: fields.Nested(resource_attributes_model, required=True),
SpanResourceFieldName.SCHEMA_URL: fields.String,
},
)
span_model = api.model(
"Span",
{
SpanFieldName.NAME: fields.String(required=True),
SpanFieldName.CONTEXT: fields.Nested(context_model, required=True),
SpanFieldName.KIND: fields.String(required=True),
SpanFieldName.PARENT_ID: fields.String,
SpanFieldName.START_TIME: fields.DateTime(dt_format=PFS_MODEL_DATETIME_FORMAT),
SpanFieldName.END_TIME: fields.DateTime(dt_format=PFS_MODEL_DATETIME_FORMAT),
SpanFieldName.STATUS: fields.Nested(status_model),
SpanFieldName.ATTRIBUTES: fields.Nested(attributes_model, required=True),
SpanFieldName.EVENTS: fields.List(fields.String),
SpanFieldName.LINKS: fields.List(fields.String),
SpanFieldName.RESOURCE: fields.Nested(resource_model, required=True),
},
)
@api.route("/list")
class Spans(Resource):
@api.doc(description="List spans")
@api.marshal_list_with(span_model)
@api.response(code=200, description="Spans")
def get(self):
from promptflow import PFClient
client: PFClient = get_client_from_request()
args = ListSpanParser.from_request()
spans = client._traces.list_spans(
session_id=args.session_id,
)
return [span._content for span in spans]
# ---- promptflow/src/promptflow/promptflow/_sdk/_service/apis/span.py ----
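`ListSpanParser` above wraps parsed query arguments in a dataclass for strong typing. The same pattern without `flask_restx`, taking a plain dict of already-parsed args (the `from_args` helper here is illustrative, not part of the real API):

```python
import typing
from dataclasses import dataclass


@dataclass
class ListSpanParser:
    session_id: typing.Optional[str] = None

    @staticmethod
    def from_args(args: dict) -> "ListSpanParser":
        # "session" is the query-parameter name; map it onto the typed field,
        # defaulting to None when absent.
        return ListSpanParser(session_id=args.get("session"))


parsed = ListSpanParser.from_args({"session": "abc123"})
```

The dataclass gives downstream code one typed object instead of loose request arguments.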
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import flask
from jinja2 import Template
from pathlib import Path
from flask import Blueprint, request, url_for, current_app as app
def construct_staticweb_blueprint(static_folder):
"""Construct static web blueprint."""
staticweb_blueprint = Blueprint("staticweb_blueprint", __name__, static_folder=static_folder)
@staticweb_blueprint.route("/", methods=["GET", "POST"])
def home():
"""Show the home page."""
index_path = Path(static_folder) / "index.html" if static_folder else None
if index_path and index_path.exists():
            template = Template(index_path.read_text(encoding="UTF-8"))
return flask.render_template(template, url_for=url_for)
else:
return "<h1>Welcome to promptflow app.</h1>"
@staticweb_blueprint.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
def notfound(path):
rules = {rule.rule: rule.methods for rule in app.url_map.iter_rules()}
if path not in rules or request.method not in rules[path]:
unsupported_message = (
f"The requested api {path!r} with {request.method} is not supported by current app, "
f"if you entered the URL manually please check your spelling and try again."
)
return unsupported_message, 404
return staticweb_blueprint
# ---- promptflow/src/promptflow/promptflow/_sdk/_serving/blueprint/static_web_blueprint.py ----
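The catch-all route above builds a map of registered URL rules to their allowed methods and rejects unknown path/method pairs with a 404. The lookup can be sketched without Flask as a dict of path to method set:

```python
def check_route(rules, path, method):
    # Mirrors the blueprint's fallback: unknown path or disallowed method -> message + 404.
    if path not in rules or method not in rules[path]:
        message = (
            f"The requested api {path!r} with {method} is not supported by current app, "
            f"if you entered the URL manually please check your spelling and try again."
        )
        return message, 404
    return "ok", 200


# Hypothetical rule map; a real app derives this from app.url_map.iter_rules().
rules = {"/score": {"GET", "POST"}}
body, status = check_route(rules, "/health", "GET")
```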
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import os
import time
import base64
import zlib
from flask import jsonify, request
from promptflow._sdk._serving._errors import (
JsonPayloadRequiredForMultipleInputFields,
MissingRequiredFlowInput,
NotAcceptable,
)
from promptflow._utils.exception_utils import ErrorResponse, ExceptionPresenter
from promptflow.contracts.flow import Flow as FlowContract
from promptflow.exceptions import ErrorTarget
def load_request_data(flow, raw_data, logger):
try:
data = json.loads(raw_data)
except Exception:
        input = None
        if len(flow.inputs) > 1:
            # A non-JSON payload can only be mapped onto a flow with exactly one input field;
            # reject the request otherwise.
message = (
                "Promptflow executor received non-JSON data, but the flow has more than one input field; "
                "please use JSON request data instead."
)
raise JsonPayloadRequiredForMultipleInputFields(message, target=ErrorTarget.SERVING_APP)
        if isinstance(raw_data, (bytes, bytearray)):
input = str(raw_data, "UTF-8")
elif isinstance(raw_data, str):
input = raw_data
default_key = list(flow.inputs.keys())[0]
logger.debug(f"Promptflow executor received non json data: {input}, default key: {default_key}")
data = {default_key: input}
return data
def validate_request_data(flow, data):
"""Validate required request data is provided."""
    # TODO: Note that flow inputs currently have no real defaults; every input's default is None.
required_inputs = [k for k, v in flow.inputs.items() if v.default is None]
missing_inputs = [k for k in required_inputs if k not in data]
if missing_inputs:
raise MissingRequiredFlowInput(
f"Required input fields {missing_inputs} are missing in request data {data!r}",
target=ErrorTarget.SERVING_APP,
)
def streaming_response_required():
"""Check if streaming response is required."""
return "text/event-stream" in request.accept_mimetypes.values()
def get_sample_json(project_path, logger):
# load swagger sample if exists
sample_file = os.path.join(project_path, "samples.json")
if not os.path.exists(sample_file):
return None
logger.info("Promptflow sample file detected.")
with open(sample_file, "r", encoding="UTF-8") as f:
sample = json.load(f)
return sample
# get evaluation only fields
def get_output_fields_to_remove(flow: FlowContract, logger) -> list:
"""get output fields to remove."""
included_outputs = os.getenv("PROMPTFLOW_RESPONSE_INCLUDED_FIELDS", None)
if included_outputs:
logger.info(f"Response included fields: {included_outputs}")
res = json.loads(included_outputs)
return [k for k, v in flow.outputs.items() if k not in res]
return [k for k, v in flow.outputs.items() if v.evaluation_only]
def handle_error_to_response(e, logger):
presenter = ExceptionPresenter.create(e)
logger.error(f"Promptflow serving app error: {presenter.to_dict()}")
logger.error(f"Promptflow serving error traceback: {presenter.formatted_traceback}")
resp = ErrorResponse(presenter.to_dict())
response_code = resp.response_code
# The http response code for NotAcceptable is 406.
# Currently the error framework does not allow response code overriding,
# we add a check here to override the response code.
# TODO: Consider how to embed this logic into the error framework.
if isinstance(e, NotAcceptable):
response_code = 406
return jsonify(resp.to_simplified_dict()), response_code
def get_pf_serving_env(env_key: str):
if len(env_key) == 0:
return None
value = os.getenv(env_key, None)
if value is None and env_key.startswith("PROMPTFLOW_"):
value = os.getenv(env_key.replace("PROMPTFLOW_", "PF_"), None)
return value
def get_cost_up_to_now(start_time: float):
return (time.time() - start_time) * 1000
def enable_monitoring(func):
func._enable_monitoring = True
return func
def normalize_connection_name(connection_name: str):
return connection_name.replace(" ", "_")
def decode_dict(data: str) -> dict:
# str -> bytes
data = data.encode()
zipped_conns = base64.b64decode(data)
# gzip decode
conns_data = zlib.decompress(zipped_conns, 16 + zlib.MAX_WBITS)
return json.loads(conns_data.decode())
def encode_dict(data: dict) -> str:
# json encode
data = json.dumps(data)
# gzip compress
gzip_compress = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS | 16)
zipped_data = gzip_compress.compress(data.encode()) + gzip_compress.flush()
# base64 encode
b64_data = base64.b64encode(zipped_data)
# bytes -> str
return b64_data.decode()
| promptflow/src/promptflow/promptflow/_sdk/_serving/utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/utils.py",
"repo_id": "promptflow",
"token_count": 1868
} | 15 |
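The `encode_dict`/`decode_dict` helpers above pack a JSON dictionary through gzip compression and base64 so it can travel as a single string (e.g. in an environment variable). A minimal stand-alone sketch of the same round trip — the function bodies mirror the helpers above, while the payload is a hypothetical example:

```python
import base64
import json
import zlib


def encode_dict(data: dict) -> str:
    """JSON-encode, gzip-compress, then base64-encode a dictionary."""
    raw = json.dumps(data).encode()
    # wbits = MAX_WBITS | 16 selects a gzip container, matching the decoder below.
    compressor = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS | 16)
    zipped = compressor.compress(raw) + compressor.flush()
    return base64.b64encode(zipped).decode()


def decode_dict(data: str) -> dict:
    """Reverse of encode_dict: base64-decode, gzip-decompress, JSON-decode."""
    zipped = base64.b64decode(data.encode())
    raw = zlib.decompress(zipped, 16 + zlib.MAX_WBITS)
    return json.loads(raw.decode())


payload = {"my_connection": {"type": "AzureOpenAI", "api_key": "***"}}  # hypothetical
assert decode_dict(encode_dict(payload)) == payload
```

The `16 + zlib.MAX_WBITS` window value tells zlib to expect a gzip header; using plain `zlib.MAX_WBITS` on the decode side would fail against the gzip container the encoder produces.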
import json
import os
from pathlib import Path
from PIL import Image
import streamlit as st
from streamlit_quill import st_quill
from promptflow._sdk._serving.flow_invoker import FlowInvoker
from utils import dict_iter_render_message, parse_list_from_html, parse_image_content
invoker = None
{% set indent_level = 4 %}
def start():
def clear_chat() -> None:
st.session_state.messages = []
def render_message(role, message_items):
with st.chat_message(role):
dict_iter_render_message(message_items)
def show_conversation() -> None:
if "messages" not in st.session_state:
st.session_state.messages = []
st.session_state.history = []
if st.session_state.messages:
for role, message_items in st.session_state.messages:
render_message(role, message_items)
def get_chat_history_from_session():
if "history" in st.session_state:
return st.session_state.history
return []
def submit(**kwargs) -> None:
st.session_state.messages.append(("user", kwargs))
session_state_history = dict()
session_state_history.update({"inputs": kwargs})
with container:
render_message("user", kwargs)
# Force append chat history to kwargs
{% if is_chat_flow %}
{{ ' ' * indent_level * 2 }}response = run_flow({'{{chat_history_input_name}}': get_chat_history_from_session(), **kwargs})
{% else %}
{{ ' ' * indent_level * 2 }}response = run_flow(kwargs)
{% endif %}
st.session_state.messages.append(("assistant", response))
session_state_history.update({"outputs": response})
st.session_state.history.append(session_state_history)
with container:
render_message("assistant", response)
def run_flow(data: dict) -> dict:
global invoker
if not invoker:
{% if flow_path %}
{{ ' ' * indent_level * 3 }}flow = Path('{{flow_path}}')
{{ ' ' * indent_level * 3 }}dump_path = Path('{{flow_path}}').parent
{% else %}
{{ ' ' * indent_level * 3 }}flow = Path(__file__).parent / "flow"
{{ ' ' * indent_level * 3 }}dump_path = flow.parent
{% endif %}
if flow.is_dir():
os.chdir(flow)
else:
os.chdir(flow.parent)
invoker = FlowInvoker(flow, connection_provider="local", dump_to=dump_path)
result = invoker.invoke(data)
return result
image = Image.open(Path(__file__).parent / "logo.png")
st.set_page_config(
layout="wide",
page_title="{{flow_name}} - Promptflow App",
page_icon=image,
menu_items={
'About': """
# This is a Promptflow App.
You can refer to [promptflow](https://github.com/microsoft/promptflow) for more information.
"""
}
)
# Set the primary button color here: buttons in the same Streamlit form must share a color, but only the Run/Chat button should be blue.
st.config.set_option("theme.primaryColor", "#0F6CBD")
st.title("{{flow_name}}")
st.divider()
st.chat_message("assistant").write("Hello, please provide the following flow inputs.")
container = st.container()
with container:
show_conversation()
with st.form(key='input_form', clear_on_submit=True):
settings_path = os.path.join(os.path.dirname(__file__), "settings.json")
if os.path.exists(settings_path):
with open(settings_path, "r", encoding="utf-8") as file:
json_data = json.load(file)
environment_variables = list(json_data.keys())
for environment_variable in environment_variables:
secret_input = st.sidebar.text_input(label=environment_variable, type="password", placeholder=f"Please input {environment_variable} here. If you have entered it before, you can leave it blank.")
if secret_input != "":
os.environ[environment_variable] = secret_input
{% for flow_input, (default_value, value_type) in flow_inputs.items() %}
{% if value_type == "list" %}
{{ ' ' * indent_level * 2 }}st.text('{{flow_input}}')
{{ ' ' * indent_level * 2 }}{{flow_input}} = st_quill(html=True, toolbar=["image"], key='{{flow_input}}', placeholder='Please enter the list values and use the image icon to upload a picture. Separate list items with line breaks.')
{% elif value_type == "image" %}
{{ ' ' * indent_level * 2 }}{{flow_input}} = st.file_uploader(label='{{flow_input}}')
{% elif value_type == "string" %}
{{ ' ' * indent_level * 2 }}{{flow_input}} = st.text_input(label='{{flow_input}}', placeholder='{{default_value}}')
{% else %}
{{ ' ' * indent_level * 2 }}{{flow_input}} = st.text_input(label='{{flow_input}}', placeholder={{default_value}})
{% endif %}
{% endfor %}
cols = st.columns(7)
submit_bt = cols[0].form_submit_button(label='{{label}}', type='primary')
clear_bt = cols[1].form_submit_button(label='Clear')
if submit_bt:
with st.spinner("Loading..."):
{% for flow_input, (default_value, value_type) in flow_inputs.items() %}
{% if value_type == "list" %}
{{ ' ' * indent_level * 4 }}{{flow_input}} = parse_list_from_html({{flow_input}})
{% elif value_type == "image" %}
{{ ' ' * indent_level * 4 }}{{flow_input}} = parse_image_content({{flow_input}}, {{flow_input}}.type if {{flow_input}} else None)
{% endif %}
{% endfor %}
submit({{flow_inputs_params}})
if clear_bt:
with st.spinner("Cleaning..."):
clear_chat()
st.rerun()
if __name__ == "__main__":
start()
| promptflow/src/promptflow/promptflow/_sdk/data/executable/main.py.jinja2/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/data/executable/main.py.jinja2",
"repo_id": "promptflow",
"token_count": 2320
} | 16 |
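The Streamlit template above threads every turn through `st.session_state` so the chat history can be force-appended to the flow inputs on the next call. A framework-free sketch of that submit/history bookkeeping, with a plain dict standing in for `st.session_state` and a hypothetical echo flow in place of `FlowInvoker`:

```python
# Stand-ins for Streamlit session state and the flow invoker (both hypothetical).
session_state = {"messages": [], "history": []}


def run_flow(data: dict) -> dict:
    """Hypothetical flow: echoes the question back."""
    return {"answer": f"echo: {data.get('question', '')}"}


def submit(**kwargs) -> None:
    # Record the user turn, invoke the flow with history prepended, record the reply.
    session_state["messages"].append(("user", kwargs))
    turn = {"inputs": kwargs}
    response = run_flow({"chat_history": session_state["history"], **kwargs})
    session_state["messages"].append(("assistant", response))
    turn["outputs"] = response
    session_state["history"].append(turn)


submit(question="hi")
assert session_state["history"][0]["outputs"]["answer"] == "echo: hi"
```

Each history entry records both `inputs` and `outputs`, which is the shape the template passes back as the chat-history input on the next turn.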
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import os.path
from dotenv import dotenv_values
from marshmallow import RAISE, fields, post_load, pre_load
from promptflow._sdk._utils import is_remote_uri
from promptflow._sdk.schemas._base import PatchedSchemaMeta, YamlFileSchema
from promptflow._sdk.schemas._fields import LocalPathField, NestedField, UnionField
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
def _resolve_dot_env_file(data, **kwargs):
"""Resolve .env file to environment variables."""
env_var = data.get("environment_variables", None)
try:
if env_var and os.path.exists(env_var):
env_dict = dotenv_values(env_var)
data["environment_variables"] = env_dict
except TypeError:
pass
return data
class ResourcesSchema(metaclass=PatchedSchemaMeta):
"""Schema for resources."""
instance_type = fields.Str()
# compute instance name for session usage
compute = fields.Str()
class RemotePathStr(fields.Str):
default_error_messages = {
"invalid_path": "Invalid remote path. "
"Currently only azureml://xxx or public URLs (e.g. https://xxx) are supported.",
}
def _validate(self, value):
# inherited validations like required, allow_none, etc.
super(RemotePathStr, self)._validate(value)
if value is None:
return
if not is_remote_uri(value):
raise self.make_error(
"invalid_path",
)
class RemoteFlowStr(fields.Str):
default_error_messages = {
"invalid_path": "Invalid remote flow path. Currently only azureml:<flow-name> is supported",
}
def _validate(self, value):
# inherited validations like required, allow_none, etc.
super(RemoteFlowStr, self)._validate(value)
if value is None:
return
if not isinstance(value, str) or not value.startswith("azureml:"):
raise self.make_error(
"invalid_path",
)
class RunSchema(YamlFileSchema):
"""Base schema for all run schemas."""
# TODO(2898455): support directly write path/flow + entry in run.yaml
# region: common fields
name = fields.Str()
display_name = fields.Str(required=False)
tags = fields.Dict(keys=fields.Str(), values=fields.Str(allow_none=True))
status = fields.Str(dump_only=True)
description = fields.Str(attribute="description")
properties = fields.Dict(keys=fields.Str(), values=fields.Str(allow_none=True))
# endregion: common fields
flow = UnionField([LocalPathField(required=True), RemoteFlowStr(required=True)])
# inputs field
data = UnionField([LocalPathField(), RemotePathStr()])
column_mapping = fields.Dict(keys=fields.Str)
# runtime field, only available for cloud run
runtime = fields.Str()
# raise unknown exception for unknown fields in resources
resources = NestedField(ResourcesSchema, unknown=RAISE)
run = fields.Str()
# region: context
variant = fields.Str()
environment_variables = UnionField(
[
fields.Dict(keys=fields.Str(), values=fields.Str()),
# support load environment variables from .env file
LocalPathField(),
]
)
connections = fields.Dict(keys=fields.Str(), values=fields.Dict(keys=fields.Str()))
# endregion: context
# region: command node
command = fields.Str(dump_only=True)
outputs = fields.Dict(keys=fields.Str(), dump_only=True)
# endregion: command node
@post_load
def resolve_dot_env_file(self, data, **kwargs):
return _resolve_dot_env_file(data, **kwargs)
@pre_load
def warning_unknown_fields(self, data, **kwargs):
# log warnings for unknown schema fields
unknown_fields = set(data) - set(self.fields)
if unknown_fields:
logger.warning("Run schema validation warnings. Unknown fields found: %s", unknown_fields)
return data
| promptflow/src/promptflow/promptflow/_sdk/schemas/_run.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/schemas/_run.py",
"repo_id": "promptflow",
"token_count": 1545
} | 17 |
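`_resolve_dot_env_file` above swaps a `.env` file path for the dictionary of variables it contains, using `dotenv_values` from the third-party python-dotenv package. A stdlib-only sketch of the same resolution step, with a naive `KEY=VALUE` parser standing in for `dotenv_values` (the real code instead swallows a `TypeError` when the value is already a dict):

```python
import os
import tempfile


def resolve_dot_env_file(data: dict) -> dict:
    """Replace an environment_variables file path with the parsed variables."""
    env_var = data.get("environment_variables")
    if isinstance(env_var, str) and os.path.exists(env_var):
        env_dict = {}
        with open(env_var, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                # Naive parser: skip blanks/comments, split on the first "=".
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    env_dict[key.strip()] = value.strip()
        data["environment_variables"] = env_dict
    return data


with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("AZURE_OPENAI_API_KEY=xxx\n")
    path = f.name
resolved = resolve_dot_env_file({"environment_variables": path})
assert resolved["environment_variables"] == {"AZURE_OPENAI_API_KEY": "xxx"}
os.unlink(path)
```

When `environment_variables` is already a dict (the other branch of the schema's `UnionField`), the data passes through unchanged.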
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# This file is for open source,
# so it should not contain any dependency on azure or azureml-related packages.
import json
import logging
import os
import sys
from contextvars import ContextVar
from dataclasses import dataclass
from functools import partial
from typing import List, Optional
from promptflow._constants import PF_LOGGING_LEVEL
from promptflow._utils.credential_scrubber import CredentialScrubber
from promptflow._utils.exception_utils import ExceptionPresenter
from promptflow.contracts.run_mode import RunMode
# The maximum length of logger name is 18 ("promptflow-runtime").
# The maximum digit length of process id is 5. Fix the field width to 7.
# So fix the length of these fields in the formatter.
# May need to change if logger name/process id length changes.
LOG_FORMAT = "%(asctime)s %(process)7d %(name)-18s %(levelname)-8s %(message)s"
DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S %z"
class CredentialScrubberFormatter(logging.Formatter):
"""Formatter that scrubs credentials in logs."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._default_scrubber = CredentialScrubber()
self._context_var = ContextVar("credential_scrubber", default=None)
@property
def credential_scrubber(self):
credential_scrubber = self._context_var.get()
if credential_scrubber:
return credential_scrubber
return self._default_scrubber
def set_credential_list(self, credential_list: List[str]):
"""Set credential list, which will be scrubbed in logs."""
credential_scrubber = CredentialScrubber()
for c in credential_list:
credential_scrubber.add_str(c)
self._context_var.set(credential_scrubber)
def clear(self):
"""Clear context variable."""
self._context_var.set(None)
def format(self, record):
"""Override logging.Formatter's format method and remove credentials from log."""
s: str = super().format(record)
s = self._handle_traceback(s, record)
s = self._handle_customer_content(s, record)
return self.credential_scrubber.scrub(s)
def _handle_customer_content(self, s: str, record: logging.LogRecord) -> str:
"""Handle customer content in log message.
Derived class can override this method to handle customer content in log.
"""
# If log record does not have "customer_content" field, return input logging string directly.
if not hasattr(record, "customer_content"):
return s
customer_content = record.customer_content
if isinstance(customer_content, Exception):
# If customer_content is an exception, convert it to string.
customer_str = self._convert_exception_to_str(customer_content)
elif isinstance(customer_content, str):
customer_str = customer_content
else:
customer_str = str(customer_content)
return s.replace("{customer_content}", customer_str)
def _handle_traceback(self, s: str, record: logging.LogRecord) -> str:
"""Interface method for handling traceback in log message.
Derived class can override this method to handle traceback in log.
"""
return s
def _convert_exception_to_str(self, ex: Exception) -> str:
"""Convert exception to a user-friendly string."""
try:
return json.dumps(ExceptionPresenter.create(ex).to_dict(include_debug_info=True), indent=2)
except: # noqa: E722
return str(ex)
class FileHandler:
"""Write compliant log to a file."""
def __init__(self, file_path: str, formatter: Optional[logging.Formatter] = None):
self._stream_handler = self._get_stream_handler(file_path)
if formatter is None:
# Default formatter to scrub credentials in log message, exception and stack trace.
self._formatter = CredentialScrubberFormatter(fmt=LOG_FORMAT, datefmt=DATETIME_FORMAT)
else:
self._formatter = formatter
self._stream_handler.setFormatter(self._formatter)
def set_credential_list(self, credential_list: List[str]):
"""Set credential list, which will be scrubbed in logs."""
self._formatter.set_credential_list(credential_list)
def emit(self, record: logging.LogRecord):
"""Write logs."""
self._stream_handler.emit(record)
def close(self):
"""Close stream handler."""
self._stream_handler.close()
self._formatter.clear()
def _get_stream_handler(self, file_path) -> logging.StreamHandler:
"""This method can be overridden by derived class to save log file in cloud."""
return logging.FileHandler(file_path, encoding="UTF-8")
class FileHandlerConcurrentWrapper(logging.Handler):
"""Wrap context-local FileHandler instance for thread safety.
A logger instance can write different log to different files in different contexts.
"""
def __init__(self):
super().__init__()
self._context_var = ContextVar("handler", default=None)
@property
def handler(self) -> FileHandler:
return self._context_var.get()
@handler.setter
def handler(self, handler: FileHandler):
self._context_var.set(handler)
def emit(self, record: logging.LogRecord):
"""Override logging.Handler's emit method.
Get inner file handler in current context and write log.
"""
stream_handler: FileHandler = self._context_var.get()
if stream_handler is None:
return
stream_handler.emit(record)
def clear(self):
"""Close file handler and clear context variable."""
handler: FileHandler = self._context_var.get()
if handler:
try:
handler.close()
except: # NOQA: E722
# Do nothing if handler close failed.
pass
self._context_var.set(None)
valid_logging_level = {"CRITICAL", "FATAL", "ERROR", "WARN", "WARNING", "INFO", "DEBUG", "NOTSET"}
def get_pf_logging_level(default=logging.INFO):
logging_level = os.environ.get(PF_LOGGING_LEVEL, None)
if logging_level not in valid_logging_level:
# Fall back to the default if the configured level is invalid or unset.
logging_level = default
return logging_level
def get_logger(name: str) -> logging.Logger:
"""Get logger used during execution."""
logger = logging.Logger(name)
logger.setLevel(get_pf_logging_level())
logger.addHandler(FileHandlerConcurrentWrapper())
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(CredentialScrubberFormatter(fmt=LOG_FORMAT, datefmt=DATETIME_FORMAT))
logger.addHandler(stdout_handler)
return logger
# Logs by flow_logger will only be shown in flow mode.
# These logs should contain all detailed logs from executor and runtime.
flow_logger = get_logger("execution.flow")
# Logs by bulk_logger will only be shown in bulktest and eval modes.
# These logs should contain overall progress logs and error logs.
bulk_logger = get_logger("execution.bulk")
# Logs by logger will be shown in all the modes above,
# such as error logs.
logger = get_logger("execution")
logger_contexts = []
@dataclass
class LogContext:
"""A context manager to set up logger context for input_logger, logger, flow_logger and bulk_logger."""
file_path: str # Log file path.
run_mode: Optional[RunMode] = RunMode.Test
credential_list: Optional[List[str]] = None # These credentials will be scrubbed in logs.
input_logger: logging.Logger = None # If set, then context will also be set for input_logger.
def get_initializer(self):
return partial(
LogContext, file_path=self.file_path, run_mode=self.run_mode, credential_list=self.credential_list
)
@staticmethod
def get_current() -> Optional["LogContext"]:
global logger_contexts
if logger_contexts:
return logger_contexts[-1]
return None
@staticmethod
def set_current(context: "LogContext"):
global logger_contexts
if isinstance(context, LogContext):
logger_contexts.append(context)
@staticmethod
def clear_current():
global logger_contexts
if logger_contexts:
logger_contexts.pop()
def __enter__(self):
self._set_log_path()
self._set_credential_list()
LogContext.set_current(self)
def __exit__(self, *args):
"""Clear context-local variables."""
all_logger_list = [logger, flow_logger, bulk_logger]
if self.input_logger:
all_logger_list.append(self.input_logger)
for logger_ in all_logger_list:
for handler in logger_.handlers:
if isinstance(handler, FileHandlerConcurrentWrapper):
handler.clear()
elif isinstance(handler.formatter, CredentialScrubberFormatter):
handler.formatter.clear()
LogContext.clear_current()
def _set_log_path(self):
if not self.file_path:
return
logger_list = self._get_loggers_to_set_path()
for logger_ in logger_list:
for log_handler in logger_.handlers:
if isinstance(log_handler, FileHandlerConcurrentWrapper):
handler = FileHandler(self.file_path)
log_handler.handler = handler
def _set_credential_list(self):
# Set credential list to all loggers.
all_logger_list = self._get_execute_loggers_list()
if self.input_logger:
all_logger_list.append(self.input_logger)
credential_list = self.credential_list or []
for logger_ in all_logger_list:
for handler in logger_.handlers:
if isinstance(handler, FileHandlerConcurrentWrapper) and handler.handler:
handler.handler.set_credential_list(credential_list)
elif isinstance(handler.formatter, CredentialScrubberFormatter):
handler.formatter.set_credential_list(credential_list)
def _get_loggers_to_set_path(self) -> List[logging.Logger]:
logger_list = [logger]
if self.input_logger:
logger_list.append(self.input_logger)
# For Batch run mode, set log path for bulk_logger,
# otherwise for flow_logger.
if self.run_mode == RunMode.Batch:
logger_list.append(bulk_logger)
else:
logger_list.append(flow_logger)
return logger_list
@classmethod
def _get_execute_loggers_list(cls) -> List[logging.Logger]:
# return all loggers for executor
return [logger, flow_logger, bulk_logger]
def update_log_path(log_path: str, input_logger: logging.Logger = None):
logger_list = [logger, bulk_logger, flow_logger]
if input_logger:
logger_list.append(input_logger)
for logger_ in logger_list:
update_single_log_path(log_path, logger_)
def update_single_log_path(log_path: str, logger_: logging.Logger):
for wrapper in logger_.handlers:
if isinstance(wrapper, FileHandlerConcurrentWrapper):
handler: FileHandler = wrapper.handler
if handler:
wrapper.handler = type(handler)(log_path, handler._formatter)
def scrub_credentials(s: str):
"""Scrub credentials in string s.
For example, for input string: "print accountkey=accountKey", the output will be:
"print accountkey=**data_scrubbed**"
"""
for h in logger.handlers:
if isinstance(h, FileHandlerConcurrentWrapper):
if h.handler and h.handler._formatter:
credential_scrubber = h.handler._formatter.credential_scrubber
if credential_scrubber:
return credential_scrubber.scrub(s)
return CredentialScrubber().scrub(s)
class LoggerFactory:
@staticmethod
def get_logger(name: str, verbosity: int = logging.INFO, target_stdout: bool = False):
logger = logging.getLogger(name)
logger.propagate = False
# Set default logger level to debug, we are using handler level to control log by default
logger.setLevel(logging.DEBUG)
# Use env var at first, then use verbosity
verbosity = get_pf_logging_level(default=None) or verbosity
if not LoggerFactory._find_handler(logger, logging.StreamHandler):
LoggerFactory._add_handler(logger, verbosity, target_stdout)
# TODO: Find a more elegant way to set the logging level for azure.core.pipeline.policies._universal
azure_logger = logging.getLogger("azure.core.pipeline.policies._universal")
azure_logger.setLevel(logging.DEBUG)
LoggerFactory._add_handler(azure_logger, logging.DEBUG, target_stdout)
return logger
@staticmethod
def _find_handler(logger: logging.Logger, handler_type: type) -> Optional[logging.Handler]:
for log_handler in logger.handlers:
if isinstance(log_handler, handler_type):
return log_handler
return None
@staticmethod
def _add_handler(logger: logging.Logger, verbosity: int, target_stdout: bool = False) -> None:
# Setting target_stdout=True logs to sys.stdout instead of the default sys.stderr,
# so that logger output and python print output are synchronized.
handler = logging.StreamHandler(stream=sys.stdout) if target_stdout else logging.StreamHandler()
formatter = logging.Formatter("[%(asctime)s][%(name)s][%(levelname)s] - %(message)s")
handler.setFormatter(formatter)
handler.setLevel(verbosity)
logger.addHandler(handler)
def get_cli_sdk_logger():
"""Get logger used by CLI SDK."""
# cli sdk logger default logging level is WARNING
# here the logger name "promptflow" is from promptflow._sdk._constants.LOGGER_NAME,
# to avoid circular import error, use plain string here instead of importing from _constants
# because this function is also called in _prepare_home_dir which is in _constants
return LoggerFactory.get_logger("promptflow", verbosity=logging.WARNING)
| promptflow/src/promptflow/promptflow/_utils/logger_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/logger_utils.py",
"repo_id": "promptflow",
"token_count": 5614
} | 18 |
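The central idea in `CredentialScrubberFormatter` above is to redact secrets at format time, so every handler sharing the formatter emits scrubbed output. A minimal stand-alone sketch with plain string replacement in place of the module's `CredentialScrubber` (the logger name and secret value are illustrative):

```python
import io
import logging


class ScrubbingFormatter(logging.Formatter):
    """Simplified stand-in for CredentialScrubberFormatter: redact known secrets."""

    def __init__(self, secret_list, fmt="%(levelname)-8s %(message)s"):
        super().__init__(fmt)
        self._secrets = list(secret_list)

    def format(self, record):
        # Format first, then scrub, so message args and tracebacks are covered too.
        s = super().format(record)
        for secret in self._secrets:
            s = s.replace(secret, "**data_scrubbed**")
        return s


stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(ScrubbingFormatter(["sk-12345"]))  # hypothetical secret
demo_logger = logging.getLogger("scrub-demo")
demo_logger.addHandler(handler)
demo_logger.setLevel(logging.INFO)
demo_logger.propagate = False  # keep output confined to our handler
demo_logger.info("calling API with key sk-12345")
assert "sk-12345" not in stream.getvalue()
```

Because scrubbing happens inside `Formatter.format`, it applies uniformly to the rendered message, interpolated arguments, and traceback text, which is why the module hooks the formatter rather than individual log calls.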