# Change log of default runtime image
In Azure Machine Learning prompt flow, flow execution is facilitated by runtimes. Within the Azure Machine Learning workspace, a runtime is a computing resource that enables customers to execute flows.
A runtime includes a pre-built Docker image (users can also provide their own custom image), which contains all necessary dependency packages.
This Docker image is continuously updated, and here we record the new features and fixed bugs of each image version. To pull the image, specify a runtime version and execute the following command:
```
docker pull mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<runtime_version>
```
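For example, to pull the image for the `20240116.v1` release listed below:
```
docker pull mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:20240116.v1
```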
You can check the runtime image version from the flow execution log:
![img](../../media/cloud/runtime-change-log/runtime-version.png)
## 20240116.v1
### New features
NA
### Bugs fixed
- Added validation for an incorrect connection type in the LLM tool.
## 20240111.v2
### New features
- Support error log scrubbing for heron jobs.
### Bugs fixed
- Fixed the compatibility issue between the runtime and promptflow package versions earlier than 1.3.0.
Source: promptflow/docs/cloud/azureai/runtime-change-log.md
# Use streaming endpoints deployed from prompt flow
In prompt flow, you can [deploy flow as REST endpoint](./deploy-a-flow/index.md) for real-time inference.
When consuming the endpoint by sending a request, the default behavior is that the online endpoint will keep waiting until the whole response is ready, and then send it back to the client. This can cause a long delay for the client and a poor user experience.
To avoid this, you can use streaming when you consume the endpoints. Once streaming is enabled, you don't have to wait for the whole response to be ready. Instead, the server sends back the response in chunks as they are generated. The client can then display the response progressively, with less waiting time and more interactivity.
This article will describe the scope of streaming, how streaming works, and how to consume streaming endpoints.
## Create a streaming enabled flow
If you want to use the streaming mode, you need to create a flow that has a node that produces a string generator as the flow’s output. A string generator is an object that can return one string at a time when requested. You can use the following types of nodes to create a string generator:
- LLM node: This node uses a large language model to generate natural language responses based on the input.
```jinja
{# Sample prompt template for LLM node #}
system:
You are a helpful assistant.
user:
{{question}}
```
- Python tools node: This node allows you to write custom Python code that can yield string outputs. You can use this node to call external APIs or libraries that support streaming. For example, you can use this code to echo the input word by word:
```python
from promptflow import tool
# Sample code echo input by yield in Python tool node
@tool
def my_python_tool(paragraph: str) -> str:
yield "Echo: "
for word in paragraph.split():
yield word + " "
```
In this guide, we will use the ["Chat with Wikipedia"](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-wikipedia) sample flow as an example. This flow processes the user’s question, searches Wikipedia for relevant articles, and answers the question with information from the articles. It uses streaming mode to show the progress of the answer generation.
![chat_wikipedia.png](../media/how-to-guides/how-to-enable-streaming-mode/chat_wikipedia_center.png)
## Deploy the flow as an online endpoint
To use the streaming mode, you need to deploy your flow as an online endpoint. This will allow you to send requests and receive responses from your flow in real time.
Follow [this guide](./deploy-a-flow/index.md) to deploy your flow as an online endpoint.
> [!NOTE]
>
> You can follow this document to deploy an [online endpoint](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference?view=azureml-api-2).
> Please deploy with a runtime environment version later than `20230816.v10`.
> You can check your runtime version and update the runtime on the runtime detail page.
## Understand the streaming process
When you have an online endpoint, the client and the server need to follow specific principles for [content negotiation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation) to utilize the streaming mode:
Content negotiation is like a conversation between the client and the server about the preferred format of the data they want to send and receive. It ensures effective communication and agreement on the format of the exchanged data.
To understand the streaming process, consider the following steps:
- First, the client constructs an HTTP request with the desired media type included in the `Accept` header. The media type tells the server what kind of data format the client expects. It's like the client saying, "Hey, I'm looking for a specific format for the data you'll send me. It could be JSON, text, or something else." For example, `application/json` indicates a preference for JSON data, `text/event-stream` indicates a desire for streaming data, and `*/*` means the client accepts any data format.
> [!NOTE]
>
> If a request lacks an `Accept` header or has an empty `Accept` header, it implies that the client will accept any media type in response. The server treats it as `*/*`.
- Next, the server responds based on the media type specified in the `Accept` header. It's important to note that the client may request multiple media types in the `Accept` header, and the server must consider its capabilities and format priorities to determine the appropriate response.
- First, the server checks if `text/event-stream` is explicitly specified in the `Accept` header:
- For a stream-enabled flow, the server returns a response with a `Content-Type` of `text/event-stream`, indicating that the data is being streamed.
- For a non-stream-enabled flow, the server proceeds to check for other media types specified in the header.
- If `text/event-stream` is not specified, the server then checks if `application/json` or `*/*` is specified in the `Accept` header:
- In such cases, the server returns a response with a `Content-Type` of `application/json`, providing the data in JSON format.
- If the `Accept` header specifies other media types, such as `text/html`:
- The server returns a `424` response with a PromptFlow runtime error code `UserError` and a runtime HTTP status `406`, indicating that the server cannot fulfill the request with the requested data format.
> Note: Please refer to [handle errors](#handle-errors) for details.
- Finally, the client checks the `Content-Type` response header. If it is set to `text/event-stream`, it indicates that the data is being streamed.
Let’s take a closer look at how the streaming process works. The response data in streaming mode follows the format of [server-sent events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events).
The overall process works as follows:
### 0. The client sends a message to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "Hello",
"chat_history": []
}
```
> [!NOTE]
>
> The `Accept` header is set to `text/event-stream` to request a stream response.
### 1. The server sends back the response in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"answer": ""}
data: {"answer": "Hello"}
data: {"answer": "!"}
data: {"answer": " How"}
data: {"answer": " can"}
data: {"answer": " I"}
data: {"answer": " assist"}
data: {"answer": " you"}
data: {"answer": " today"}
data: {"answer": " ?"}
data: {"answer": ""}
```
Note that the `Content-Type` is set to `text/event-stream; charset=utf-8`, indicating the response is an event stream.
The client should decode the response data as server-sent events and display them incrementally. The server will close the HTTP connection after all the data is sent.
Each response event is the delta to the previous event. It is recommended for the client to keep track of the merged data in memory and send them back to the server as chat history in the next request.
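For illustration, here is a minimal sketch of how a client might accumulate these deltas into the full answer before storing it as chat history. It assumes the raw `data:` payloads have already been extracted from the SSE stream (for example with the `sseclient-py` package described later in this article):
```python
import json

def merge_answer_deltas(data_payloads):
    """Concatenate the "answer" deltas from a sequence of SSE data payloads."""
    answer = ""
    for raw in data_payloads:              # each item is a string such as '{"answer": " How"}'
        delta = json.loads(raw)
        answer += delta.get("answer", "")  # empty events contribute nothing
    return answer

# With the sample events above, this yields "Hello! How can I assist you today ?",
# which the client can send back in chat_history on the next request.
```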
### 2. The client sends another chat message, along with the full chat history, to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "Glad to know you!",
"chat_history": [
{
"inputs": {
"question": "Hello"
},
"outputs": {
"answer": "Hello! How can I assist you today?"
}
}
]
}
```
### 3. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"answer": ""}
data: {"answer": "Nice"}
data: {"answer": " to"}
data: {"answer": " know"}
data: {"answer": " you"}
data: {"answer": " too"}
data: {"answer": "!"}
data: {"answer": " Is"}
data: {"answer": " there"}
data: {"answer": " anything"}
data: {"answer": " I"}
data: {"answer": " can"}
data: {"answer": " help"}
data: {"answer": " you"}
data: {"answer": " with"}
data: {"answer": "?"}
data: {"answer": ""}
```
### 4. The chat continues in a similar way.
## Handle errors
The client should check the HTTP response code first. See [this table](https://learn.microsoft.com/azure/machine-learning/how-to-troubleshoot-online-endpoints?view=azureml-api-2&tabs=cli#http-status-codes) for common error codes returned by online endpoints.
If the response code is "424 Model Error", it means that the error is caused by the model’s code. The error response from a PromptFlow model always follows this format:
```json
{
"error": {
"code": "UserError",
"message": "Media type text/event-stream in Accept header is not acceptable. Supported media type(s) - application/json",
}
}
```
* It is always a JSON dictionary with only one key "error" defined.
* The value for "error" is a dictionary containing "code" and "message".
* "code" defines the error category. Currently, it may be "UserError" for bad user inputs and "SystemError" for errors inside the service.
* "message" is a description of the error. It can be displayed to the end user.
## How to consume the server-sent events
### Consume using Python
In this sample usage, we are using the `SSEClient` class. This class is not a built-in Python class and needs to be installed separately. You can install it via pip:
```bash
pip install sseclient-py
```
A sample usage looks like this:
```python
import requests
from sseclient import SSEClient
from requests.exceptions import HTTPError

# Placeholders: fill in your endpoint URL, headers (including the Accept header), and request body.
url = "https://<your-endpoint>.inference.ml.azure.com/score"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <key or token of your endpoint>",
    "Accept": "text/event-stream",
}
body = {"question": "Hello", "chat_history": []}

try:
    response = requests.post(url, json=body, headers=headers, stream=True)
    response.raise_for_status()
    content_type = response.headers.get("Content-Type", "")
    if "text/event-stream" in content_type:
        client = SSEClient(response)
        for event in client.events():
            print(event.data)  # handle each event, e.g. print it to stdout
    else:
        print(response.json())  # handle the plain JSON response
except HTTPError as e:
    print(e)  # handle exceptions
```
### Consume using JavaScript
There are several libraries to consume server-sent events in JavaScript. Here is [one of them as an example](https://www.npmjs.com/package/sse.js?activeTab=code).
## A sample chat app using Python
Here is a sample chat app written in Python.
(Click [here](../media/how-to-guides/how-to-enable-streaming-mode/scripts/chat_app.py) to view the source code.)
![chat_app](../media/how-to-guides/how-to-enable-streaming-mode/chat_app.gif)
## Advanced usage - hybrid stream and non-stream flow output
Sometimes, you may want to get both stream and non-stream results from a flow output. For example, in the “Chat with Wikipedia” flow, you may want to get not only the LLM’s answer, but also the list of URLs that the flow searched. To do this, you need to modify the flow to output a combination of the streamed LLM answer and the non-streamed URL list.
In the sample "Chat With Wikipedia" flow, the output is connected to the LLM node `augmented_chat`. To add the URL list to the output, you need to add an output field with the name `url` and the value `${get_wiki_url.output}`.
![chat_wikipedia_dual_output_center.png](../media/how-to-guides/how-to-enable-streaming-mode/chat_wikipedia_dual_output_center.png)
The output of the flow will be a non-stream field as the base and a stream field as the delta. Here is an example of request and response; a minimal client-side merging sketch follows the example.
### 0. The client sends a message to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "When was ChatGPT launched?",
"chat_history": []
}
```
### 1. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"url": ["https://en.wikipedia.org/w/index.php?search=ChatGPT", "https://en.wikipedia.org/w/index.php?search=GPT-4"]}
data: {"answer": ""}
data: {"answer": "Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": " was"}
data: {"answer": " launched"}
data: {"answer": " on"}
data: {"answer": " November"}
data: {"answer": " "}
data: {"answer": "30"}
data: {"answer": ","}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "2"}
data: {"answer": "."}
data: {"answer": " \n\n"}
...
data: {"answer": "PT"}
data: {"answer": ""}
```
### 2. The client sends another chat message, along with the full chat history, to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "When did OpenAI announce GPT-4? How long is it between these two milestones?",
"chat_history": [
{
"inputs": {
"question": "When was ChatGPT launched?"
},
"outputs": {
"url": [
"https://en.wikipedia.org/w/index.php?search=ChatGPT",
"https://en.wikipedia.org/w/index.php?search=GPT-4"
],
"answer": "ChatGPT was launched on November 30, 2022. \n\nSOURCES: https://en.wikipedia.org/w/index.php?search=ChatGPT"
}
}
]
}
```
### 3. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"url": ["https://en.wikipedia.org/w/index.php?search=Generative pre-trained transformer ", "https://en.wikipedia.org/w/index.php?search=Microsoft "]}
data: {"answer": ""}
data: {"answer": "Open"}
data: {"answer": "AI"}
data: {"answer": " released"}
data: {"answer": " G"}
data: {"answer": "PT"}
data: {"answer": "-"}
data: {"answer": "4"}
data: {"answer": " in"}
data: {"answer": " March"}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "3"}
data: {"answer": "."}
data: {"answer": " Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": " was"}
data: {"answer": " launched"}
data: {"answer": " on"}
data: {"answer": " November"}
data: {"answer": " "}
data: {"answer": "30"}
data: {"answer": ","}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "2"}
data: {"answer": "."}
data: {"answer": " The"}
data: {"answer": " time"}
data: {"answer": " between"}
data: {"answer": " these"}
data: {"answer": " two"}
data: {"answer": " milestones"}
data: {"answer": " is"}
data: {"answer": " approximately"}
data: {"answer": " "}
data: {"answer": "3"}
data: {"answer": " months"}
data: {"answer": ".\n\n"}
...
data: {"answer": "Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": ""}
```
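To consume this hybrid output, the client can treat the non-stream `url` event as the base object and merge the streamed `answer` deltas into it. A minimal sketch, again assuming the `data:` payloads have already been extracted from the SSE stream:
```python
import json

def merge_hybrid_events(data_payloads):
    """Build the full flow output from a non-stream base event plus streamed deltas."""
    output = {"answer": ""}
    for raw in data_payloads:
        delta = json.loads(raw)
        for key, value in delta.items():
            if isinstance(value, str):
                output[key] = output.get(key, "") + value  # streamed text accumulates
            else:
                output[key] = value                        # non-stream fields (e.g. "url") are taken as-is
    return output

# The merged dictionary matches the "outputs" entry that the client sends back
# in chat_history in step 2 above.
```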
Source: promptflow/docs/how-to-guides/enable-streaming-mode.md
# Azure AI Language
Azure AI Language provides users with task-oriented, optimized pre-trained language models to effectively understand documents and conversations. This Prompt flow tool is a wrapper for various Azure AI Language APIs. The current list of supported capabilities is as follows:
| Name | Description |
|-------------------------------------------|-------------------------------------------------------|
| Abstractive Summarization | Generate abstractive summaries from documents. |
| Extractive Summarization | Extract summaries from documents. |
| Conversation Summarization | Summarize conversations. |
| Entity Recognition | Recognize and categorize entities in documents. |
| Key Phrase Extraction | Extract key phrases from documents. |
| Language Detection | Detect the language of documents. |
| PII Entity Recognition | Recognize and redact PII entities in documents. |
| Sentiment Analysis | Analyze the sentiment of documents. |
| Conversational Language Understanding | Predict intents and entities from user's utterances. |
| Translator | Translate documents. |
## Requirements
- For AzureML users:
follow this [wiki](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-custom-tool-package-creation-and-usage?view=azureml-api-2#prepare-runtime), starting from `Prepare runtime`. Note that the PyPI package name is `promptflow-azure-ai-language`.
- For local users:
```
pip install promptflow-azure-ai-language
```
## Prerequisites
The tool calls APIs from Azure AI Language. To use it, you must create a connection to an [Azure AI Language resource](https://learn.microsoft.com/en-us/azure/ai-services/language-service/). Create a Language resource first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Language Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Language Resource endpoint>`
To use the `Translator` tool, you must set up an additional connection to an [Azure AI Translator resource](https://azure.microsoft.com/en-us/products/ai-services/ai-translator). [Create a Translator resource](https://learn.microsoft.com/en-us/azure/ai-services/translator/create-translator-resource) first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Translator Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Translator Resource endpoint>`
- If your Translator Resource is regional and non-global, specify its region under `configs` as well: `region: <Azure AI Translator Resource region>`
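If you prefer to create these connections from code rather than through the UI, the following is a minimal sketch using the promptflow SDK. The connection names and values are placeholders, and the exact entity fields should be verified against your installed promptflow version:
```python
from promptflow import PFClient
from promptflow.entities import CustomConnection

pf = PFClient()

# Connection for the Azure AI Language resource (placeholder values).
language_connection = CustomConnection(
    name="azure_ai_language_connection",
    secrets={"api_key": "<Azure AI Language Resource api key>"},
    configs={"endpoint": "<Azure AI Language Resource endpoint>"},
)
pf.connections.create_or_update(language_connection)

# Optional connection for the Azure AI Translator resource (add "region" under configs for regional resources).
translator_connection = CustomConnection(
    name="azure_ai_translator_connection",
    secrets={"api_key": "<Azure AI Translator Resource api key>"},
    configs={"endpoint": "<Azure AI Translator Resource endpoint>"},
)
pf.connections.create_or_update(translator_connection)
```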
## Inputs
The tool accepts the following inputs:
- **Abstractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| summary_length | string (enum) | The desired summary length. Enum values are `short`, `medium`, and `long`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Extractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| sentence_count | int | The desired number of output summary sentences. Default value is `3`. | No |
| sort_by | string (enum) | The sorting criteria for extractive summarization results. Enum values are `Offset` to sort results in order of appearance in the text and `Rank` to sort results in order of importance (i.e. rank score) according to model. Default value is `Offset`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Conversation Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. Text should be of the following form: `<speaker id>: <speaker text> \n <speaker id>: <speaker text> \n ...` | Yes |
| modality | string (enum) | The modality of the input text. Enum values are `text` for input from a text source, and `transcript` for input from a transcript source. | Yes |
| summary_aspect | string (enum) | The desired summary "aspect" to obtain. Enum values are `chapterTitle` to obtain the chapter title of any conversation, `issue` to obtain the summary of issues in transcripts of web chats and service calls between customer-service agents and customers, `narrative` to obtain the generic summary of any conversation, `resolution` to obtain the summary of resolutions in transcripts of web chats and service calls between customer-service agents and customers, `recap` to obtain a general summary, and `follow-up tasks` to obtain a summary of follow-up or action items. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Key Phrase Extraction**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Language Detection**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **PII Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| domain | string (enum) | The PII domain used for PII Entity Recognition. Enum values are `none` for no domain, or `phi` to indicate that entities in the Personal Health domain should be redacted. Default value is `none`. | No |
| categories | list[string] | Describes the PII categories to return. Default value is `[]`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Sentiment Analysis**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| opinion_mining | bool | Should opinion mining be enabled. Default value is `False`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Conversational Language Understanding**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| utterances | string | A single user utterance or a json array of user utterances. | Yes |
| project_name | string | The Conversational Language Understanding project to be called. | Yes |
| deployment_name | string | The Conversational Language Understanding project deployment to be called. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Translator**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Translator resource. | Yes |
| text | string | The input text. | Yes |
| to | list[string] | The languages to translate the input text to. | Yes |
| source_language | string | The language of the input text. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
## Outputs
If the input parameter `parse_response` is set to `False` (default value), the raw API json output will be returned as a string. Refer to the [REST API reference](https://learn.microsoft.com/en-us/rest/api/language/) for details on API output. For Conversational Language Understanding, the output will be a list of raw API json responses, one response for each user utterance in the input.
When `parse_response` is set to `True`, the tool will parse API output as follows:
| Name | Type | Description |
|-------------------------------------------------------------|--------|---------------------|
| Abstractive Summarization | string | Abstractive summary. |
| Extractive Summarization | list[string] | Extracted summary sentence strings. |
| Conversation Summarization | string | Conversation summary based on `summary_aspect`. |
| Entity Recognition | dict[string, string] | Recognized entities, where keys are entity names and values are entity categories. |
| Key Phrase Extraction | list[string] | Extracted key phrases as strings. |
| Language Detection | string | Detected language's ISO 639-1 code. |
| PII Entity Recognition | string | Input `text` with PII entities redacted. |
| Sentiment Analysis | string | Analyzed sentiment: `positive`, `neutral`, or `negative`. |
| Conversational Language Understanding | list[dict[string, string]] | List of user utterances and associated intents. |
| Translator | dict[string, string] | Translated text, where keys are the translated languages and values are the translated texts. |
Source: promptflow/docs/integrations/tools/azure-ai-language-tool.md
# SerpAPI
## Introduction
The SerpAPI tool is a Python tool that provides a wrapper to the [SerpAPI Google Search Engine Results API](https://serpapi.com/search-api) and the [SerpAPI Bing Search Engine Results API](https://serpapi.com/bing-search-api).
You can use the tool to retrieve search results from a number of different search engines, including Google and Bing, and you can specify a range of search parameters, such as the search query, location, device type, and more.
## Prerequisite
Sign up at [SERP API homepage](https://serpapi.com/)
## Connection
Connection is the model used to establish connections with Serp API.
| Type | Name | API KEY |
|-------------|----------|----------|
| Serp | Required | Required |
_**API Key** is on SerpAPI account dashboard_
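If you want to create the connection from code instead of the UI, a minimal sketch with the promptflow SDK might look like this; the connection name and key are placeholders, and the `SerpConnection` entity should be verified against your installed promptflow version:
```python
from promptflow import PFClient
from promptflow.entities import SerpConnection

# Placeholder value: use the API key from your SerpAPI account dashboard.
connection = SerpConnection(name="serp_connection", api_key="<your-serpapi-api-key>")
PFClient().connections.create_or_update(connection)
```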
## Inputs
The **serp api** tool supports the following parameters:
| Name | Type | Description | Required |
|----------|---------|---------------------------------------------------------------|----------|
| query | string | The search query to be executed. | Yes |
| engine | string | The search engine to use for the search. Default is 'google'. | Yes |
| num | integer | The number of search results to return. Default is 10. | No |
| location | string | The geographic location to execute the search from. | No |
| safe | string | The safe search mode to use for the search. Default is 'off'. | No |
## Outputs
The JSON representation of the SerpAPI query result.
| Engine | Return Type | Output |
|----------|-------------|-------------------------------------------------------|
| google | json | [Sample](https://serpapi.com/search-api#api-examples) |
| bing | json | [Sample](https://serpapi.com/bing-search-api) |
Source: promptflow/docs/reference/tools-reference/serp-api-tool.md
# Chat With Image
This flow demonstrates how to create a chatbot that can take image and text as input.
Tools used in this flow:
- `OpenAI GPT-4V` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a chat flow with image and text as input. The chat input should be a list of text and/or images.
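For example, a single chat input that combines a text question with an image reference can look like this (the same structure used in the test command below):
```json
[
    "How many colors can you see?",
    {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}
]
```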
## Getting started
### 1 Create connection for OpenAI GPT-4V tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, and create an "OpenAI" connection. If you do not have an OpenAI account, please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> name=aoai_gpt4v_connection api_version=2023-07-01-preview
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `aoai_gpt4v_connection`.
```bash
# show registered connection
pf connection show --name aoai_gpt4v_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question='["How many colors can you see?", {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}]'
```
```sh
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
Source: promptflow/examples/flows/chat/chat-with-image/README.md
import requests
import os
import re
from utils.lock import acquire_lock
from utils.logging import log
from constants import PDF_DIR
# Download a pdf file from a url and return the path to the file
def download(url: str) -> str:
path = os.path.join(PDF_DIR, normalize_filename(url) + ".pdf")
lock_path = path + ".lock"
with acquire_lock(lock_path):
if os.path.exists(path):
log("Pdf already exists in " + os.path.abspath(path))
return path
log("Downloading pdf from " + url)
response = requests.get(url)
with open(path, "wb") as f:
f.write(response.content)
return path
def normalize_filename(filename):
# Replace any invalid characters with an underscore
return re.sub(r"[^\w\-_. ]", "_", filename)
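# --- Illustrative usage sketch (not part of the original file) ---
# Assumes PDF_DIR exists and is writable; the URL is the one used in the flow's tests.
if __name__ == "__main__":
    local_path = download("https://arxiv.org/pdf/1810.04805.pdf")
    print("Saved to:", local_path)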
Source: promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/download.py
import os
import unittest
import promptflow
from base_test import BaseTest
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDF(BaseTest):
def setUp(self):
super().setUp()
self.pf = promptflow.PFClient()
def tearDown(self) -> None:
return super().tearDown()
def test_run_chat_with_pdf(self):
result = self.pf.test(
flow=self.flow_path,
inputs={
"chat_history": [],
"pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
"question": "BERT stands for?",
"config": self.config_2k_context,
},
)
print(result)
self.assertTrue(
result["answer"].find(
"Bidirectional Encoder Representations from Transformers"
)
!= -1
)
def test_bulk_run_chat_with_pdf(self):
run = self.create_chat_run()
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_eval(self):
run_2k, eval_groundedness_2k, eval_pi_2k = self.run_eval_with_config(
self.config_2k_context,
display_name="chat_with_pdf_2k_context",
)
run_3k, eval_groundedness_3k, eval_pi_3k = self.run_eval_with_config(
self.config_3k_context,
display_name="chat_with_pdf_3k_context",
)
self.check_run_basics(run_2k)
self.check_run_basics(run_3k)
self.check_run_basics(eval_groundedness_2k)
self.check_run_basics(eval_pi_2k)
self.check_run_basics(eval_groundedness_3k)
self.check_run_basics(eval_pi_3k)
def test_bulk_run_valid_mapping(self):
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": self.config_2k_context,
}
)
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_bulk_run_mapping_missing_one_column(self):
data_path = os.path.join(
self.flow_path, "data/invalid-data-missing-column.jsonl"
)
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question}",
},
data=data_path
)
def test_bulk_run_invalid_mapping(self):
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question_not_exist}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
}
)
if __name__ == "__main__":
unittest.main()
Source: promptflow/examples/flows/chat/chat-with-pdf/tests/chat_with_pdf_test.py
# Classification Accuracy Evaluation
This flow illustrates how to evaluate the performance of a classification system. It involves comparing each prediction to the ground truth, assigning a "Correct" or "Incorrect" grade, and aggregating the results to produce metrics such as accuracy, which reflects how good the system is at classifying the data.
Tools used in this flow:
- `python` tool
## What you will learn
In this flow, you will learn
- how to compose a point-based evaluation flow, where you can calculate point-wise metrics.
- the way to log metrics: use `from promptflow import log_metric`
- see file [calculate_accuracy.py](calculate_accuracy.py)
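For reference, here is a minimal sketch of what such an aggregation node can look like; this is an illustrative approximation rather than the exact content of `calculate_accuracy.py`:
```python
from typing import List

from promptflow import log_metric, tool


@tool
def calculate_accuracy(grades: List[str]):
    """Aggregate point-wise "Correct"/"Incorrect" grades into an accuracy metric."""
    accuracy = round(grades.count("Correct") / len(grades), 2)
    log_metric("accuracy", accuracy)  # logged metrics appear on the evaluation run
    return accuracy
```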
### 0. Setup connection
Prepare your Azure Open AI resource follow this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
### 1. Test flow/node
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs groundtruth=APP prediction=APP
# test node with inputs
pf flow test --flow . --node grade --inputs groundtruth=groundtruth prediction=prediction
```
### 2. create flow run with multi line data
There are two ways to evaluate a classification flow.
```bash
pf run create --flow . --data ./data.jsonl --column-mapping groundtruth='${data.groundtruth}' prediction='${data.prediction}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Reference [here](https://aka.ms/pf/column-mapping) for default behavior when `column-mapping` not provided in CLI.
### 3. create run against other flow run
Learn more in [web-classification](../../standard/web-classification/README.md)
Source: promptflow/examples/flows/evaluation/eval-classification-accuracy/README.md
from promptflow import tool
import re
@tool
def parse_generation_output(rag_generation_score: str) -> dict:
quality_score = float('nan')
quality_reasoning = ''
for sent in rag_generation_score.split('\n'):
sent = sent.strip()
if re.match(r"\s*(<)?Quality score:", sent):
numbers_found = re.findall(r"(\d+\.*\d*)\/", sent)
if len(numbers_found) == 0:
continue
quality_score = int(
float(numbers_found[0].replace("'", "")))
for sent in rag_generation_score.split('\n'):
sent = sent.strip()
if re.match(r"\s*(<)?Quality score reasoning:", sent):
quality_reasoning += sent.strip()
break
return {"quality_score": quality_score, "quality_reasoning": quality_reasoning}
Source: promptflow/examples/flows/evaluation/eval-qna-rag-metrics/parse_generation_score.py
from promptflow import tool
@tool
def read_file(file_path: str) -> str:
"""
This tool opens a file and reads its contents into a string.
:param file_path: the file path of the file to be read.
"""
with open(file_path, 'r', encoding="utf8") as f:
file = f.read()
return file
Source: promptflow/examples/flows/integrations/azure-ai-language/analyze_documents/read_file.py
import sys
from io import StringIO
import functools
import logging
import ast
from typing import Dict, Optional
logger = logging.getLogger(__name__)
@functools.lru_cache(maxsize=None)
def warn_once() -> None:
    # Warn that the PythonREPL can execute arbitrary code.
logger.warning("Python REPL can execute arbitrary code. Use with caution.")
COMMAND_EXECUTION_FUNCTIONS = ["system", "exec", "execfile", "eval"]
class PythonValidation:
def __init__(
self,
allow_imports: bool = False,
allow_command_exec: bool = False,
):
"""Initialize a PALValidation instance.
Args:
allow_imports (bool): Allow import statements.
allow_command_exec (bool): Allow using known command execution functions.
"""
self.allow_imports = allow_imports
self.allow_command_exec = allow_command_exec
def validate_code(self, code: str) -> None:
try:
code_tree = ast.parse(code)
except (SyntaxError, UnicodeDecodeError):
raise ValueError(f"Generated code is not valid python code: {code}")
except TypeError:
raise ValueError(
f"Generated code is expected to be a string, "
f"instead found {type(code)}"
)
except OverflowError:
raise ValueError(
f"Generated code too long / complex to be parsed by ast: {code}"
)
has_imports = False
top_level_nodes = list(ast.iter_child_nodes(code_tree))
for node in top_level_nodes:
if isinstance(node, ast.Import) or isinstance(node, ast.ImportFrom):
has_imports = True
if not self.allow_imports and has_imports:
raise ValueError(f"Generated code has disallowed imports: {code}")
if (
not self.allow_command_exec
or not self.allow_imports
):
for node in ast.walk(code_tree):
if (
(not self.allow_command_exec)
and isinstance(node, ast.Call)
and (
(
hasattr(node.func, "id")
and node.func.id in COMMAND_EXECUTION_FUNCTIONS
)
or (
isinstance(node.func, ast.Attribute)
and node.func.attr in COMMAND_EXECUTION_FUNCTIONS
)
)
):
raise ValueError(
f"Found illegal command execution function "
f"{node.func.id} in code {code}"
)
if (not self.allow_imports) and (
isinstance(node, ast.Import) or isinstance(node, ast.ImportFrom)
):
raise ValueError(f"Generated code has disallowed imports: {code}")
class PythonREPL:
"""Simulates a standalone Python REPL."""
def __init__(self) -> None:
self.globals: Optional[Dict] = globals()
self.locals: Optional[Dict] = None
self.code_validations = PythonValidation(allow_imports=True)
def run(self, command: str) -> str:
"""Run command with own globals/locals and returns anything printed."""
# Warn against dangers of PythonREPL
warn_once()
self.code_validations.validate_code(command)
old_stdout = sys.stdout
sys.stdout = my_stdout = StringIO()
try:
exec(command, self.globals, self.locals)
sys.stdout = old_stdout
output = my_stdout.getvalue()
except Exception as e:
sys.stdout = old_stdout
output = repr(e)
print(output)
return output
python_repl = PythonREPL()
def python(command: str):
"""
A Python shell. Use this to execute python commands. Input should be a valid python command.
If you want to see the output of a value, you should print it out with `print(...)`.
"""
command = command.strip().strip("```")
return python_repl.run(command)
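# --- Illustrative usage sketch (not part of the original file) ---
if __name__ == "__main__":
    # The tool strips optional ``` fences, executes the command, and returns the captured stdout.
    result = python("print(1 + 1)")
    assert result.strip() == "2"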
Source: promptflow/examples/flows/standard/autonomous-agent/python_repl.py
from promptflow import tool
@tool
def default_result(question: str) -> str:
return f"I'm not familiar with your query: {question}."
Source: promptflow/examples/flows/standard/conditional-flow-for-if-else/default_result.py
*.ipynb
.venv/
.data/
.env
.vscode/
outputs/
connection.json
Source: promptflow/examples/flows/standard/customer-intent-extraction/.amlignore
# system:
As an AI assistant, your task involves interpreting images and responding to questions about the image.
Remember to provide accurate answers based on the information present in the image.
# user:
{{question}}
![image]({{test_image}})
Source: promptflow/examples/flows/standard/describe-image/question_on_image.jinja2
{{divided|join('')}}
Source: promptflow/examples/flows/standard/gen-docstring/combine_code.jinja2
system:
I want you to act as a Math expert specializing in Algebra, Geometry, and Calculus. Given the question, develop python code to model the user's question.
The python code will print the result at the end.
Please generate executable python code, your reply will be in JSON format, something like:
{
"code": "print(1+1)"
}
user:
This a set of examples including question and the final answer:
{% for ex in examples %}
QUESTION: {{ ex.question }}
CODE:
{{ ex.code }}
{% endfor %}
Now come to the real task, make sure return a valid json. The json should contain a key named "code" and the value is the python code. For example:
{
"code": "print(1+1)"
}
QUESTION: {{ question }}
CODE:
Source: promptflow/examples/flows/standard/maths-to-code/ask_llm.jinja2
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
entity_type:
type: string
default: job title
text:
type: string
default: Maxime is a data scientist at Auto Dataset, and his wife is a finance
manager in the same company.
outputs:
entities:
type: string
reference: ${cleansing.output}
nodes:
- name: NER_LLM
type: llm
source:
type: code
path: NER_LLM.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 64
text: ${inputs.text}
entity_type: ${inputs.entity_type}
connection: open_ai_connection
api: chat
- name: cleansing
type: python
source:
type: code
path: cleansing.py
inputs:
entities_str: ${NER_LLM.output}
environment:
python_requirements_txt: requirements.txt | promptflow/examples/flows/standard/named-entity-recognition/flow.dag.yaml/0 | {
"file_path": "promptflow/examples/flows/standard/named-entity-recognition/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 370
} | 16 |
#!/bin/bash
# <promptflow_install>
pip install -r requirements.txt
# </promptflow_install>
pip list | promptflow/examples/setup.sh/0 | {
"file_path": "promptflow/examples/setup.sh",
"repo_id": "promptflow",
"token_count": 38
} | 17 |
my_tool_package.tools.my_tool_1.my_tool:
function: my_tool
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.my_tool_1
name: My First Tool
description: This is my first tool
type: python
Source: promptflow/examples/tools/tool-package-quickstart/my_tool_package/yamls/my_tool_1.yaml
import pytest
import unittest
from promptflow.contracts.types import FilePath
from my_tool_package.tools.tool_with_file_path_input import my_tool
@pytest.fixture
def my_file_path_input() -> FilePath:
my_file_path_input = FilePath("tests.test_utils.hello_method.py")
return my_file_path_input
class TestToolWithFilePathInput:
def test_tool_with_file_path_input(self, my_file_path_input):
result = my_tool(my_file_path_input, input_text="Microsoft")
assert result == "Hello Microsoft"
# Run the unit tests
if __name__ == "__main__":
unittest.main()
Source: promptflow/examples/tools/tool-package-quickstart/tests/test_tool_with_file_path_input.py
import logging
import os
import subprocess
import sys
import time
import traceback
module_logger = logging.getLogger(__name__)
class Color:
PURPLE = "\033[95m"
CYAN = "\033[96m"
DARKCYAN = "\033[36m"
BLUE = "\033[94m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
RED = "\033[91m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
END = "\033[0m"
def print_red(message):
print(Color.RED + message + Color.END)
def print_blue(message):
print(Color.BLUE + message + Color.END)
def get_test_files(testpath):
if os.path.isfile(testpath):
return [testpath]
else:
res = []
for root, dirs, files in os.walk(testpath):
module_logger.debug("Searching %s for files ending in 'tests.py'", root)
res.extend([os.path.join(root, file) for file in files if file.endswith("tests.py")])
return res
def retry(fn, num_attempts=3):
if num_attempts <= 0:
raise Exception("Illegal num_attempts: {}".format(num_attempts))
count = 0
for _ in range(0, num_attempts):
try:
return fn()
except Exception:
count += 1
print("Execution failed on attempt {} out of {}".format(count, num_attempts))
print("Exception trace:")
traceback.print_exc()
if count == num_attempts:
print("Execution failed after {} attempts".format(count))
raise
def _run_command(
commands,
cwd=None,
stderr=subprocess.STDOUT,
shell=False,
env=None,
stream_stdout=True,
throw_on_retcode=True,
logger=None,
):
if logger is None:
logger = module_logger
if cwd is None:
cwd = os.getcwd()
t0 = time.perf_counter()
try:
logger.debug("[RunCommand]Executing {0} in {1}".format(commands, cwd))
out = ""
p = subprocess.Popen(commands, stdout=subprocess.PIPE, stderr=stderr, cwd=cwd, shell=shell, env=env)
for line in p.stdout:
line = line.decode("utf-8").rstrip()
if line and line.strip():
logger.debug(line)
if stream_stdout:
sys.stdout.write(line)
sys.stdout.write("\n")
out += line
out += "\n"
p.communicate()
retcode = p.poll()
if throw_on_retcode:
if retcode:
raise subprocess.CalledProcessError(retcode, p.args, output=out, stderr=p.stderr)
return retcode, out
finally:
t1 = time.perf_counter()
logger.debug("[RunCommand] Execution took {0}s for {1} in {2}".format(t1 - t0, commands, cwd))
def run_command(
commands, cwd=None, stderr=subprocess.STDOUT, shell=False, stream_stdout=True, throw_on_retcode=True, logger=None
):
return _run_command(
commands,
cwd=cwd,
stderr=stderr,
shell=shell,
stream_stdout=stream_stdout,
throw_on_retcode=throw_on_retcode,
logger=logger,
)
Source: promptflow/scripts/building/utils.py
#!/usr/bin/env bash
#---------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
#---------------------------------------------------------------------------------------------
#
# Bash script to install the prompt flow
#
INSTALL_SCRIPT_URL="https://promptflowartifact.blob.core.windows.net/linux-install-scripts/install.py"
_TTY=/dev/tty
install_script=$(mktemp -t promptflow_install_tmp_XXXXXX) || exit
echo "Downloading prompt flow install script from $INSTALL_SCRIPT_URL to $install_script."
curl -# $INSTALL_SCRIPT_URL > $install_script || exit
python_cmd=python3
if ! command -v python3 >/dev/null 2>&1
then
echo "ERROR: python3 not found."
echo "If python3 is available on the system, add it to PATH."
exit 1
fi
chmod 775 $install_script
echo "Running install script."
$python_cmd $install_script < $_TTY
Source: promptflow/scripts/installer/curl_install_pypi/install
- name: {{ step_name }}
working-directory: {{ working_dir }}
run: |
AOAI_API_KEY=${{ '{{' }} secrets.AOAI_API_KEY_TEST }}
AOAI_API_ENDPOINT=${{ '{{' }} secrets.AOAI_API_ENDPOINT_TEST }}
AOAI_API_ENDPOINT=$(echo ${AOAI_API_ENDPOINT//\//\\/})
if [[ -e .env.example ]]; then
echo "env replacement"
sed -i -e "s/<your_AOAI_key>/$AOAI_API_KEY/g" -e "s/<your_AOAI_endpoint>/$AOAI_API_ENDPOINT/g" .env.example
mv .env.example .env
fi
Source: promptflow/scripts/readme/ghactions_driver/workflow_steps/step_create_env.yml.jinja2
# Generate Readme file for the examples folder
import json
from pathlib import Path
import workflow_generator
import readme_generator
from jinja2 import Environment, FileSystemLoader
from ghactions_driver.readme_step import ReadmeStepsManage
from operator import itemgetter
import argparse
import sys
import os
import re
BRANCH = "main"
def get_notebook_readme_description(notebook) -> str:
"""
    Get each ipynb's metadata description from .metadata.description
"""
try:
# read in notebook
with open(notebook, "r", encoding="utf-8") as f:
data = json.load(f)
return data["metadata"]["description"]
except Exception:
print(f"{notebook} metadata description not set")
return ""
def get_readme_description_first_sentence(readme) -> str:
"""
Get each readme first sentence of first paragraph
"""
try:
with open(readme, "r", encoding="utf-8") as f:
# read first line
line = f.readline()
sentence = ""
while True:
line = f.readline()
if line.startswith("#"):
line = ""
# skip metadata section
if line.startswith("---") or line.startswith("resources"):
line = ""
if line.strip() == "" and sentence != "":
break
elif "." in line:
sentence += " " + line.split(".")[0].strip()
break
else:
if sentence == "":
sentence += line.strip()
elif line.strip() != "":
sentence += " " + line.strip()
return sentence
except Exception:
print(f"Error during reading {readme}")
return ""
def write_readme(workflow_telemetries, readme_telemetries):
global BRANCH
ReadmeStepsManage.git_base_dir()
readme_file = Path(ReadmeStepsManage.git_base_dir()) / "examples/README.md"
quickstarts = {
"readmes": [],
"notebooks": [],
}
tutorials = {
"readmes": [],
"notebooks": [],
}
flows = {
"readmes": [],
"notebooks": [],
}
evaluations = {
"readmes": [],
"notebooks": [],
}
chats = {
"readmes": [],
"notebooks": [],
}
toolusecases = {
"readmes": [],
"notebooks": [],
}
connections = {
"readmes": [],
"notebooks": [],
}
for workflow_telemetry in workflow_telemetries:
notebook_name = f"{workflow_telemetry.name}.ipynb"
gh_working_dir = workflow_telemetry.gh_working_dir
pipeline_name = workflow_telemetry.workflow_name
yaml_name = f"{pipeline_name}.yml"
# For workflows, open ipynb as raw json and
# setup description at .metadata.description
description = get_notebook_readme_description(workflow_telemetry.notebook)
notebook_path = gh_working_dir.replace("examples/", "") + f"/{notebook_name}"
if gh_working_dir.startswith("examples/flows/standard"):
flows["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/connections"):
connections["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/flows/evaluation"):
evaluations["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/tutorials"):
if "quickstart" in notebook_name:
quickstarts["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
tutorials["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/flows/chat"):
chats["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/tools/use-cases"):
toolusecases["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
print(f"Unknown workflow type: {gh_working_dir}")
# Adjust tutorial names:
for readme_telemetry in readme_telemetries:
if readme_telemetry.readme_name.endswith("README.md"):
notebook_name = readme_telemetry.readme_folder.split("/")[-1]
else:
notebook_name = readme_telemetry.readme_name.split("/")[-1].replace(
".md", ""
)
notebook_path = readme_telemetry.readme_name.replace("examples/", "")
pipeline_name = readme_telemetry.workflow_name
yaml_name = f"{readme_telemetry.workflow_name}.yml"
description = get_readme_description_first_sentence(
readme_telemetry.readme_name
)
readme_folder = readme_telemetry.readme_folder
if readme_folder.startswith("examples/flows/standard"):
flows["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/connections"):
connections["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/flows/evaluation"):
evaluations["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/tutorials"):
if "quickstart" in notebook_name:
quickstarts["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
tutorials["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/flows/chat"):
chats["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/tools/use-cases"):
toolusecases["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
print(f"Unknown workflow type: {readme_folder}")
quickstarts["notebooks"] = sorted(
quickstarts["notebooks"],
key=itemgetter("name"),
reverse=True,
)
replacement = {
"branch": BRANCH,
"tutorials": tutorials,
"flows": flows,
"evaluations": evaluations,
"chats": chats,
"toolusecases": toolusecases,
"connections": connections,
"quickstarts": quickstarts,
}
print("writing README.md...")
env = Environment(
loader=FileSystemLoader(
Path(ReadmeStepsManage.git_base_dir())
/ "scripts/readme/ghactions_driver/readme_templates"
)
)
template = env.get_template("README.md.jinja2")
with open(readme_file, "w") as f:
f.write(template.render(replacement))
print("finished writing README.md")
def main(check):
if check:
# Disable print
sys.stdout = open(os.devnull, "w")
input_glob = ["examples/**/*.ipynb"]
workflow_telemetry = []
workflow_generator.main(input_glob, workflow_telemetry, check=check)
input_glob_readme = [
"examples/flows/**/README.md",
"examples/connections/**/README.md",
"examples/tutorials/e2e-development/*.md",
"examples/tutorials/flow-fine-tuning-evaluation/*.md",
"examples/tutorials/**/README.md",
"examples/tools/use-cases/**/README.md",
]
    # Exclude these READMEs since this is a 3p integration folder; pipeline generation is not included.
input_glob_readme_exclude = ["examples/flows/integrations/**/README.md"]
readme_telemetry = []
readme_generator.main(
input_glob_readme, input_glob_readme_exclude, readme_telemetry
)
write_readme(workflow_telemetry, readme_telemetry)
if check:
output_object = {}
for workflow in workflow_telemetry:
workflow_items = re.split(r"\[|,| |\]", workflow.path_filter)
workflow_items = list(filter(None, workflow_items))
output_object[workflow.workflow_name] = []
for item in workflow_items:
if item == "examples/*requirements.txt":
output_object[workflow.workflow_name].append(
"examples/requirements.txt"
)
output_object[workflow.workflow_name].append(
"examples/dev_requirements.txt"
)
continue
output_object[workflow.workflow_name].append(item)
for readme in readme_telemetry:
output_object[readme.workflow_name] = []
readme_items = re.split(r"\[|,| |\]", readme.path_filter)
readme_items = list(filter(None, readme_items))
for item in readme_items:
if item == "examples/*requirements.txt":
output_object[readme.workflow_name].append(
"examples/requirements.txt"
)
output_object[readme.workflow_name].append(
"examples/dev_requirements.txt"
)
continue
output_object[readme.workflow_name].append(item)
# enable output
sys.stdout = sys.__stdout__
return output_object
else:
return ""
if __name__ == "__main__":
# setup argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"-c", "--check", action="store_true", help="Check what file is affected"
)
args = parser.parse_args()
output = main(args.check)
print(json.dumps(output))
| promptflow/scripts/readme/readme.py/0 | {
"file_path": "promptflow/scripts/readme/readme.py",
"repo_id": "promptflow",
"token_count": 7045
} | 23 |
include {{ package_name }}/yamls/*.yaml | promptflow/scripts/tool/templates/MANIFEST.in.j2/0 | {
"file_path": "promptflow/scripts/tool/templates/MANIFEST.in.j2",
"repo_id": "promptflow",
"token_count": 14
} | 24 |
import inspect
from enum import Enum, EnumMeta
from typing import Callable, Union, get_args, get_origin
from promptflow.contracts.tool import ConnectionType, InputDefinition, ValueType, ToolType
from promptflow.contracts.types import PromptTemplate
def value_to_str(val):
if val is inspect.Parameter.empty:
# For empty case, default field will be skipped when dumping to json
return None
if val is None:
# Dump default: "" in json to avoid UI validation error
return ""
if isinstance(val, Enum):
return val.value
return str(val)
def resolve_annotation(anno) -> Union[str, list]:
"""Resolve the union annotation to type list."""
origin = get_origin(anno)
if origin != Union:
return anno
# Optional[Type] is Union[Type, NoneType], filter NoneType out
args = [arg for arg in get_args(anno) if arg != type(None)] # noqa: E721
return args[0] if len(args) == 1 else args
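# Illustrative behaviour of resolve_annotation (shown as comments; Optional is
# typing.Optional, which is not imported in this module):
#   resolve_annotation(str)              -> str
#   resolve_annotation(Optional[str])    -> str           (NoneType is dropped)
#   resolve_annotation(Union[int, str])  -> [int, str]    (a list of types)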
def param_to_definition(param, value_type) -> (InputDefinition, bool):
default_value = param.default
enum = None
custom_type = None
# Get value type and enum from default if no annotation
if default_value is not inspect.Parameter.empty and value_type == inspect.Parameter.empty:
value_type = default_value.__class__ if isinstance(default_value, Enum) else type(default_value)
# Extract enum for enum class
if isinstance(value_type, EnumMeta):
enum = [str(option.value) for option in value_type]
value_type = str
is_connection = False
if ConnectionType.is_connection_value(value_type):
if ConnectionType.is_custom_strong_type(value_type):
typ = ["CustomConnection"]
custom_type = [value_type.__name__]
else:
typ = [value_type.__name__]
is_connection = True
elif isinstance(value_type, list):
if not all(ConnectionType.is_connection_value(t) for t in value_type):
typ = [ValueType.OBJECT]
else:
custom_connection_added = False
typ = []
custom_type = []
for t in value_type:
if ConnectionType.is_custom_strong_type(t):
if not custom_connection_added:
custom_connection_added = True
typ.append("CustomConnection")
custom_type.append(t.__name__)
else:
typ.append(t.__name__)
is_connection = True
else:
typ = [ValueType.from_type(value_type)]
return InputDefinition(type=typ, default=value_to_str(default_value),
description=None, enum=enum, custom_type=custom_type), is_connection
def function_to_interface(f: Callable, tool_type, initialize_inputs=None) -> tuple:
sign = inspect.signature(f)
all_inputs = {}
input_defs = {}
connection_types = []
# Initialize the counter for prompt template
prompt_template_count = 0
# Collect all inputs from class and func
if initialize_inputs:
if any(k for k in initialize_inputs if k in sign.parameters):
raise Exception(f'Duplicate inputs found from {f.__name__!r} and "__init__()"!')
all_inputs = {**initialize_inputs}
all_inputs.update(
{
k: v
for k, v in sign.parameters.items()
if k != "self" and v.kind != v.VAR_KEYWORD and v.kind != v.VAR_POSITIONAL # TODO: Handle these cases
}
)
# Resolve inputs to definitions.
for k, v in all_inputs.items():
# Get value type from annotation
value_type = resolve_annotation(v.annotation)
if value_type is PromptTemplate:
# custom llm tool has prompt template as input, skip it
prompt_template_count += 1
continue
input_def, is_connection = param_to_definition(v, value_type)
input_defs[k] = input_def
if is_connection:
connection_types.append(input_def.type)
# Check PromptTemplate input:
# a. For custom llm tool, there should be exactly one PromptTemplate input
# b. For python tool, PromptTemplate input is not supported
if tool_type == ToolType.PYTHON and prompt_template_count > 0:
raise Exception(f"Input of type 'PromptTemplate' not supported in python tool '{f.__name__}'. ")
if tool_type == ToolType.CUSTOM_LLM and prompt_template_count == 0:
raise Exception(f"No input of type 'PromptTemplate' was found in custom llm tool '{f.__name__}'. ")
if tool_type == ToolType.CUSTOM_LLM and prompt_template_count > 1:
raise Exception(f"Multiple inputs of type 'PromptTemplate' were found in '{f.__name__}'. "
"Only one input of this type is expected.")
outputs = {}
# Note: We don't have output definition now
# outputs = {"output": OutputDefinition("output", [ValueType.from_type(type(sign.return_annotation))], "", True)}
# if is_dataclass(sign.return_annotation):
# for f in fields(sign.return_annotation):
# outputs[f.name] = OutputDefinition(f.name, [ValueType.from_type(
# type(getattr(sign.return_annotation, f.name)))], "", False)
return input_defs, outputs, connection_types
| promptflow/scripts/tool/utils/tool_utils.py/0 | {
"file_path": "promptflow/scripts/tool/utils/tool_utils.py",
"repo_id": "promptflow",
"token_count": 2128
} | 25 |
import json
import os
import pytest
import sys
from pathlib import Path
from pytest_mock import MockerFixture # noqa: E402
from tests.utils import verify_url_exists
# Avoid circular dependencies: Use import 'from promptflow._internal' instead of 'from promptflow'
# since the code here is in promptflow namespace as well
from promptflow._internal import ConnectionManager
from promptflow.connections import CustomConnection, OpenAIConnection, SerpConnection
from promptflow.contracts.multimedia import Image
from promptflow.tools.aoai import AzureOpenAI
PROMOTFLOW_ROOT = Path(__file__).absolute().parents[1]
CONNECTION_FILE = (PROMOTFLOW_ROOT / "connections.json").resolve().absolute().as_posix()
root_str = str(PROMOTFLOW_ROOT.resolve().absolute())
if root_str not in sys.path:
sys.path.insert(0, root_str)
# connection
@pytest.fixture(autouse=True)
def use_secrets_config_file(mocker: MockerFixture):
mocker.patch.dict(os.environ, {"PROMPTFLOW_CONNECTIONS": CONNECTION_FILE})
@pytest.fixture
def azure_open_ai_connection():
return ConnectionManager().get("azure_open_ai_connection")
@pytest.fixture
def aoai_provider(azure_open_ai_connection) -> AzureOpenAI:
aoai_provider = AzureOpenAI(azure_open_ai_connection)
return aoai_provider
@pytest.fixture
def open_ai_connection():
return ConnectionManager().get("open_ai_connection")
@pytest.fixture
def serp_connection():
return ConnectionManager().get("serp_connection")
def verify_om_llm_custom_connection(connection: CustomConnection) -> bool:
'''Verify that there is a MIR endpoint up and available for the Custom Connection.
We explicitly do not pass the endpoint key to avoid the delay in generating a response.
'''
return verify_url_exists(connection.configs['endpoint_url'])
@pytest.fixture
def gpt2_custom_connection():
return ConnectionManager().get("gpt2_connection")
@pytest.fixture
def open_model_llm_ws_service_connection() -> bool:
try:
creds_custom_connection: CustomConnection = ConnectionManager().get("open_source_llm_ws_service_connection")
subs = json.loads(creds_custom_connection.secrets['service_credential'])
for key, value in subs.items():
os.environ[key] = value
return True
except Exception as e:
print(f"""Something failed setting environment variables for service credentials.
Error: {e}""")
return False
@pytest.fixture(autouse=True)
def skip_if_no_api_key(request, mocker):
mocker.patch.dict(os.environ, {"PROMPTFLOW_CONNECTIONS": CONNECTION_FILE})
if request.node.get_closest_marker('skip_if_no_api_key'):
conn_name = request.node.get_closest_marker('skip_if_no_api_key').args[0]
connection = request.getfixturevalue(conn_name)
# if dummy placeholder key, skip.
if isinstance(connection, OpenAIConnection) or isinstance(connection, SerpConnection):
if "-api-key" in connection.api_key:
pytest.skip('skipped because no key')
elif isinstance(connection, CustomConnection):
if "endpoint_api_key" not in connection.secrets or "-api-key" in connection.secrets["endpoint_api_key"]:
pytest.skip('skipped because no key')
# Verify Custom Connections, but only those used by the Open_Model_LLM Tool
if "endpoint_url" in connection.configs and "-endpoint-url" not in connection.configs["endpoint_url"]:
if not verify_om_llm_custom_connection(connection):
pytest.skip('skipped because the connection is not valid')
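# Illustrative usage sketch of the marker handled above (hypothetical test, not
# collected from this conftest): a test requests the connection fixture and is
# skipped when only a placeholder key or endpoint is configured.
#   @pytest.mark.skip_if_no_api_key("open_ai_connection")
#   def test_completion(open_ai_connection):
#       ...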
# example prompts
@pytest.fixture
def example_prompt_template() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/marketing_writer/prompt.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def example_prompt_template_with_name_in_roles() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/prompt_with_name_in_roles.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def chat_history() -> list:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/marketing_writer/history.json") as f:
history = json.load(f)
return history
@pytest.fixture
def example_prompt_template_with_function() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/prompt_with_function.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def example_prompt_template_with_image() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/prompt_with_image.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def example_image() -> Image:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/images/number10.jpg", "rb") as f:
image = Image(f.read())
return image
# functions
@pytest.fixture
def functions():
return [
{
"name": "get_current_weather",
"parameters": {
"type": "object",
"properties": {},
},
}
]
@pytest.fixture
def azure_content_safety_connection():
return ConnectionManager().get("azure_content_safety_connection")
| promptflow/src/promptflow-tools/tests/conftest.py/0 | {
"file_path": "promptflow/src/promptflow-tools/tests/conftest.py",
"repo_id": "promptflow",
"token_count": 2009
} | 26 |
import pytest
from promptflow.tools.openai_gpt4v import OpenAI
@pytest.fixture
def openai_provider(open_ai_connection) -> OpenAI:
return OpenAI(open_ai_connection)
@pytest.mark.usefixtures("use_secrets_config_file")
@pytest.mark.skip_if_no_api_key("open_ai_connection")
class TestOpenAIGPT4V:
def test_openai_gpt4v_chat(self, openai_provider, example_prompt_template_with_image, example_image):
result = openai_provider.chat(
prompt=example_prompt_template_with_image,
model="gpt-4-vision-preview",
max_tokens=480,
temperature=0,
question="which number did you see in this picture?",
image_input=example_image,
)
assert "10" == result
def test_openai_gpt4v_stream_chat(self, openai_provider, example_prompt_template_with_image, example_image):
result = openai_provider.chat(
prompt=example_prompt_template_with_image,
model="gpt-4-vision-preview",
max_tokens=480,
temperature=0,
question="which number did you see in this picture?",
image_input=example_image,
)
answer = ""
while True:
try:
answer += next(result)
except Exception:
break
assert "10" == result
| promptflow/src/promptflow-tools/tests/test_openai_gpt4v.py/0 | {
"file_path": "promptflow/src/promptflow-tools/tests/test_openai_gpt4v.py",
"repo_id": "promptflow",
"token_count": 626
} | 27 |
import argparse
import json
from promptflow._cli._params import add_param_set_positional, base_params
from promptflow._cli._utils import activate_action, list_of_dict_to_dict
from promptflow._sdk._configuration import Configuration, InvalidConfigValue
from promptflow._sdk._utils import print_red_error
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
def add_config_set(subparsers):
epilog = """
Examples:
# Config connection provider to azure workspace for current user:
pf config set connection.provider="azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
""" # noqa: E501
activate_action(
name="set",
description="Set prompt flow configs for current user.",
epilog=epilog,
add_params=[add_param_set_positional] + base_params,
subparsers=subparsers,
help_message="Set prompt flow configs for current user, configs will be stored at ~/.promptflow/pf.yaml.",
action_param_name="sub_action",
)
def add_config_show(subparsers):
epilog = """
Examples:
# Show prompt flow for current user:
pf config show
"""
activate_action(
name="show",
description="Show prompt flow configs for current user.",
epilog=epilog,
add_params=base_params,
subparsers=subparsers,
help_message="Show prompt flow configs for current user.",
action_param_name="sub_action",
)
def add_config_parser(subparsers):
config_parser = subparsers.add_parser(
"config", description="A CLI tool to set prompt flow configs for current user.", help="pf config"
)
subparsers = config_parser.add_subparsers()
add_config_set(subparsers)
add_config_show(subparsers)
config_parser.set_defaults(action="config")
def dispatch_config_commands(args: argparse.Namespace):
if args.sub_action == "set":
set_config(args)
if args.sub_action == "show":
show_config()
def set_config(args):
params_override = list_of_dict_to_dict(args.params_override)
for k, v in params_override.items():
logger.debug("Setting config %s to %s", k, v)
try:
Configuration.get_instance().set_config(k, v)
print(f"Set config {args.params_override} successfully.")
except InvalidConfigValue as e:
error_message = f"Invalid config value {v!r} for {k!r}: {str(e)}"
print_red_error(error_message)
def show_config():
configs = Configuration.get_instance().get_all()
print(json.dumps(configs, indent=4))
| promptflow/src/promptflow/promptflow/_cli/_pf/_config.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/_pf/_config.py",
"repo_id": "promptflow",
"token_count": 1038
} | 28 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from promptflow._version import VERSION
USER_AGENT = "{}/{}".format("promptflow-cli", VERSION)
| promptflow/src/promptflow/promptflow/_cli/_user_agent.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/_user_agent.py",
"repo_id": "promptflow",
"token_count": 56
} | 29 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
groundtruth:
type: string
prediction:
type: string
outputs:
results:
type: string
reference: ${line_process.output}
nodes:
- name: line_process
type: python
source:
type: code
path: line_process.py
inputs:
groundtruth: ${inputs.groundtruth}
prediction: ${inputs.prediction}
- name: aggregate
type: python
source:
type: code
path: aggregate.py
inputs:
processed_results: ${line_process.output}
aggregation: true
environment:
python_requirements_txt: requirements.txt
| promptflow/src/promptflow/promptflow/_cli/data/evaluation_flow/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/data/evaluation_flow/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 225
} | 30 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import hashlib
import json
from dataclasses import dataclass
from typing import Callable, List
from promptflow._utils.logger_utils import flow_logger
from promptflow.contracts.run_info import RunInfo
from promptflow.storage import AbstractCacheStorage, AbstractRunStorage
PROMPTFLOW_HASH_ATTR = "__promptflow_hash_func"
def get_calculate_cache_func(tool_func):
return getattr(tool_func, PROMPTFLOW_HASH_ATTR, None)
def set_calculate_cache_func(tool_func, calculate_cache_func):
setattr(tool_func, PROMPTFLOW_HASH_ATTR, calculate_cache_func)
def enable_cache(calculate_cache_func):
def decorator_enable_cache(func):
set_calculate_cache_func(func, calculate_cache_func)
return func
return decorator_enable_cache
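# Illustrative sketch of the decorator above (the tool function and its cache
# key are hypothetical and unused elsewhere): the registered function receives
# the same arguments as the tool and must return a string used to build the
# cache hash.
def _example_cache_key(question: str) -> str:
    return question


@enable_cache(_example_cache_key)
def _example_tool(question: str) -> str:
    return f"answer to {question}"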
@dataclass
class CacheInfo:
hash_id: str = None
cache_string: str = None
@dataclass
class CacheResult:
result: object = None
cached_run_id: str = None
cached_flow_run_id: str = None
hit_cache: bool = False
class AbstractCacheManager:
@staticmethod
def init_from_env() -> "AbstractCacheManager":
# TODO: Return CacheManager after local execution is enabled.
return DummyCacheManager()
def calculate_cache_info(self, flow_id: str, tool_method: Callable, args, kwargs) -> CacheInfo:
raise NotImplementedError("AbstractCacheManager has not implemented method calculate_cache_info.")
def get_cache_result(self, cache_info: CacheInfo) -> CacheResult:
raise NotImplementedError("AbstractCacheManager has not implemented method get_cache_result.")
def persist_result(self, run_info: RunInfo, hash_id: str, cache_string: str, flow_id: str):
raise NotImplementedError("AbstractCacheManager has not implemented method persist_result.")
class DummyCacheManager(AbstractCacheManager):
def __init__(self):
pass
def calculate_cache_info(self, flow_id: str, tool_method: Callable, args, kwargs) -> CacheInfo:
return None
def get_cache_result(self, cache_info: CacheInfo) -> CacheResult:
return None
def persist_result(self, run_info: RunInfo, hash_id: str, cache_string: str, flow_id: str):
pass
class CacheManager(AbstractCacheManager):
def __init__(self, run_storage: AbstractRunStorage, cache_storage: AbstractCacheStorage):
self._run_storage = run_storage
self._cache_storage = cache_storage
def calculate_cache_info(self, flow_id: str, tool_method: Callable, args, kwargs) -> CacheInfo:
cache_function = get_calculate_cache_func(tool_method)
# Cache function is not registered with this tool.
if cache_function is None:
return None
# Calculate cache string and hash id.
try:
cache_string = cache_function(*args, **kwargs)
except Exception as ex:
flow_logger.warning(f"Failed to calculate cache string. Exception: {ex}")
return None
# Add flow_id and tool_name in the cache string.
# So that different flow_id and tool_name cannot reuse.
other_cache_string = json.dumps(
{
"flow_id": flow_id,
"tool_name": tool_method.__qualname__,
}
)
cache_string += other_cache_string
hash_id = self._calculate_hash_id(cache_string)
return CacheInfo(hash_id=hash_id, cache_string=cache_string)
def get_cache_result(self, cache_info: CacheInfo) -> CacheResult:
hash_id = cache_info.hash_id
# Query if cache result existed by hash_id.
cache_result_list: List[CacheInfo] = self._cache_storage.get_cache_record_list(hash_id=hash_id)
if len(cache_result_list) == 0:
return None
# Get the latest cache result.
cache_result = sorted(cache_result_list, reverse=True, key=lambda i: i.end_time)[0]
try:
cached_run_info = self._run_storage.get_node_run(cache_result.run_id)
except Exception as ex:
flow_logger.warning(
f"Failed to get cached run result. \
                Run id:{cache_result.run_id}, \
Exception: {ex}"
)
return None
flow_logger.info(
f"Hit cached result of previous run: run id: \
{cached_run_info.run_id}, flow run id: {cached_run_info.flow_run_id}"
)
return CacheResult(
result=cached_run_info.result,
cached_run_id=cached_run_info.run_id,
cached_flow_run_id=cached_run_info.flow_run_id,
hit_cache=True,
)
def persist_result(self, run_info: RunInfo, cache_info: CacheInfo, flow_id: str):
self._cache_storage.persist_cache_result(run_info, cache_info.hash_id, cache_info.cache_string, flow_id)
@staticmethod
def _calculate_hash_id(cache_string: str):
return hashlib.sha1(cache_string.encode("utf-8")).hexdigest()
| promptflow/src/promptflow/promptflow/_core/cache_manager.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_core/cache_manager.py",
"repo_id": "promptflow",
"token_count": 2075
} | 31 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
try:
from flask_restx import Api, Namespace, Resource, fields # noqa: F401
except ImportError as ex:
from promptflow.exceptions import UserErrorException
raise UserErrorException(f"Please try 'pip install promptflow[pfs]' to install dependency, {ex.msg}.")
| promptflow/src/promptflow/promptflow/_sdk/_service/__init__.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_service/__init__.py",
"repo_id": "promptflow",
"token_count": 138
} | 32 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from flask import Blueprint, current_app as app, request
from promptflow._sdk._serving.monitor.flow_monitor import FlowMonitor
def is_monitoring_enabled() -> bool:
enabled = False
if request.endpoint in app.view_functions:
view_func = app.view_functions[request.endpoint]
enabled = hasattr(view_func, "_enable_monitoring")
return enabled
def construct_monitor_blueprint(flow_monitor: FlowMonitor):
"""Construct monitor blueprint."""
monitor_blueprint = Blueprint("monitor_blueprint", __name__)
@monitor_blueprint.before_app_request
def start_monitoring():
if not is_monitoring_enabled():
return
flow_monitor.start_monitoring()
@monitor_blueprint.after_app_request
def finish_monitoring(response):
if not is_monitoring_enabled():
return response
flow_monitor.finish_monitoring(response.status_code)
return response
return monitor_blueprint
| promptflow/src/promptflow/promptflow/_sdk/_serving/blueprint/monitor_blueprint.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/blueprint/monitor_blueprint.py",
"repo_id": "promptflow",
"token_count": 365
} | 33 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import logging
from promptflow.contracts.flow import Flow, FlowInputDefinition, FlowOutputDefinition
from promptflow.contracts.tool import ValueType
type_mapping = {
ValueType.INT: "integer",
ValueType.DOUBLE: "number",
ValueType.BOOL: "boolean",
ValueType.STRING: "string",
ValueType.LIST: "array",
ValueType.OBJECT: "object",
ValueType.IMAGE: "object", # Dump as object as portal test page can't handle image now
}
def generate_input_field_schema(input: FlowInputDefinition) -> dict:
field_schema = {"type": type_mapping[input.type]}
if input.description:
field_schema["description"] = input.description
if input.default:
field_schema["default"] = input.default
if input.type == ValueType.OBJECT:
field_schema["additionalProperties"] = {}
if input.type == ValueType.LIST:
field_schema["items"] = {"type": "object", "additionalProperties": {}}
return field_schema
def generate_output_field_schema(output: FlowOutputDefinition) -> dict:
field_schema = {"type": type_mapping[output.type]}
if output.description:
field_schema["description"] = output.description
if output.type == ValueType.OBJECT:
field_schema["additionalProperties"] = {}
if output.type == ValueType.LIST:
field_schema["items"] = {"type": "object", "additionalProperties": {}}
return field_schema
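# Illustrative sketch of the helpers above (the constructor call is hypothetical):
#   generate_input_field_schema(FlowInputDefinition(type=ValueType.LIST))
#   -> {"type": "array", "items": {"type": "object", "additionalProperties": {}}}
# A plain string input with no default or description yields just {"type": "string"}.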
def generate_swagger(flow: Flow, samples, outputs_to_remove: list) -> dict:
"""convert a flow to swagger object."""
swagger = {"openapi": "3.0.0"}
swagger["info"] = {
"title": f"Promptflow[{flow.name}] API",
"version": "1.0.0",
"x-flow-name": str(flow.name),
}
swagger["components"] = {
"securitySchemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
}
}
}
swagger["security"] = [{"bearerAuth": []}]
input_schema = {"type": "object"}
request_body_required = False
if len(flow.inputs) > 0:
input_schema["properties"] = {}
input_schema["required"] = []
request_body_required = True
for name, input in flow.inputs.items():
if input.is_chat_input:
swagger["info"]["x-chat-input"] = name
swagger["info"]["x-flow-type"] = "chat"
if input.is_chat_history:
swagger["info"]["x-chat-history"] = name
input_schema["properties"][name] = generate_input_field_schema(input)
input_schema["required"].append(name)
output_schema = {"type": "object"}
if len(flow.outputs) > 0:
output_schema["properties"] = {}
for name, output in flow.outputs.items():
# skip evaluation only outputs in swagger
# TODO remove this if sdk removed this evaluation_only field
if output.evaluation_only:
continue
if output.is_chat_output:
swagger["info"]["x-chat-output"] = name
if outputs_to_remove and name in outputs_to_remove:
continue
output_schema["properties"][name] = generate_output_field_schema(output)
example = {}
if samples:
if isinstance(samples, list):
example = samples[0]
else:
logging.warning("samples should be a list of dict, but got %s, skipped.", type(samples))
swagger["paths"] = {
"/score": {
"post": {
"summary": f"run promptflow: {flow.name} with an given input",
"requestBody": {
"description": "promptflow input data",
"required": request_body_required,
"content": {
"application/json": {
"schema": input_schema,
"example": example, # need to check this based on the sample data
}
},
},
"responses": {
"200": {
"description": "successful operation",
"content": {
"application/json": {
"schema": output_schema,
}
},
},
"400": {
"description": "Invalid input",
},
"default": {
"description": "unexpected error",
},
},
}
}
}
return swagger
| promptflow/src/promptflow/promptflow/_sdk/_serving/swagger.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/swagger.py",
"repo_id": "promptflow",
"token_count": 2285
} | 34 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import shutil
import tempfile
import webbrowser
from dataclasses import asdict
from pathlib import Path
from typing import Optional
from promptflow._sdk._constants import VIS_HTML_TMPL, VIS_JS_BUNDLE_FILENAME
from promptflow._sdk._utils import render_jinja_template
from promptflow.contracts._run_management import VisualizationRender
def generate_html_string(data: dict) -> str:
visualization_render = VisualizationRender(data=data)
return render_jinja_template(VIS_HTML_TMPL, **asdict(visualization_render))
def try_to_open_html(html_path: str) -> None:
print(f"The HTML file is generated at {str(Path(html_path).resolve().absolute())!r}.")
print("Trying to view the result in a web browser...")
web_browser_opened = webbrowser.open(f"file://{html_path}")
if not web_browser_opened:
print(
f"Failed to visualize from the web browser, the HTML file locates at {html_path!r}.\n"
"You can manually open it with your web browser, or try SDK to visualize it."
)
else:
print("Successfully visualized from the web browser.")
def dump_js_bundle(html_path: str) -> None:
js_bundle_src_path = Path(__file__).parent / "data" / VIS_JS_BUNDLE_FILENAME
js_bundle_dst_path = Path(html_path).parent / VIS_JS_BUNDLE_FILENAME
shutil.copy(js_bundle_src_path, js_bundle_dst_path)
def dump_html(html_string: str, html_path: Optional[str] = None, open_html: bool = True) -> None:
if html_path is not None:
with open(html_path, "w") as f:
f.write(html_string)
else:
with tempfile.NamedTemporaryFile(prefix="pf-visualize-detail-", suffix=".html", delete=False) as f:
f.write(html_string.encode("utf-8"))
html_path = f.name
dump_js_bundle(html_path)
if open_html:
try_to_open_html(html_path)
| promptflow/src/promptflow/promptflow/_sdk/_visualize_functions.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_visualize_functions.py",
"repo_id": "promptflow",
"token_count": 751
} | 35 |
import json
import os
from pathlib import Path
from PIL import Image
import streamlit as st
from streamlit_quill import st_quill
from copy import copy
from types import GeneratorType
import time
from promptflow import load_flow
from promptflow._sdk._utils import dump_flow_result
from promptflow._utils.multimedia_utils import convert_multimedia_data_to_base64, persist_multimedia_data
from promptflow._sdk._submitter.utils import get_result_output, resolve_generator
from utils import dict_iter_render_message, parse_list_from_html, parse_image_content, render_single_dict_message
invoker = None
generator_record = {}
def start():
def clear_chat() -> None:
st.session_state.messages = []
def render_message(role, message_items):
with st.chat_message(role):
if is_chat_flow:
render_single_dict_message(message_items)
else:
dict_iter_render_message(message_items)
def show_conversation() -> None:
if "messages" not in st.session_state:
st.session_state.messages = []
st.session_state.history = []
if st.session_state.messages:
for role, message_items in st.session_state.messages:
render_message(role, message_items)
def get_chat_history_from_session():
if "history" in st.session_state:
return st.session_state.history
return []
def post_process_dump_result(response, session_state_history):
response = resolve_generator(response, generator_record)
# Get base64 for multi modal object
resolved_outputs = {
k: convert_multimedia_data_to_base64(v, with_type=True, dict_type=True)
for k, v in response.output.items()
}
st.session_state.messages.append(("assistant", resolved_outputs))
session_state_history.update({"outputs": response.output})
st.session_state.history.append(session_state_history)
if is_chat_flow:
dump_path = Path(flow_path).parent
response.output = persist_multimedia_data(
response.output, base_dir=dump_path, sub_dir=Path(".promptflow/output")
)
dump_flow_result(flow_folder=dump_path, flow_result=response, prefix="chat")
return resolved_outputs
def submit(**kwargs) -> None:
st.session_state.messages.append(("user", kwargs))
session_state_history = dict()
session_state_history.update({"inputs": kwargs})
with container:
render_message("user", kwargs)
# Force append chat history to kwargs
if is_chat_flow:
response = run_flow({chat_history_input_name: get_chat_history_from_session(), **kwargs})
else:
response = run_flow(kwargs)
if is_streaming:
# Display assistant response in chat message container
with container:
with st.chat_message("assistant"):
message_placeholder = st.empty()
full_response = f"{chat_output_name}:"
chat_output = response.output[chat_output_name]
if isinstance(chat_output, GeneratorType):
# Simulate stream of response with milliseconds delay
for chunk in get_result_output(chat_output, generator_record):
full_response += chunk + " "
time.sleep(0.05)
# Add a blinking cursor to simulate typing
message_placeholder.markdown(full_response + "▌")
message_placeholder.markdown(full_response)
post_process_dump_result(response, session_state_history)
return
resolved_outputs = post_process_dump_result(response, session_state_history)
with container:
render_message("assistant", resolved_outputs)
def run_flow(data: dict) -> dict:
global invoker
if not invoker:
if flow_path:
flow = Path(flow_path)
else:
flow = Path(__file__).parent / "flow"
if flow.is_dir():
os.chdir(flow)
else:
os.chdir(flow.parent)
invoker = load_flow(flow)
invoker.context.streaming = is_streaming
result = invoker.invoke(data)
return result
image = Image.open(Path(__file__).parent / "logo.png")
st.set_page_config(
layout="wide",
page_title=f"{flow_name} - Promptflow App",
page_icon=image,
menu_items={
'About': """
# This is a Promptflow App.
You can refer to [promptflow](https://github.com/microsoft/promptflow) for more information.
"""
}
)
    # Set the primary button color here: buttons in the same Streamlit form must share a color,
    # but we only want the Run/Chat button to be blue.
st.config.set_option("theme.primaryColor", "#0F6CBD")
st.title(flow_name)
st.divider()
st.chat_message("assistant").write("Hello, please input following flow inputs.")
container = st.container()
with container:
show_conversation()
with st.form(key='input_form', clear_on_submit=True):
settings_path = os.path.join(os.path.dirname(__file__), "settings.json")
if os.path.exists(settings_path):
with open(settings_path, "r", encoding="utf-8") as file:
json_data = json.load(file)
environment_variables = list(json_data.keys())
for environment_variable in environment_variables:
secret_input = st.sidebar.text_input(label=environment_variable, type="password",
placeholder=f"Please input {environment_variable} here. "
f"If you input before, you can leave it blank.")
if secret_input != "":
os.environ[environment_variable] = secret_input
flow_inputs_params = {}
for flow_input, (default_value, value_type) in flow_inputs.items():
if value_type == "list":
st.text(flow_input)
input = st_quill(html=True, toolbar=["image"], key=flow_input,
placeholder='Please enter the list values and use the image icon to upload a picture. '
'Make sure to format each list item correctly with line breaks')
elif value_type == "image":
input = st.file_uploader(label=flow_input)
elif value_type == "string":
input = st.text_input(label=flow_input, placeholder=default_value)
else:
input = st.text_input(label=flow_input, placeholder=default_value)
flow_inputs_params.update({flow_input: copy(input)})
cols = st.columns(7)
submit_bt = cols[0].form_submit_button(label=label, type='primary')
clear_bt = cols[1].form_submit_button(label='Clear')
if submit_bt:
with st.spinner("Loading..."):
for flow_input, (default_value, value_type) in flow_inputs.items():
if value_type == "list":
input = parse_list_from_html(flow_inputs_params[flow_input])
flow_inputs_params.update({flow_input: copy(input)})
elif value_type == "image":
input = parse_image_content(
flow_inputs_params[flow_input],
flow_inputs_params[flow_input].type if flow_inputs_params[flow_input] else None
)
flow_inputs_params.update({flow_input: copy(input)})
submit(**flow_inputs_params)
if clear_bt:
with st.spinner("Cleaning..."):
clear_chat()
st.rerun()
if __name__ == "__main__":
with open(Path(__file__).parent / "config.json", 'r') as f:
config = json.load(f)
is_chat_flow = config["is_chat_flow"]
chat_history_input_name = config["chat_history_input_name"]
flow_path = config["flow_path"]
flow_name = config["flow_name"]
flow_inputs = config["flow_inputs"]
label = config["label"]
is_streaming = config["is_streaming"]
chat_output_name = config["chat_output_name"]
start()
| promptflow/src/promptflow/promptflow/_sdk/data/executable/main.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/data/executable/main.py",
"repo_id": "promptflow",
"token_count": 4010
} | 36 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
from ._flow_operations import FlowOperations
from ._run_operations import RunOperations
__all__ = [
"FlowOperations",
"RunOperations",
]
| promptflow/src/promptflow/promptflow/_sdk/operations/__init__.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/operations/__init__.py",
"repo_id": "promptflow",
"token_count": 104
} | 37 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import time
from functools import wraps
from typing import Tuple, Type, Union
from requests import Response
from promptflow._utils.logger_utils import LoggerFactory
logger = LoggerFactory.get_logger(__name__)
def retry(exception_to_check: Union[Type[Exception], Tuple[Type[Exception], ...]], tries=4, delay=3, backoff=2):
"""
From https://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/
Retry calling the decorated function using an exponential backoff.
http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/
original from: http://wiki.python.org/moin/PythonDecoratorLibrary#Retry
:param exception_to_check: the exception to check. may be a tuple of
exceptions to check
:type exception_to_check: Exception or tuple
:param tries: number of times to try (not retry) before giving up
:type tries: int
:param delay: initial delay between retries in seconds
:type delay: int
:param backoff: backoff multiplier e.g. value of 2 will double the delay
each retry
:type backoff: int
"""
def deco_retry(f):
@wraps(f)
def f_retry(*args, **kwargs):
retry_times, delay_seconds = tries, delay
while retry_times > 1:
try:
logger.debug("Running %s, %d more tries to go.", str(f), retry_times)
return f(*args, **kwargs)
except exception_to_check:
time.sleep(delay_seconds)
retry_times -= 1
delay_seconds *= backoff
logger.warning("%s, Retrying in %d seconds...", str(exception_to_check), delay_seconds)
return f(*args, **kwargs)
return f_retry # true decorator
return deco_retry
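# Illustrative usage sketch of the decorator above (the decorated function is
# hypothetical and is never called here): with these arguments it makes up to 4
# attempts, sleeping 3s, 6s and 12s between them before the final try.
@retry(ConnectionError, tries=4, delay=3, backoff=2)
def _example_flaky_call():
    raise ConnectionError("transient failure")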
HTTP_SAFE_CODES = set(range(506)) - {408, 429, 500, 502, 503, 504}
HTTP_RETRY_CODES = set(range(999)) - HTTP_SAFE_CODES
def http_retry_wrapper(f, tries=4, delay=3, backoff=2):
"""
:param f: function to be retried, should return a Response object.
:type f: Callable
:param tries: number of times to try (not retry) before giving up
:type tries: int
:param delay: initial delay between retries in seconds
:type delay: int
:param backoff: backoff multiplier e.g. value of 2 will double the delay
each retry
:type backoff: int
"""
@wraps(f)
def f_retry(*args, **kwargs):
retry_times, delay_seconds = tries, delay
while retry_times > 1:
result = f(*args, **kwargs)
if not isinstance(result, Response):
logger.debug(f"Not a retryable function, expected return type {Response}, got {type(result)}.")
return result
if result.status_code not in HTTP_RETRY_CODES:
return result
logger.warning(
f"Retryable error code {result.status_code} returned, retrying in {delay_seconds} seconds. "
f"Function {f.__name__}, Reason: {result.reason}"
)
time.sleep(delay_seconds)
retry_times -= 1
delay_seconds *= backoff
return f(*args, **kwargs)
return f_retry
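# Illustrative usage sketch of the wrapper above (the URL and names are
# hypothetical; left as comments so importing this module performs no I/O):
#   import requests
#   get_with_retry = http_retry_wrapper(requests.get, tries=4, delay=3)
#   response = get_with_retry("https://example.com/health")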
| promptflow/src/promptflow/promptflow/_utils/retry_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/retry_utils.py",
"repo_id": "promptflow",
"token_count": 1424
} | 38 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import os
from os import PathLike
from typing import Dict, List, Optional, Union
from azure.ai.ml import MLClient
from azure.core.credentials import TokenCredential
from promptflow._sdk._constants import MAX_SHOW_DETAILS_RESULTS
from promptflow._sdk._errors import RunOperationParameterError
from promptflow._sdk._user_agent import USER_AGENT
from promptflow._sdk._utils import ClientUserAgentUtil, setup_user_agent_to_operation_context
from promptflow._sdk.entities import Run
from promptflow.azure._restclient.service_caller_factory import _FlowServiceCallerFactory
from promptflow.azure.operations import RunOperations
from promptflow.azure.operations._arm_connection_operations import ArmConnectionOperations
from promptflow.azure.operations._connection_operations import ConnectionOperations
from promptflow.azure.operations._flow_operations import FlowOperations
from promptflow.exceptions import UserErrorException
class PFClient:
"""A client class to interact with Promptflow service.
Use this client to manage promptflow resources, e.g. runs.
:param credential: Credential to use for authentication, optional
:type credential: ~azure.core.credentials.TokenCredential
:param subscription_id: Azure subscription ID, optional for registry assets only, optional
:type subscription_id: typing.Optional[str]
:param resource_group_name: Azure resource group, optional for registry assets only, optional
:type resource_group_name: typing.Optional[str]
:param workspace_name: Workspace to use in the client, optional for non workspace dependent operations only,
optional.
:type workspace_name: typing.Optional[str]
:param kwargs: A dictionary of additional configuration parameters.
:type kwargs: dict
"""
def __init__(
self,
credential: TokenCredential = None,
subscription_id: Optional[str] = None,
resource_group_name: Optional[str] = None,
workspace_name: Optional[str] = None,
**kwargs,
):
self._validate_config_information(subscription_id, resource_group_name, workspace_name, kwargs)
# add user agent from kwargs if any
if isinstance(kwargs.get("user_agent", None), str):
ClientUserAgentUtil.append_user_agent(kwargs["user_agent"])
# append SDK ua to context
user_agent = setup_user_agent_to_operation_context(USER_AGENT)
kwargs.setdefault("user_agent", user_agent)
self._ml_client = kwargs.pop("ml_client", None) or MLClient(
credential=credential,
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
**kwargs,
)
try:
workspace = self._ml_client.workspaces.get(name=self._ml_client._operation_scope.workspace_name)
except Exception as e:
raise UserErrorException(message=str(e), error=e)
self._service_caller = _FlowServiceCallerFactory.get_instance(
workspace=workspace,
credential=self._ml_client._credential,
operation_scope=self._ml_client._operation_scope,
**kwargs,
)
self._flows = FlowOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
workspace=workspace,
**kwargs,
)
self._runs = RunOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
flow_operations=self._flows,
service_caller=self._service_caller,
workspace=workspace,
**kwargs,
)
self._connections = ConnectionOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
**kwargs,
)
self._arm_connections = ArmConnectionOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
**kwargs,
)
@staticmethod
def _validate_config_information(subscription_id, resource_group_name, workspace_name, kwargs):
"""Validate the config information in case wrong parameter name is passed into the constructor."""
sub_name, wrong_sub_name = "subscription_id", "subscription"
rg_name, wrong_rg_name = "resource_group_name", "resource_group"
ws_name, wrong_ws_name = "workspace_name", "workspace"
error_message = (
"You have passed in the wrong parameter name to initialize the PFClient, please use {0!r} instead of {1!r}."
)
if not subscription_id and kwargs.get(wrong_sub_name, None) is not None:
raise RunOperationParameterError(error_message.format(sub_name, wrong_sub_name))
if not resource_group_name and kwargs.get(wrong_rg_name, None) is not None:
raise RunOperationParameterError(error_message.format(rg_name, wrong_rg_name))
if not workspace_name and kwargs.get(wrong_ws_name, None) is not None:
raise RunOperationParameterError(error_message.format(ws_name, wrong_ws_name))
@property
def ml_client(self):
"""Return a client to interact with Azure ML services."""
return self._ml_client
@property
def runs(self):
"""Return the run operation object that can manage runs."""
return self._runs
@property
def flows(self):
"""Return the flow operation object that can manage flows."""
return self._flows
@classmethod
def from_config(
cls,
credential: TokenCredential,
*,
path: Optional[Union[os.PathLike, str]] = None,
file_name=None,
**kwargs,
) -> "PFClient":
"""Return a PFClient object connected to Azure Machine Learning workspace.
Reads workspace configuration from a file. Throws an exception if the config file can't be found.
The method provides a simple way to reuse the same workspace across multiple Python notebooks or projects.
Users can save the workspace Azure Resource Manager (ARM) properties using the
[workspace.write_config](https://aka.ms/ml-workspace-class) method,
and use this method to load the same workspace in different Python notebooks or projects without
retyping the workspace ARM properties.
:param credential: The credential object for the workspace.
:type credential: ~azure.core.credentials.TokenCredential
:param path: The path to the config file or starting directory to search.
The parameter defaults to starting the search in the current directory.
optional
:type path: typing.Union[os.PathLike, str]
:param file_name: Allows overriding the config file name to search for when path is a directory path.
(Default value = None)
:type file_name: str
"""
ml_client = MLClient.from_config(credential=credential, path=path, file_name=file_name, **kwargs)
return PFClient(
ml_client=ml_client,
**kwargs,
)
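    # Illustrative usage sketch (assumes a workspace config.json is available
    # locally; DefaultAzureCredential is one possible credential type):
    #   from azure.identity import DefaultAzureCredential
    #   pf = PFClient.from_config(credential=DefaultAzureCredential())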
def run(
self,
flow: Union[str, PathLike],
*,
data: Union[str, PathLike] = None,
run: Union[str, Run] = None,
column_mapping: dict = None,
variant: str = None,
connections: dict = None,
environment_variables: dict = None,
name: str = None,
display_name: str = None,
tags: Dict[str, str] = None,
**kwargs,
) -> Run:
"""Run flow against provided data or run.
.. note:: at least one of data or run must be provided.
.. admonition::
Data can be local file or remote path.
- Example:
- `data = "path/to/local/file"`
- `data = "azureml:data_name:data_version"`
- `data = "azureml://datastores/datastore_name/path/to/file"`
- `data = "https://example.com/data.jsonl"`
Column mapping is a mapping from flow input name to specified values.
If specified, the flow will be executed with provided value for specified inputs.
The value can be:
- from data:
- ``data.col1``
- from run:
                - ``run.inputs.col1``: if you need to reference the run's inputs
                - ``run.outputs.col1``: if you need to reference the run's outputs
- Example:
- ``{"ground_truth": "${data.answer}", "prediction": "${run.outputs.answer}"}``
:param flow: path to flow directory to run evaluation
:type flow: Union[str, PathLike]
:param data: pointer to test data (of variant bulk runs) for eval runs
:type data: Union[str, PathLike]
:param run: flow run id or flow run, keep lineage between current run and variant runs,
            batch outputs can be referenced as ${run.outputs.col_name} in column_mapping
:type run: Union[str, ~promptflow.entities.Run]
:param column_mapping: define a data flow logic to map input data.
:type column_mapping: dict
:param variant: Node & variant name in format of ${node_name.variant_name}, will use default variant
if not specified.
:type variant: str
:param connections: Overwrite node level connections with provided value.
Example: ``{"node1": {"connection": "new_connection", "deployment_name": "gpt-35-turbo"}}``
:type connections: dict
:param environment_variables: Environment variables to set by specifying a property path and value.
Example: ``{"key1": "${my_connection.api_key}", "key2"="value2"}``
The value reference to connection keys will be resolved to the actual value,
and all environment variables specified will be set into os.environ.
:type environment_variables: dict
:param name: Name of the run.
:type name: str
:param display_name: Display name of the run.
:type display_name: str
:param tags: Tags of the run.
:type tags: Dict[str, str]
:return: flow run info.
:rtype: ~promptflow.entities.Run
"""
# TODO(2887134): support cloud eager Run CRUD
run = Run(
name=name,
display_name=display_name,
tags=tags,
data=data,
column_mapping=column_mapping,
run=run,
variant=variant,
flow=flow,
connections=connections,
environment_variables=environment_variables,
)
return self.runs.create_or_update(run=run, **kwargs)
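    # Illustrative usage sketch (paths, column names and the credential type are
    # hypothetical; requires a reachable Azure ML workspace):
    #   from azure.identity import DefaultAzureCredential
    #   pf = PFClient(
    #       credential=DefaultAzureCredential(),
    #       subscription_id="<subscription-id>",
    #       resource_group_name="<resource-group>",
    #       workspace_name="<workspace>",
    #   )
    #   run = pf.run(
    #       flow="path/to/flow",
    #       data="path/to/data.jsonl",
    #       column_mapping={"question": "${data.question}"},
    #   )
    #   pf.stream(run)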
def stream(self, run: Union[str, Run], raise_on_error: bool = True) -> Run:
"""Stream run logs to the console.
:param run: Run object or name of the run.
:type run: Union[str, ~promptflow.sdk.entities.Run]
:param raise_on_error: Raises an exception if a run fails or canceled.
:type raise_on_error: bool
:return: flow run info.
"""
if isinstance(run, Run):
run = run.name
return self.runs.stream(run, raise_on_error)
def get_details(
self, run: Union[str, Run], max_results: int = MAX_SHOW_DETAILS_RESULTS, all_results: bool = False
) -> "DataFrame":
"""Get the details from the run including inputs and outputs.
.. note::
If `all_results` is set to True, `max_results` will be overwritten to sys.maxsize.
:param run: The run name or run object
:type run: Union[str, ~promptflow.sdk.entities.Run]
:param max_results: The max number of runs to return, defaults to 100
:type max_results: int
:param all_results: Whether to return all results, defaults to False
:type all_results: bool
:raises RunOperationParameterError: If `max_results` is not a positive integer.
:return: The details data frame.
:rtype: pandas.DataFrame
"""
return self.runs.get_details(run=run, max_results=max_results, all_results=all_results)
def get_metrics(self, run: Union[str, Run]) -> dict:
"""Print run metrics to the console.
:param run: Run object or name of the run.
:type run: Union[str, ~promptflow.sdk.entities.Run]
:return: The run's metrics
:rtype: dict
"""
if isinstance(run, Run):
run = run.name
return self.runs.get_metrics(run=run)
def visualize(self, runs: Union[List[str], List[Run]]) -> None:
"""Visualize run(s).
        :param runs: List of run objects, or names of the runs.
        :type runs: Union[List[str], List[~promptflow.sdk.entities.Run]]
"""
self.runs.visualize(runs)
| promptflow/src/promptflow/promptflow/azure/_pf_client.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_pf_client.py",
"repo_id": "promptflow",
"token_count": 5475
} | 39 |
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import datetime
import functools
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator_async import distributed_trace_async
from ... import models as _models
from ..._vendor import _convert_request
from ...operations._flow_runs_admin_operations import build_batch_update_service_logs_request, build_check_policy_validation_async_request, build_get_storage_info_request, build_log_flow_run_event_request, build_log_flow_run_event_v2_request, build_log_flow_run_terminated_event_request, build_log_result_for_bulk_run_request, build_send_policy_validation_async_request, build_submit_bulk_run_async_request, build_update_service_logs_request
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class FlowRunsAdminOperations:
"""FlowRunsAdminOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace_async
async def submit_bulk_run_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
error_handling_mode: Optional[Union[str, "_models.ErrorHandlingMode"]] = None,
**kwargs: Any
) -> "_models.SubmitBulkRunResponse":
"""submit_bulk_run_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param error_handling_mode:
:type error_handling_mode: str or ~flow.models.ErrorHandlingMode
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SubmitBulkRunResponse, or the result of cls(response)
:rtype: ~flow.models.SubmitBulkRunResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SubmitBulkRunResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_submit_bulk_run_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
error_handling_mode=error_handling_mode,
template_url=self.submit_bulk_run_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('SubmitBulkRunResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
submit_bulk_run_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/submit'} # type: ignore
@distributed_trace_async
async def send_policy_validation_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> "_models.PolicyValidationResponse":
"""send_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_send_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.send_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
send_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace_async
async def check_policy_validation_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> "_models.PolicyValidationResponse":
"""check_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_check_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.check_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
check_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace_async
async def log_result_for_bulk_run(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> List["_models.KeyValuePairStringObject"]:
"""log_result_for_bulk_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of KeyValuePairStringObject, or the result of cls(response)
:rtype: list[~flow.models.KeyValuePairStringObject]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.KeyValuePairStringObject"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_result_for_bulk_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.log_result_for_bulk_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[KeyValuePairStringObject]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_result_for_bulk_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/LogResult'} # type: ignore
@distributed_trace_async
async def get_storage_info(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> "_models.StorageInfo":
"""get_storage_info.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: StorageInfo, or the result of cls(response)
:rtype: ~flow.models.StorageInfo
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.StorageInfo"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_storage_info_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_storage_info.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('StorageInfo', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_storage_info.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/storageInfo'} # type: ignore
@distributed_trace_async
async def log_flow_run_event(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
runtime_version: str,
**kwargs: Any
) -> str:
"""log_flow_run_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/runtime/{runtimeVersion}/logEvent'} # type: ignore
@distributed_trace_async
async def log_flow_run_event_v2(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
runtime_version: Optional[str] = None,
**kwargs: Any
) -> str:
"""log_flow_run_event_v2.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_v2_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event_v2.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event_v2.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logEvent'} # type: ignore
@distributed_trace_async
async def log_flow_run_terminated_event(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
last_checked_time: Optional[datetime.datetime] = None,
**kwargs: Any
) -> "_models.LogRunTerminatedEventDto":
"""log_flow_run_terminated_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param last_checked_time:
:type last_checked_time: ~datetime.datetime
:keyword callable cls: A custom type or function that will be passed the direct response
:return: LogRunTerminatedEventDto, or the result of cls(response)
:rtype: ~flow.models.LogRunTerminatedEventDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.LogRunTerminatedEventDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_terminated_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
last_checked_time=last_checked_time,
template_url=self.log_flow_run_terminated_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('LogRunTerminatedEventDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_terminated_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logTerminatedEvent'} # type: ignore
@distributed_trace_async
async def update_service_logs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
body: Optional["_models.ServiceLogRequest"] = None,
**kwargs: Any
) -> "_models.Task":
"""update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: ~flow.models.ServiceLogRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'ServiceLogRequest')
else:
_json = None
request = build_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs'} # type: ignore
@distributed_trace_async
async def batch_update_service_logs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
body: Optional[List["_models.ServiceLogRequest"]] = None,
**kwargs: Any
) -> "_models.Task":
"""batch_update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: list[~flow.models.ServiceLogRequest]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, '[ServiceLogRequest]')
else:
_json = None
request = build_batch_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.batch_update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
batch_update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs/batch'} # type: ignore
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_flow_runs_admin_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_flow_runs_admin_operations.py",
"repo_id": "promptflow",
"token_count": 12446
} | 40 |