# Change log of default runtime image
In Azure Machine Learning prompt flow, the execution of flows is facilitated by runtimes. Within the Azure Machine Learning workspace, a runtime serves as a computing resource that enables customers to execute flows.
A runtime includes a pre-built Docker image (users can also provide their own custom image), which contains all necessary dependency packages.
This Docker image is continuously updated, and here we record the new features and fixed bugs of each image version. The image can be pulled by specifying a runtime version and executing the following command:
```
docker pull mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<runtime_version>
```
You can check the runtime image version from the flow execution log:
![img](../../media/cloud/runtime-change-log/runtime-version.png)
## 20240116.v1
### New features
NA
### Bugs fixed
- Added validation for incorrect connection types in the LLM tool.
## 20240111.v2
### New features
- Support error log scrubbing for heron jobs.
### Bugs fixed
- Fixed the compatibility issue between the runtime and promptflow package versions earlier than 1.3.0.
--- promptflow/docs/cloud/azureai/runtime-change-log.md ---
# Use streaming endpoints deployed from prompt flow
In prompt flow, you can [deploy flow as REST endpoint](./deploy-a-flow/index.md) for real-time inference.
When consuming the endpoint by sending a request, the default behavior is that the online endpoint will keep waiting until the whole response is ready, and then send it back to the client. This can cause a long delay for the client and a poor user experience.
To avoid this, you can use streaming when you consume the endpoints. Once streaming is enabled, you don't have to wait for the whole response to be ready. Instead, the server sends back the response in chunks as they are generated. The client can then display the response progressively, with less waiting time and more interactivity.
This article describes the scope of streaming, how streaming works, and how to consume streaming endpoints.
## Create a streaming enabled flow
If you want to use the streaming mode, you need to create a flow that has a node that produces a string generator as the flow’s output. A string generator is an object that can return one string at a time when requested. You can use the following types of nodes to create a string generator:
- LLM node: This node uses a large language model to generate natural language responses based on the input.
```jinja
{# Sample prompt template for LLM node #}
system:
You are a helpful assistant.
user:
{{question}}
```
- Python tools node: This node allows you to write custom Python code that can yield string outputs. You can use this node to call external APIs or libraries that support streaming. For example, you can use this code to echo the input word by word:
```python
from promptflow import tool
# Sample code echo input by yield in Python tool node
@tool
def my_python_tool(paragraph: str) -> str:
yield "Echo: "
for word in paragraph.split():
yield word + " "
```
In this guide, we will use the ["Chat with Wikipedia"](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-wikipedia) sample flow as an example. This flow processes the user’s question, searches Wikipedia for relevant articles, and answers the question with information from the articles. It uses streaming mode to show the progress of the answer generation.
![chat_wikipedia.png](../media/how-to-guides/how-to-enable-streaming-mode/chat_wikipedia_center.png)
## Deploy the flow as an online endpoint
To use the streaming mode, you need to deploy your flow as an online endpoint. This will allow you to send requests and receive responses from your flow in real time.
Follow [this guide](./deploy-a-flow/index.md) to deploy your flow as an online endpoint.
> [!NOTE]
>
> You can follow this document to deploy an [online endpoint](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference?view=azureml-api-2).
> Please deploy with a runtime environment version later than `20230816.v10`.
> You can check your runtime version and update the runtime on the runtime detail page.
## Understand the streaming process
When you have an online endpoint, the client and the server need to follow specific principles for [content negotiation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation) to utilize the streaming mode:
Content negotiation is like a conversation between the client and the server about the preferred format of the data they want to send and receive. It ensures effective communication and agreement on the format of the exchanged data.
To understand the streaming process, consider the following steps:
- First, the client constructs an HTTP request with the desired media type included in the `Accept` header. The media type tells the server what kind of data format the client expects. It's like the client saying, "Hey, I'm looking for a specific format for the data you'll send me. It could be JSON, text, or something else." For example, `application/json` indicates a preference for JSON data, `text/event-stream` indicates a desire for streaming data, and `*/*` means the client accepts any data format.
> [!NOTE]
>
> If a request lacks an `Accept` header or has an empty `Accept` header, it implies that the client will accept any media type in response. The server treats it as `*/*`.
- Next, the server responds based on the media type specified in the `Accept` header. It's important to note that the client may request multiple media types in the `Accept` header, and the server must consider its capabilities and format priorities to determine the appropriate response.
- First, the server checks if `text/event-stream` is explicitly specified in the `Accept` header:
- For a stream-enabled flow, the server returns a response with a `Content-Type` of `text/event-stream`, indicating that the data is being streamed.
- For a non-stream-enabled flow, the server proceeds to check for other media types specified in the header.
- If `text/event-stream` is not specified, the server then checks if `application/json` or `*/*` is specified in the `Accept` header:
- In such cases, the server returns a response with a `Content-Type` of `application/json`, providing the data in JSON format.
- If the `Accept` header specifies other media types, such as `text/html`:
- The server returns a `424` response with a PromptFlow runtime error code `UserError` and a runtime HTTP status `406`, indicating that the server cannot fulfill the request with the requested data format.
> Note: Please refer to [Handle errors](#handle-errors) for details.
- Finally, the client checks the `Content-Type` response header. If it is set to `text/event-stream`, it indicates that the data is being streamed.
Let’s take a closer look at how the streaming process works. The response data in streaming mode follows the format of [server-sent events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events).
The overall process works as follows:
### 0. The client sends a message to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "Hello",
"chat_history": []
}
```
> [!NOTE]
>
> The `Accept` header is set to `text/event-stream` to request a stream response.
### 1. The server sends back the response in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"answer": ""}
data: {"answer": "Hello"}
data: {"answer": "!"}
data: {"answer": " How"}
data: {"answer": " can"}
data: {"answer": " I"}
data: {"answer": " assist"}
data: {"answer": " you"}
data: {"answer": " today"}
data: {"answer": " ?"}
data: {"answer": ""}
```
Note that the `Content-Type` is set to `text/event-stream; charset=utf-8`, indicating the response is an event stream.
The client should decode the response data as server-sent events and display them incrementally. The server will close the HTTP connection after all the data is sent.
Each response event is the delta to the previous event. It is recommended that the client keep track of the merged data in memory and send it back to the server as chat history in the next request.
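For illustration, here is a minimal sketch of how a client can merge the streamed deltas into a full answer and carry it forward as a chat history entry. The field names follow the examples above; this is not part of the official client.
```python
import json

def merge_deltas(data_payloads, inputs):
    """Merge SSE data payloads into one answer and build a chat_history entry."""
    answer = ""
    for payload in data_payloads:  # each payload is a JSON string like '{"answer": " How"}'
        answer += json.loads(payload).get("answer", "")
    return {"inputs": inputs, "outputs": {"answer": answer}}

# Example with deltas similar to those shown above
chunks = ['{"answer": "Hello"}', '{"answer": "!"}', '{"answer": " How can I assist you today?"}']
entry = merge_deltas(chunks, {"question": "Hello"})
print(entry["outputs"]["answer"])  # Hello! How can I assist you today?
```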
### 2. The client sends another chat message, along with the full chat history, to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "Glad to know you!",
"chat_history": [
{
"inputs": {
"question": "Hello"
},
"outputs": {
"answer": "Hello! How can I assist you today?"
}
}
]
}
```
### 3. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"answer": ""}
data: {"answer": "Nice"}
data: {"answer": " to"}
data: {"answer": " know"}
data: {"answer": " you"}
data: {"answer": " too"}
data: {"answer": "!"}
data: {"answer": " Is"}
data: {"answer": " there"}
data: {"answer": " anything"}
data: {"answer": " I"}
data: {"answer": " can"}
data: {"answer": " help"}
data: {"answer": " you"}
data: {"answer": " with"}
data: {"answer": "?"}
data: {"answer": ""}
```
### 4. The chat continues in a similar way.
## Handle errors
The client should check the HTTP response code first. See [this table](https://learn.microsoft.com/azure/machine-learning/how-to-troubleshoot-online-endpoints?view=azureml-api-2&tabs=cli#http-status-codes) for common error codes returned by online endpoints.
If the response code is "424 Model Error", it means that the error is caused by the model’s code. The error response from a PromptFlow model always follows this format:
```json
{
"error": {
"code": "UserError",
"message": "Media type text/event-stream in Accept header is not acceptable. Supported media type(s) - application/json",
}
}
```
* It is always a JSON dictionary with only one key "error" defined.
* The value for "error" is a dictionary containing "code" and "message".
* "code" defines the error category. Currently, it may be "UserError" for bad user inputs and "SystemError" for errors inside the service.
* "message" is a description of the error. It can be displayed to the end user.
## How to consume the server-sent events
### Consume using Python
In this sample usage, we are using the `SSEClient` class. This class is not a built-in Python class and needs to be installed separately. You can install it via pip:
```bash
pip install sseclient-py
```
A sample usage looks like this (the endpoint URL, key, and request body below are placeholders):
```python
import requests
from sseclient import SSEClient
from requests.exceptions import HTTPError

# Placeholders: substitute your endpoint URL, key, and flow inputs
url = "https://<your-endpoint>.inference.ml.azure.com/score"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <key or token of your endpoint>",
    "Accept": "text/event-stream",
}
body = {"question": "Hello", "chat_history": []}

try:
    response = requests.post(url, json=body, headers=headers, stream=True)
    response.raise_for_status()
    content_type = response.headers.get("Content-Type", "")
    if "text/event-stream" in content_type:
        client = SSEClient(response)
        for event in client.events():
            print(event.data)  # handle each event, e.g. print it to stdout
    else:
        print(response.json())  # handle the plain JSON response
except HTTPError as e:
    print(f"Request failed: {e}")  # handle exceptions
```
### Consume using JavaScript
There are several libraries to consume server-sent events in JavaScript. Here is [one of them as an example](https://www.npmjs.com/package/sse.js?activeTab=code).
## A sample chat app using Python
Here is a sample chat app written in Python.
(Click [here](../media/how-to-guides/how-to-enable-streaming-mode/scripts/chat_app.py) to view the source code.)
![chat_app](../media/how-to-guides/how-to-enable-streaming-mode/chat_app.gif)
## Advanced usage - hybrid stream and non-stream flow output
Sometimes, you may want to get both stream and non-stream results from a flow output. For example, in the “Chat with Wikipedia” flow, you may want to get not only the LLM’s answer, but also the list of URLs that the flow searched. To do this, you need to modify the flow to output a combination of the streamed LLM answer and the non-streamed URL list.
In the sample "Chat With Wikipedia" flow, the output is connected to the LLM node `augmented_chat`. To add the URL list to the output, you need to add an output field with the name `url` and the value `${get_wiki_url.output}`.
![chat_wikipedia_dual_output_center.png](../media/how-to-guides/how-to-enable-streaming-mode/chat_wikipedia_dual_output_center.png)
The output of the flow will contain a non-stream field as the base and a stream field as the delta. Here is an example of a request and response.
### 0. The client sends a message to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "When was ChatGPT launched?",
"chat_history": []
}
```
### 1. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"url": ["https://en.wikipedia.org/w/index.php?search=ChatGPT", "https://en.wikipedia.org/w/index.php?search=GPT-4"]}
data: {"answer": ""}
data: {"answer": "Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": " was"}
data: {"answer": " launched"}
data: {"answer": " on"}
data: {"answer": " November"}
data: {"answer": " "}
data: {"answer": "30"}
data: {"answer": ","}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "2"}
data: {"answer": "."}
data: {"answer": " \n\n"}
...
data: {"answer": "PT"}
data: {"answer": ""}
```
### 2. The client sends another chat message, along with the full chat history, to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "When did OpenAI announce GPT-4? How long is it between these two milestones?",
"chat_history": [
{
"inputs": {
"question": "When was ChatGPT launched?"
},
"outputs": {
"url": [
"https://en.wikipedia.org/w/index.php?search=ChatGPT",
"https://en.wikipedia.org/w/index.php?search=GPT-4"
],
"answer": "ChatGPT was launched on November 30, 2022. \n\nSOURCES: https://en.wikipedia.org/w/index.php?search=ChatGPT"
}
}
]
}
```
### 3. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"url": ["https://en.wikipedia.org/w/index.php?search=Generative pre-trained transformer ", "https://en.wikipedia.org/w/index.php?search=Microsoft "]}
data: {"answer": ""}
data: {"answer": "Open"}
data: {"answer": "AI"}
data: {"answer": " released"}
data: {"answer": " G"}
data: {"answer": "PT"}
data: {"answer": "-"}
data: {"answer": "4"}
data: {"answer": " in"}
data: {"answer": " March"}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "3"}
data: {"answer": "."}
data: {"answer": " Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": " was"}
data: {"answer": " launched"}
data: {"answer": " on"}
data: {"answer": " November"}
data: {"answer": " "}
data: {"answer": "30"}
data: {"answer": ","}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "2"}
data: {"answer": "."}
data: {"answer": " The"}
data: {"answer": " time"}
data: {"answer": " between"}
data: {"answer": " these"}
data: {"answer": " two"}
data: {"answer": " milestones"}
data: {"answer": " is"}
data: {"answer": " approximately"}
data: {"answer": " "}
data: {"answer": "3"}
data: {"answer": " months"}
data: {"answer": ".\n\n"}
...
data: {"answer": "Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": ""}
```
--- promptflow/docs/how-to-guides/enable-streaming-mode.md ---
# Azure AI Language
Azure AI Language provides users with task-oriented, optimized, pre-trained language models to effectively understand documents and conversations. This Prompt flow tool is a wrapper for various Azure AI Language APIs. The current list of supported capabilities is as follows:
| Name | Description |
|-------------------------------------------|-------------------------------------------------------|
| Abstractive Summarization | Generate abstractive summaries from documents. |
| Extractive Summarization | Extract summaries from documents. |
| Conversation Summarization | Summarize conversations. |
| Entity Recognition | Recognize and categorize entities in documents. |
| Key Phrase Extraction | Extract key phrases from documents. |
| Language Detection | Detect the language of documents. |
| PII Entity Recognition | Recognize and redact PII entities in documents. |
| Sentiment Analysis | Analyze the sentiment of documents. |
| Conversational Language Understanding | Predict intents and entities from user's utterances. |
| Translator | Translate documents. |
## Requirements
- For AzureML users:
follow this [wiki](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-custom-tool-package-creation-and-usage?view=azureml-api-2#prepare-runtime), starting from `Prepare runtime`. Note that the PyPI package name is `promptflow-azure-ai-language`.
- For local users:
```
pip install promptflow-azure-ai-language
```
## Prerequisites
The tool calls APIs from Azure AI Language. To use it, you must create a connection to an [Azure AI Language resource](https://learn.microsoft.com/en-us/azure/ai-services/language-service/). Create a Language resource first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Language Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Language Resource endpoint>`
To use the `Translator` tool, you must set up an additional connection to an [Azure AI Translator resource](https://azure.microsoft.com/en-us/products/ai-services/ai-translator). [Create a Translator resource](https://learn.microsoft.com/en-us/azure/ai-services/translator/create-translator-resource) first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Translator Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Translator Resource endpoint>`
- If your Translator Resource is regional and non-global, specify its region under `configs` as well: `region: <Azure AI Translator Resource region>`
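For illustration only, such a `CustomConnection` could be defined in a local YAML file and registered with the `pf connection create --file <file>` command; the file name and placeholder values below are assumptions, not prescribed by the tool:
```yaml
# custom_language_connection.yml (hypothetical file name)
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: azure_ai_language_connection
type: custom
configs:
  endpoint: <Azure AI Language Resource endpoint>
secrets:
  api_key: <Azure AI Language Resource api key>
```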
## Inputs
The tool accepts the following inputs:
- **Abstractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| summary_length | string (enum) | The desired summary length. Enum values are `short`, `medium`, and `long`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Extractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| sentence_count | int | The desired number of output summary sentences. Default value is `3`. | No |
| sort_by | string (enum) | The sorting criteria for extractive summarization results. Enum values are `Offset` to sort results in order of appearance in the text and `Rank` to sort results in order of importance (i.e. rank score) according to model. Default value is `Offset`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Conversation Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. Text should be of the following form: `<speaker id>: <speaker text> \n <speaker id>: <speaker text> \n ...` | Yes |
| modality | string (enum) | The modality of the input text. Enum values are `text` for input from a text source, and `transcript` for input from a transcript source. | Yes |
| summary_aspect | string (enum) | The desired summary "aspect" to obtain. Enum values are `chapterTitle` to obtain the chapter title of any conversation, `issue` to obtain the summary of issues in transcripts of web chats and service calls between customer-service agents and customers, `narrative` to obtain the generic summary of any conversation, `resolution` to obtain the summary of resolutions in transcripts of web chats and service calls between customer-service agents and customers, `recap` to obtain a general summary, and `follow-up tasks` to obtain a summary of follow-up or action items. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Key Phrase Extraction**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Language Detection**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **PII Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| domain | string (enum) | The PII domain used for PII Entity Recognition. Enum values are `none` for no domain, or `phi` to indicate that entities in the Personal Health domain should be redacted. Default value is `none`. | No |
| categories | list[string] | Describes the PII categories to return. Default value is `[]`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Sentiment Analysis**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| opinion_mining | bool | Should opinion mining be enabled. Default value is `False`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Conversational Language Understanding**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| utterances | string | A single user utterance or a json array of user utterances. | Yes |
| project_name | string | The Conversational Language Understanding project to be called. | Yes |
| deployment_name | string | The Conversational Language Understanding project deployment to be called. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Translator**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Translator resource. | Yes |
| text | string | The input text. | Yes |
| to | list[string] | The languages to translate the input text to. | Yes |
| source_language | string | The language of the input text. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
## Outputs
If the input parameter `parse_response` is set to `False` (default value), the raw API json output will be returned as a string. Refer to the [REST API reference](https://learn.microsoft.com/en-us/rest/api/language/) for details on API output. For Conversational Language Understanding, the output will be a list of raw API json responses, one response for each user utterance in the input.
When `parse_response` is set to `True`, the tool will parse API output as follows:
| Name | Type | Description |
|-------------------------------------------------------------|--------|---------------------|
| Abstractive Summarization | string | Abstractive summary. |
| Extractive Summarization | list[string] | Extracted summary sentence strings. |
| Conversation Summarization | string | Conversation summary based on `summary_aspect`. |
| Entity Recognition | dict[string, string] | Recognized entities, where keys are entity names and values are entity categories. |
| Key Phrase Extraction | list[string] | Extracted key phrases as strings. |
| Language Detection | string | Detected language's ISO 639-1 code. |
| PII Entity Recognition | string | Input `text` with PII entities redacted. |
| Sentiment Analysis | string | Analyzed sentiment: `positive`, `neutral`, or `negative`. |
| Conversational Language Understanding | list[dict[string, string]] | List of user utterances and associated intents. |
| Translator | dict[string, string] | Translated text, where keys are the translated languages and values are the translated texts. |
--- promptflow/docs/integrations/tools/azure-ai-language-tool.md ---
# SerpAPI
## Introduction
The SerpAPI tool is a Python tool that provides a wrapper around the [SerpAPI Google Search Engine Results API](https://serpapi.com/search-api) and the [SerpAPI Bing Search Engine Results API](https://serpapi.com/bing-search-api).
You can use the tool to retrieve search results from a number of different search engines, including Google and Bing, and you can specify a range of search parameters, such as the search query, location, device type, and more.
## Prerequisite
Sign up at [SERP API homepage](https://serpapi.com/)
## Connection
Connection is the model used to establish a connection with the SerpAPI service.
| Type | Name | API KEY |
|-------------|----------|----------|
| Serp | Required | Required |
_The **API Key** can be found on the SerpAPI account dashboard._
## Inputs
The **serp api** tool supports the following parameters:
| Name | Type | Description | Required |
|----------|---------|---------------------------------------------------------------|----------|
| query | string | The search query to be executed. | Yes |
| engine | string | The search engine to use for the search. Default is 'google'. | Yes |
| num | integer | The number of search results to return. Default is 10. | No |
| location | string | The geographic location to execute the search from. | No |
| safe | string | The safe search mode to use for the search. Default is 'off'. | No |
## Outputs
The JSON representation of the SerpAPI query results.
| Engine | Return Type | Output |
|----------|-------------|-------------------------------------------------------|
| google | json | [Sample](https://serpapi.com/search-api#api-examples) |
| bing | json | [Sample](https://serpapi.com/bing-search-api) |
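To illustrate what the wrapped API returns, here is a minimal sketch that calls the underlying SerpApi Python client (`google-search-results` package) directly; the query and API key are placeholders:
```python
from serpapi import GoogleSearch

params = {
    "engine": "google",            # or "bing"
    "q": "prompt flow",            # the search query
    "num": 10,                     # number of results to return
    "safe": "off",
    "api_key": "<your SerpAPI key>",
}

# Returns the raw JSON response as a Python dict, similar to the tool's output
results = GoogleSearch(params).get_dict()
print(results.get("organic_results", [])[:1])
```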
--- promptflow/docs/reference/tools-reference/serp-api-tool.md ---
# Chat With Image
This flow demonstrates how to create a chatbot that can take image and text as input.
Tools used in this flow:
- `OpenAI GPT-4V` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a chat flow with image and text as input. The chat input should be a list of text and/or images.
## Getting started
### 1 Create connection for OpenAI GPT-4V tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, and create an "OpenAI" connection. If you do not have an OpenAI account, please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> name=aoai_gpt4v_connection api_version=2023-07-01-preview
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `aoai_gpt4v_connection`.
```bash
# show registered connection
pf connection show --name aoai_gpt4v_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question='["How many colors can you see?", {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}]'
```
```sh
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
--- promptflow/examples/flows/chat/chat-with-image/README.md ---
import requests
import os
import re
from utils.lock import acquire_lock
from utils.logging import log
from constants import PDF_DIR
# Download a pdf file from a url and return the path to the file
def download(url: str) -> str:
path = os.path.join(PDF_DIR, normalize_filename(url) + ".pdf")
lock_path = path + ".lock"
with acquire_lock(lock_path):
if os.path.exists(path):
log("Pdf already exists in " + os.path.abspath(path))
return path
log("Downloading pdf from " + url)
response = requests.get(url)
with open(path, "wb") as f:
f.write(response.content)
return path
def normalize_filename(filename):
# Replace any invalid characters with an underscore
return re.sub(r"[^\w\-_. ]", "_", filename)
--- promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/download.py ---
import os
import unittest
import promptflow
from base_test import BaseTest
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDF(BaseTest):
def setUp(self):
super().setUp()
self.pf = promptflow.PFClient()
def tearDown(self) -> None:
return super().tearDown()
def test_run_chat_with_pdf(self):
result = self.pf.test(
flow=self.flow_path,
inputs={
"chat_history": [],
"pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
"question": "BERT stands for?",
"config": self.config_2k_context,
},
)
print(result)
self.assertTrue(
result["answer"].find(
"Bidirectional Encoder Representations from Transformers"
)
!= -1
)
def test_bulk_run_chat_with_pdf(self):
run = self.create_chat_run()
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_eval(self):
run_2k, eval_groundedness_2k, eval_pi_2k = self.run_eval_with_config(
self.config_2k_context,
display_name="chat_with_pdf_2k_context",
)
run_3k, eval_groundedness_3k, eval_pi_3k = self.run_eval_with_config(
self.config_3k_context,
display_name="chat_with_pdf_3k_context",
)
self.check_run_basics(run_2k)
self.check_run_basics(run_3k)
self.check_run_basics(eval_groundedness_2k)
self.check_run_basics(eval_pi_2k)
self.check_run_basics(eval_groundedness_3k)
self.check_run_basics(eval_pi_3k)
def test_bulk_run_valid_mapping(self):
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": self.config_2k_context,
}
)
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_bulk_run_mapping_missing_one_column(self):
data_path = os.path.join(
self.flow_path, "data/invalid-data-missing-column.jsonl"
)
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question}",
},
data=data_path
)
def test_bulk_run_invalid_mapping(self):
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question_not_exist}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
}
)
if __name__ == "__main__":
unittest.main()
--- promptflow/examples/flows/chat/chat-with-pdf/tests/chat_with_pdf_test.py ---
# Classification Accuracy Evaluation
This flow illustrates how to evaluate the performance of a classification system. It compares each prediction to the ground truth, assigns a "Correct" or "Incorrect" grade, and aggregates the results to produce metrics such as accuracy, which reflect how good the system is at classifying the data.
Tools used in this flow:
- `python` tool
## What you will learn
In this flow, you will learn
- how to compose a point-based evaluation flow, where you can calculate point-wise metrics.
- the way to log metrics: use `from promptflow import log_metric`; see file [calculate_accuracy.py](calculate_accuracy.py). A simplified sketch follows this list.
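For reference, a minimal aggregation node of this kind can look like the sketch below. It is illustrative and simplified rather than the exact contents of `calculate_accuracy.py`:
```python
from typing import List

from promptflow import log_metric, tool


@tool
def calculate_accuracy(grades: List[str]):
    # 'grades' aggregates the per-line "Correct"/"Incorrect" outputs of the grade node
    accuracy = round(grades.count("Correct") / len(grades), 2)
    # Log accuracy as a run-level metric so it shows up in the run's metrics
    log_metric("accuracy", accuracy)
    return accuracy
```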
### 0. Setup connection
Prepare your Azure OpenAI resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
### 1. Test flow/node
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs groundtruth=APP prediction=APP
# test node with inputs
pf flow test --flow . --node grade --inputs groundtruth=groundtruth prediction=prediction
```
### 2. create flow run with multi line data
There are two ways to evaluate a classification flow.
```bash
pf run create --flow . --data ./data.jsonl --column-mapping groundtruth='${data.groundtruth}' prediction='${data.prediction}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow.
Reference [here](https://aka.ms/pf/column-mapping) for default behavior when `column-mapping` not provided in CLI.
### 3. create run against other flow run
Learn more in [web-classification](../../standard/web-classification/README.md)
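As a hedged illustration (the referenced run name and output column below are placeholders), evaluating the outputs of an existing flow run typically looks like:
```bash
# Map the prediction to an output column of a previous classification run
pf run create --flow . --data ./data.jsonl --run <your_classification_run_name> \
  --column-mapping groundtruth='${data.groundtruth}' prediction='${run.outputs.category}' --stream
```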
--- promptflow/examples/flows/evaluation/eval-classification-accuracy/README.md ---
from promptflow import tool
import re
@tool
def parse_generation_output(rag_generation_score: str) -> dict:
quality_score = float('nan')
quality_reasoning = ''
for sent in rag_generation_score.split('\n'):
sent = sent.strip()
if re.match(r"\s*(<)?Quality score:", sent):
numbers_found = re.findall(r"(\d+\.*\d*)\/", sent)
if len(numbers_found) == 0:
continue
quality_score = int(
float(numbers_found[0].replace("'", "")))
for sent in rag_generation_score.split('\n'):
sent = sent.strip()
if re.match(r"\s*(<)?Quality score reasoning:", sent):
quality_reasoning += sent.strip()
break
return {"quality_score": quality_score, "quality_reasoning": quality_reasoning}
--- promptflow/examples/flows/evaluation/eval-qna-rag-metrics/parse_generation_score.py ---
from promptflow import tool
@tool
def read_file(file_path: str) -> str:
"""
This tool opens a file and reads its contents into a string.
:param file_path: the file path of the file to be read.
"""
with open(file_path, 'r', encoding="utf8") as f:
file = f.read()
return file
--- promptflow/examples/flows/integrations/azure-ai-language/analyze_documents/read_file.py ---
import sys
from io import StringIO
import functools
import logging
import ast
from typing import Dict, Optional
logger = logging.getLogger(__name__)
@functools.lru_cache(maxsize=None)
def warn_once() -> None:
# Warn once that the PythonREPL can execute arbitrary code
logger.warning("Python REPL can execute arbitrary code. Use with caution.")
COMMAND_EXECUTION_FUNCTIONS = ["system", "exec", "execfile", "eval"]
class PythonValidation:
def __init__(
self,
allow_imports: bool = False,
allow_command_exec: bool = False,
):
"""Initialize a PALValidation instance.
Args:
allow_imports (bool): Allow import statements.
allow_command_exec (bool): Allow using known command execution functions.
"""
self.allow_imports = allow_imports
self.allow_command_exec = allow_command_exec
def validate_code(self, code: str) -> None:
try:
code_tree = ast.parse(code)
except (SyntaxError, UnicodeDecodeError):
raise ValueError(f"Generated code is not valid python code: {code}")
except TypeError:
raise ValueError(
f"Generated code is expected to be a string, "
f"instead found {type(code)}"
)
except OverflowError:
raise ValueError(
f"Generated code too long / complex to be parsed by ast: {code}"
)
has_imports = False
top_level_nodes = list(ast.iter_child_nodes(code_tree))
for node in top_level_nodes:
if isinstance(node, ast.Import) or isinstance(node, ast.ImportFrom):
has_imports = True
if not self.allow_imports and has_imports:
raise ValueError(f"Generated code has disallowed imports: {code}")
if (
not self.allow_command_exec
or not self.allow_imports
):
for node in ast.walk(code_tree):
if (
(not self.allow_command_exec)
and isinstance(node, ast.Call)
and (
(
hasattr(node.func, "id")
and node.func.id in COMMAND_EXECUTION_FUNCTIONS
)
or (
isinstance(node.func, ast.Attribute)
and node.func.attr in COMMAND_EXECUTION_FUNCTIONS
)
)
):
raise ValueError(
f"Found illegal command execution function "
f"{node.func.id} in code {code}"
)
if (not self.allow_imports) and (
isinstance(node, ast.Import) or isinstance(node, ast.ImportFrom)
):
raise ValueError(f"Generated code has disallowed imports: {code}")
class PythonREPL:
"""Simulates a standalone Python REPL."""
def __init__(self) -> None:
self.globals: Optional[Dict] = globals()
self.locals: Optional[Dict] = None
self.code_validations = PythonValidation(allow_imports=True)
def run(self, command: str) -> str:
"""Run command with own globals/locals and returns anything printed."""
# Warn against dangers of PythonREPL
warn_once()
self.code_validations.validate_code(command)
old_stdout = sys.stdout
sys.stdout = my_stdout = StringIO()
try:
exec(command, self.globals, self.locals)
sys.stdout = old_stdout
output = my_stdout.getvalue()
except Exception as e:
sys.stdout = old_stdout
output = repr(e)
print(output)
return output
python_repl = PythonREPL()
def python(command: str):
"""
A Python shell. Use this to execute python commands. Input should be a valid python command.
If you want to see the output of a value, you should print it out with `print(...)`.
"""
command = command.strip().strip("```")
return python_repl.run(command)
--- promptflow/examples/flows/standard/autonomous-agent/python_repl.py ---
from promptflow import tool
@tool
def default_result(question: str) -> str:
return f"I'm not familiar with your query: {question}."
--- promptflow/examples/flows/standard/conditional-flow-for-if-else/default_result.py ---
*.ipynb
.venv/
.data/
.env
.vscode/
outputs/
connection.json
--- promptflow/examples/flows/standard/customer-intent-extraction/.amlignore ---
# system:
As an AI assistant, your task involves interpreting images and responding to questions about the image.
Remember to provide accurate answers based on the information present in the image.
# user:
{{question}}
![image]({{test_image}})
--- promptflow/examples/flows/standard/describe-image/question_on_image.jinja2 ---
{{divided|join('')}}
--- promptflow/examples/flows/standard/gen-docstring/combine_code.jinja2 ---
system:
I want you to act as a Math expert specializing in Algebra, Geometry, and Calculus. Given the question, develop python code to model the user's question.
The python code will print the result at the end.
Please generate executable python code, your reply will be in JSON format, something like:
{
"code": "print(1+1)"
}
user:
This is a set of examples including the question and the final answer:
{% for ex in examples %}
QUESTION: {{ ex.question }}
CODE:
{{ ex.code }}
{% endfor %}
Now come to the real task, make sure return a valid json. The json should contain a key named "code" and the value is the python code. For example:
{
"code": "print(1+1)"
}
QUESTION: {{ question }}
CODE:
--- promptflow/examples/flows/standard/maths-to-code/ask_llm.jinja2 ---
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
entity_type:
type: string
default: job title
text:
type: string
default: Maxime is a data scientist at Auto Dataset, and his wife is a finance
manager in the same company.
outputs:
entities:
type: string
reference: ${cleansing.output}
nodes:
- name: NER_LLM
type: llm
source:
type: code
path: NER_LLM.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 64
text: ${inputs.text}
entity_type: ${inputs.entity_type}
connection: open_ai_connection
api: chat
- name: cleansing
type: python
source:
type: code
path: cleansing.py
inputs:
entities_str: ${NER_LLM.output}
environment:
python_requirements_txt: requirements.txt | promptflow/examples/flows/standard/named-entity-recognition/flow.dag.yaml/0 | {
"file_path": "promptflow/examples/flows/standard/named-entity-recognition/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 370
} | 16 |
#!/bin/bash
# <promptflow_install>
pip install -r requirements.txt
# </promptflow_install>
pip list
--- promptflow/examples/setup.sh ---
my_tool_package.tools.my_tool_1.my_tool:
function: my_tool
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.my_tool_1
name: My First Tool
description: This is my first tool
type: python
--- promptflow/examples/tools/tool-package-quickstart/my_tool_package/yamls/my_tool_1.yaml ---
import pytest
import unittest
from promptflow.contracts.types import FilePath
from my_tool_package.tools.tool_with_file_path_input import my_tool
@pytest.fixture
def my_file_path_input() -> FilePath:
my_file_path_input = FilePath("tests.test_utils.hello_method.py")
return my_file_path_input
class TestToolWithFilePathInput:
def test_tool_with_file_path_input(self, my_file_path_input):
result = my_tool(my_file_path_input, input_text="Microsoft")
assert result == "Hello Microsoft"
# Run the unit tests
if __name__ == "__main__":
unittest.main()
--- promptflow/examples/tools/tool-package-quickstart/tests/test_tool_with_file_path_input.py ---
import logging
import os
import subprocess
import sys
import time
import traceback
module_logger = logging.getLogger(__name__)
class Color:
PURPLE = "\033[95m"
CYAN = "\033[96m"
DARKCYAN = "\033[36m"
BLUE = "\033[94m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
RED = "\033[91m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
END = "\033[0m"
def print_red(message):
print(Color.RED + message + Color.END)
def print_blue(message):
print(Color.BLUE + message + Color.END)
def get_test_files(testpath):
if os.path.isfile(testpath):
return [testpath]
else:
res = []
for root, dirs, files in os.walk(testpath):
module_logger.debug("Searching %s for files ending in 'tests.py'", root)
res.extend([os.path.join(root, file) for file in files if file.endswith("tests.py")])
return res
def retry(fn, num_attempts=3):
if num_attempts <= 0:
raise Exception("Illegal num_attempts: {}".format(num_attempts))
count = 0
for _ in range(0, num_attempts):
try:
return fn()
except Exception:
count += 1
print("Execution failed on attempt {} out of {}".format(count, num_attempts))
print("Exception trace:")
traceback.print_exc()
if count == num_attempts:
print("Execution failed after {} attempts".format(count))
raise
def _run_command(
commands,
cwd=None,
stderr=subprocess.STDOUT,
shell=False,
env=None,
stream_stdout=True,
throw_on_retcode=True,
logger=None,
):
if logger is None:
logger = module_logger
if cwd is None:
cwd = os.getcwd()
t0 = time.perf_counter()
try:
logger.debug("[RunCommand]Executing {0} in {1}".format(commands, cwd))
out = ""
p = subprocess.Popen(commands, stdout=subprocess.PIPE, stderr=stderr, cwd=cwd, shell=shell, env=env)
for line in p.stdout:
line = line.decode("utf-8").rstrip()
if line and line.strip():
logger.debug(line)
if stream_stdout:
sys.stdout.write(line)
sys.stdout.write("\n")
out += line
out += "\n"
p.communicate()
retcode = p.poll()
if throw_on_retcode:
if retcode:
raise subprocess.CalledProcessError(retcode, p.args, output=out, stderr=p.stderr)
return retcode, out
finally:
t1 = time.perf_counter()
logger.debug("[RunCommand] Execution took {0}s for {1} in {2}".format(t1 - t0, commands, cwd))
def run_command(
commands, cwd=None, stderr=subprocess.STDOUT, shell=False, stream_stdout=True, throw_on_retcode=True, logger=None
):
return _run_command(
commands,
cwd=cwd,
stderr=stderr,
shell=shell,
stream_stdout=stream_stdout,
throw_on_retcode=throw_on_retcode,
logger=logger,
)
--- promptflow/scripts/building/utils.py ---
#!/usr/bin/env bash
#---------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
#---------------------------------------------------------------------------------------------
#
# Bash script to install the prompt flow
#
INSTALL_SCRIPT_URL="https://promptflowartifact.blob.core.windows.net/linux-install-scripts/install.py"
_TTY=/dev/tty
install_script=$(mktemp -t promptflow_install_tmp_XXXXXX) || exit
echo "Downloading prompt flow install script from $INSTALL_SCRIPT_URL to $install_script."
curl -# $INSTALL_SCRIPT_URL > $install_script || exit
python_cmd=python3
if ! command -v python3 >/dev/null 2>&1
then
echo "ERROR: python3 not found."
echo "If python3 is available on the system, add it to PATH."
exit 1
fi
chmod 775 $install_script
echo "Running install script."
$python_cmd $install_script < $_TTY
--- promptflow/scripts/installer/curl_install_pypi/install ---
- name: {{ step_name }}
working-directory: {{ working_dir }}
run: |
AOAI_API_KEY=${{ '{{' }} secrets.AOAI_API_KEY_TEST }}
AOAI_API_ENDPOINT=${{ '{{' }} secrets.AOAI_API_ENDPOINT_TEST }}
AOAI_API_ENDPOINT=$(echo ${AOAI_API_ENDPOINT//\//\\/})
if [[ -e .env.example ]]; then
echo "env replacement"
sed -i -e "s/<your_AOAI_key>/$AOAI_API_KEY/g" -e "s/<your_AOAI_endpoint>/$AOAI_API_ENDPOINT/g" .env.example
mv .env.example .env
fi
--- promptflow/scripts/readme/ghactions_driver/workflow_steps/step_create_env.yml.jinja2 ---
# Generate Readme file for the examples folder
import json
from pathlib import Path
import workflow_generator
import readme_generator
from jinja2 import Environment, FileSystemLoader
from ghactions_driver.readme_step import ReadmeStepsManage
from operator import itemgetter
import argparse
import sys
import os
import re
BRANCH = "main"
def get_notebook_readme_description(notebook) -> str:
"""
Read each ipynb's description from its .metadata.description field
"""
try:
# read in notebook
with open(notebook, "r", encoding="utf-8") as f:
data = json.load(f)
return data["metadata"]["description"]
except Exception:
print(f"{notebook} metadata description not set")
return ""
def get_readme_description_first_sentence(readme) -> str:
"""
Get the first sentence of the first paragraph of each readme
"""
try:
with open(readme, "r", encoding="utf-8") as f:
# read first line
line = f.readline()
sentence = ""
while True:
line = f.readline()
if line.startswith("#"):
line = ""
# skip metadata section
if line.startswith("---") or line.startswith("resources"):
line = ""
if line.strip() == "" and sentence != "":
break
elif "." in line:
sentence += " " + line.split(".")[0].strip()
break
else:
if sentence == "":
sentence += line.strip()
elif line.strip() != "":
sentence += " " + line.strip()
return sentence
except Exception:
print(f"Error during reading {readme}")
return ""
def write_readme(workflow_telemetries, readme_telemetries):
global BRANCH
ReadmeStepsManage.git_base_dir()
readme_file = Path(ReadmeStepsManage.git_base_dir()) / "examples/README.md"
quickstarts = {
"readmes": [],
"notebooks": [],
}
tutorials = {
"readmes": [],
"notebooks": [],
}
flows = {
"readmes": [],
"notebooks": [],
}
evaluations = {
"readmes": [],
"notebooks": [],
}
chats = {
"readmes": [],
"notebooks": [],
}
toolusecases = {
"readmes": [],
"notebooks": [],
}
connections = {
"readmes": [],
"notebooks": [],
}
for workflow_telemetry in workflow_telemetries:
notebook_name = f"{workflow_telemetry.name}.ipynb"
gh_working_dir = workflow_telemetry.gh_working_dir
pipeline_name = workflow_telemetry.workflow_name
yaml_name = f"{pipeline_name}.yml"
# For workflows, open ipynb as raw json and
# setup description at .metadata.description
description = get_notebook_readme_description(workflow_telemetry.notebook)
notebook_path = gh_working_dir.replace("examples/", "") + f"/{notebook_name}"
if gh_working_dir.startswith("examples/flows/standard"):
flows["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/connections"):
connections["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/flows/evaluation"):
evaluations["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/tutorials"):
if "quickstart" in notebook_name:
quickstarts["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
tutorials["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/flows/chat"):
chats["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif gh_working_dir.startswith("examples/tools/use-cases"):
toolusecases["notebooks"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
print(f"Unknown workflow type: {gh_working_dir}")
# Adjust tutorial names:
for readme_telemetry in readme_telemetries:
if readme_telemetry.readme_name.endswith("README.md"):
notebook_name = readme_telemetry.readme_folder.split("/")[-1]
else:
notebook_name = readme_telemetry.readme_name.split("/")[-1].replace(
".md", ""
)
notebook_path = readme_telemetry.readme_name.replace("examples/", "")
pipeline_name = readme_telemetry.workflow_name
yaml_name = f"{readme_telemetry.workflow_name}.yml"
description = get_readme_description_first_sentence(
readme_telemetry.readme_name
)
readme_folder = readme_telemetry.readme_folder
if readme_folder.startswith("examples/flows/standard"):
flows["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/connections"):
connections["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/flows/evaluation"):
evaluations["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/tutorials"):
if "quickstart" in notebook_name:
quickstarts["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
tutorials["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/flows/chat"):
chats["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
elif readme_folder.startswith("examples/tools/use-cases"):
toolusecases["readmes"].append(
{
"name": notebook_name,
"path": notebook_path,
"pipeline_name": pipeline_name,
"yaml_name": yaml_name,
"description": description,
}
)
else:
print(f"Unknown workflow type: {readme_folder}")
quickstarts["notebooks"] = sorted(
quickstarts["notebooks"],
key=itemgetter("name"),
reverse=True,
)
replacement = {
"branch": BRANCH,
"tutorials": tutorials,
"flows": flows,
"evaluations": evaluations,
"chats": chats,
"toolusecases": toolusecases,
"connections": connections,
"quickstarts": quickstarts,
}
print("writing README.md...")
env = Environment(
loader=FileSystemLoader(
Path(ReadmeStepsManage.git_base_dir())
/ "scripts/readme/ghactions_driver/readme_templates"
)
)
template = env.get_template("README.md.jinja2")
with open(readme_file, "w") as f:
f.write(template.render(replacement))
print("finished writing README.md")
def main(check):
if check:
# Disable print
sys.stdout = open(os.devnull, "w")
input_glob = ["examples/**/*.ipynb"]
workflow_telemetry = []
workflow_generator.main(input_glob, workflow_telemetry, check=check)
input_glob_readme = [
"examples/flows/**/README.md",
"examples/connections/**/README.md",
"examples/tutorials/e2e-development/*.md",
"examples/tutorials/flow-fine-tuning-evaluation/*.md",
"examples/tutorials/**/README.md",
"examples/tools/use-cases/**/README.md",
]
# exclude the readme since this is 3p integration folder, pipeline generation is not included
input_glob_readme_exclude = ["examples/flows/integrations/**/README.md"]
readme_telemetry = []
readme_generator.main(
input_glob_readme, input_glob_readme_exclude, readme_telemetry
)
write_readme(workflow_telemetry, readme_telemetry)
if check:
output_object = {}
for workflow in workflow_telemetry:
workflow_items = re.split(r"\[|,| |\]", workflow.path_filter)
workflow_items = list(filter(None, workflow_items))
output_object[workflow.workflow_name] = []
for item in workflow_items:
if item == "examples/*requirements.txt":
output_object[workflow.workflow_name].append(
"examples/requirements.txt"
)
output_object[workflow.workflow_name].append(
"examples/dev_requirements.txt"
)
continue
output_object[workflow.workflow_name].append(item)
for readme in readme_telemetry:
output_object[readme.workflow_name] = []
readme_items = re.split(r"\[|,| |\]", readme.path_filter)
readme_items = list(filter(None, readme_items))
for item in readme_items:
if item == "examples/*requirements.txt":
output_object[readme.workflow_name].append(
"examples/requirements.txt"
)
output_object[readme.workflow_name].append(
"examples/dev_requirements.txt"
)
continue
output_object[readme.workflow_name].append(item)
# enable output
sys.stdout = sys.__stdout__
return output_object
else:
return ""
if __name__ == "__main__":
# setup argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"-c", "--check", action="store_true", help="Check what file is affected"
)
args = parser.parse_args()
output = main(args.check)
print(json.dumps(output))
| promptflow/scripts/readme/readme.py/0 | {
"file_path": "promptflow/scripts/readme/readme.py",
"repo_id": "promptflow",
"token_count": 7045
} | 23 |
include {{ package_name }}/yamls/*.yaml | promptflow/scripts/tool/templates/MANIFEST.in.j2/0 | {
"file_path": "promptflow/scripts/tool/templates/MANIFEST.in.j2",
"repo_id": "promptflow",
"token_count": 14
} | 24 |
import inspect
from enum import Enum, EnumMeta
from typing import Callable, Union, get_args, get_origin
from promptflow.contracts.tool import ConnectionType, InputDefinition, ValueType, ToolType
from promptflow.contracts.types import PromptTemplate
def value_to_str(val):
if val is inspect.Parameter.empty:
# For empty case, default field will be skipped when dumping to json
return None
if val is None:
# Dump default: "" in json to avoid UI validation error
return ""
if isinstance(val, Enum):
return val.value
return str(val)
def resolve_annotation(anno) -> Union[str, list]:
"""Resolve the union annotation to type list."""
origin = get_origin(anno)
if origin != Union:
return anno
# Optional[Type] is Union[Type, NoneType], filter NoneType out
args = [arg for arg in get_args(anno) if arg != type(None)] # noqa: E721
return args[0] if len(args) == 1 else args
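# Illustrative examples (not part of the original module) of how annotations resolve:
#   resolve_annotation(str)             -> str          (non-Union annotations are returned unchanged)
#   resolve_annotation(Optional[str])   -> str          (NoneType is filtered out of the Union)
#   resolve_annotation(Union[int, str]) -> [int, str]   (multiple remaining args are kept as a list)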
def param_to_definition(param, value_type) -> (InputDefinition, bool):
default_value = param.default
enum = None
custom_type = None
# Get value type and enum from default if no annotation
if default_value is not inspect.Parameter.empty and value_type == inspect.Parameter.empty:
value_type = default_value.__class__ if isinstance(default_value, Enum) else type(default_value)
# Extract enum for enum class
if isinstance(value_type, EnumMeta):
enum = [str(option.value) for option in value_type]
value_type = str
is_connection = False
if ConnectionType.is_connection_value(value_type):
if ConnectionType.is_custom_strong_type(value_type):
typ = ["CustomConnection"]
custom_type = [value_type.__name__]
else:
typ = [value_type.__name__]
is_connection = True
elif isinstance(value_type, list):
if not all(ConnectionType.is_connection_value(t) for t in value_type):
typ = [ValueType.OBJECT]
else:
custom_connection_added = False
typ = []
custom_type = []
for t in value_type:
if ConnectionType.is_custom_strong_type(t):
if not custom_connection_added:
custom_connection_added = True
typ.append("CustomConnection")
custom_type.append(t.__name__)
else:
typ.append(t.__name__)
is_connection = True
else:
typ = [ValueType.from_type(value_type)]
return InputDefinition(type=typ, default=value_to_str(default_value),
description=None, enum=enum, custom_type=custom_type), is_connection
def function_to_interface(f: Callable, tool_type, initialize_inputs=None) -> tuple:
sign = inspect.signature(f)
all_inputs = {}
input_defs = {}
connection_types = []
# Initialize the counter for prompt template
prompt_template_count = 0
# Collect all inputs from class and func
if initialize_inputs:
if any(k for k in initialize_inputs if k in sign.parameters):
raise Exception(f'Duplicate inputs found from {f.__name__!r} and "__init__()"!')
all_inputs = {**initialize_inputs}
all_inputs.update(
{
k: v
for k, v in sign.parameters.items()
if k != "self" and v.kind != v.VAR_KEYWORD and v.kind != v.VAR_POSITIONAL # TODO: Handle these cases
}
)
# Resolve inputs to definitions.
for k, v in all_inputs.items():
# Get value type from annotation
value_type = resolve_annotation(v.annotation)
if value_type is PromptTemplate:
# custom llm tool has prompt template as input, skip it
prompt_template_count += 1
continue
input_def, is_connection = param_to_definition(v, value_type)
input_defs[k] = input_def
if is_connection:
connection_types.append(input_def.type)
# Check PromptTemplate input:
# a. For custom llm tool, there should be exactly one PromptTemplate input
# b. For python tool, PromptTemplate input is not supported
if tool_type == ToolType.PYTHON and prompt_template_count > 0:
raise Exception(f"Input of type 'PromptTemplate' not supported in python tool '{f.__name__}'. ")
if tool_type == ToolType.CUSTOM_LLM and prompt_template_count == 0:
raise Exception(f"No input of type 'PromptTemplate' was found in custom llm tool '{f.__name__}'. ")
if tool_type == ToolType.CUSTOM_LLM and prompt_template_count > 1:
raise Exception(f"Multiple inputs of type 'PromptTemplate' were found in '{f.__name__}'. "
"Only one input of this type is expected.")
outputs = {}
# Note: We don't have output definition now
# outputs = {"output": OutputDefinition("output", [ValueType.from_type(type(sign.return_annotation))], "", True)}
# if is_dataclass(sign.return_annotation):
# for f in fields(sign.return_annotation):
# outputs[f.name] = OutputDefinition(f.name, [ValueType.from_type(
# type(getattr(sign.return_annotation, f.name)))], "", False)
return input_defs, outputs, connection_types
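# Illustrative usage sketch (not part of the original module). The tool function below is
# hypothetical and only shows how a plain signature maps to input definitions:
#
#   def my_tool(question: str, count: int = 3) -> str:
#       ...
#
#   input_defs, outputs, connection_types = function_to_interface(my_tool, ToolType.PYTHON)
#   # input_defs["question"].type == [ValueType.STRING]
#   # input_defs["count"].default == "3"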
| promptflow/scripts/tool/utils/tool_utils.py/0 | {
"file_path": "promptflow/scripts/tool/utils/tool_utils.py",
"repo_id": "promptflow",
"token_count": 2128
} | 25 |
import json
import os
import pytest
import sys
from pathlib import Path
from pytest_mock import MockerFixture # noqa: E402
from tests.utils import verify_url_exists
# Avoid circular dependencies: Use import 'from promptflow._internal' instead of 'from promptflow'
# since the code here is in promptflow namespace as well
from promptflow._internal import ConnectionManager
from promptflow.connections import CustomConnection, OpenAIConnection, SerpConnection
from promptflow.contracts.multimedia import Image
from promptflow.tools.aoai import AzureOpenAI
PROMOTFLOW_ROOT = Path(__file__).absolute().parents[1]
CONNECTION_FILE = (PROMOTFLOW_ROOT / "connections.json").resolve().absolute().as_posix()
root_str = str(PROMOTFLOW_ROOT.resolve().absolute())
if root_str not in sys.path:
sys.path.insert(0, root_str)
# connection
@pytest.fixture(autouse=True)
def use_secrets_config_file(mocker: MockerFixture):
mocker.patch.dict(os.environ, {"PROMPTFLOW_CONNECTIONS": CONNECTION_FILE})
@pytest.fixture
def azure_open_ai_connection():
return ConnectionManager().get("azure_open_ai_connection")
@pytest.fixture
def aoai_provider(azure_open_ai_connection) -> AzureOpenAI:
aoai_provider = AzureOpenAI(azure_open_ai_connection)
return aoai_provider
@pytest.fixture
def open_ai_connection():
return ConnectionManager().get("open_ai_connection")
@pytest.fixture
def serp_connection():
return ConnectionManager().get("serp_connection")
def verify_om_llm_custom_connection(connection: CustomConnection) -> bool:
'''Verify that there is a MIR endpoint up and available for the Custom Connection.
We explicitly do not pass the endpoint key to avoid the delay in generating a response.
'''
return verify_url_exists(connection.configs['endpoint_url'])
@pytest.fixture
def gpt2_custom_connection():
return ConnectionManager().get("gpt2_connection")
@pytest.fixture
def open_model_llm_ws_service_connection() -> bool:
try:
creds_custom_connection: CustomConnection = ConnectionManager().get("open_source_llm_ws_service_connection")
subs = json.loads(creds_custom_connection.secrets['service_credential'])
for key, value in subs.items():
os.environ[key] = value
return True
except Exception as e:
print(f"""Something failed setting environment variables for service credentials.
Error: {e}""")
return False
@pytest.fixture(autouse=True)
def skip_if_no_api_key(request, mocker):
mocker.patch.dict(os.environ, {"PROMPTFLOW_CONNECTIONS": CONNECTION_FILE})
if request.node.get_closest_marker('skip_if_no_api_key'):
conn_name = request.node.get_closest_marker('skip_if_no_api_key').args[0]
connection = request.getfixturevalue(conn_name)
# if dummy placeholder key, skip.
if isinstance(connection, OpenAIConnection) or isinstance(connection, SerpConnection):
if "-api-key" in connection.api_key:
pytest.skip('skipped because no key')
elif isinstance(connection, CustomConnection):
if "endpoint_api_key" not in connection.secrets or "-api-key" in connection.secrets["endpoint_api_key"]:
pytest.skip('skipped because no key')
# Verify Custom Connections, but only those used by the Open_Model_LLM Tool
if "endpoint_url" in connection.configs and "-endpoint-url" not in connection.configs["endpoint_url"]:
if not verify_om_llm_custom_connection(connection):
pytest.skip('skipped because the connection is not valid')
# example prompts
@pytest.fixture
def example_prompt_template() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/marketing_writer/prompt.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def example_prompt_template_with_name_in_roles() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/prompt_with_name_in_roles.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def chat_history() -> list:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/marketing_writer/history.json") as f:
history = json.load(f)
return history
@pytest.fixture
def example_prompt_template_with_function() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/prompt_with_function.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def example_prompt_template_with_image() -> str:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/prompt_with_image.jinja2") as f:
prompt_template = f.read()
return prompt_template
@pytest.fixture
def example_image() -> Image:
with open(PROMOTFLOW_ROOT / "tests/test_configs/prompt_templates/images/number10.jpg", "rb") as f:
image = Image(f.read())
return image
# functions
@pytest.fixture
def functions():
return [
{
"name": "get_current_weather",
"parameters": {
"type": "object",
"properties": {},
},
}
]
@pytest.fixture
def azure_content_safety_connection():
return ConnectionManager().get("azure_content_safety_connection")
| promptflow/src/promptflow-tools/tests/conftest.py/0 | {
"file_path": "promptflow/src/promptflow-tools/tests/conftest.py",
"repo_id": "promptflow",
"token_count": 2009
} | 26 |
import pytest
from promptflow.tools.openai_gpt4v import OpenAI
@pytest.fixture
def openai_provider(open_ai_connection) -> OpenAI:
return OpenAI(open_ai_connection)
@pytest.mark.usefixtures("use_secrets_config_file")
@pytest.mark.skip_if_no_api_key("open_ai_connection")
class TestOpenAIGPT4V:
def test_openai_gpt4v_chat(self, openai_provider, example_prompt_template_with_image, example_image):
result = openai_provider.chat(
prompt=example_prompt_template_with_image,
model="gpt-4-vision-preview",
max_tokens=480,
temperature=0,
question="which number did you see in this picture?",
image_input=example_image,
)
assert "10" == result
def test_openai_gpt4v_stream_chat(self, openai_provider, example_prompt_template_with_image, example_image):
result = openai_provider.chat(
prompt=example_prompt_template_with_image,
model="gpt-4-vision-preview",
max_tokens=480,
temperature=0,
question="which number did you see in this picture?",
            image_input=example_image,
            stream=True,  # assumed tool parameter that switches the chat output to a chunk generator
        )
        answer = ""
        while True:
            try:
                answer += next(result)
            except StopIteration:
                break
        assert "10" == answer
| promptflow/src/promptflow-tools/tests/test_openai_gpt4v.py/0 | {
"file_path": "promptflow/src/promptflow-tools/tests/test_openai_gpt4v.py",
"repo_id": "promptflow",
"token_count": 626
} | 27 |
import argparse
import json
from promptflow._cli._params import add_param_set_positional, base_params
from promptflow._cli._utils import activate_action, list_of_dict_to_dict
from promptflow._sdk._configuration import Configuration, InvalidConfigValue
from promptflow._sdk._utils import print_red_error
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
def add_config_set(subparsers):
epilog = """
Examples:
# Config connection provider to azure workspace for current user:
pf config set connection.provider="azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
""" # noqa: E501
activate_action(
name="set",
description="Set prompt flow configs for current user.",
epilog=epilog,
add_params=[add_param_set_positional] + base_params,
subparsers=subparsers,
help_message="Set prompt flow configs for current user, configs will be stored at ~/.promptflow/pf.yaml.",
action_param_name="sub_action",
)
def add_config_show(subparsers):
epilog = """
Examples:
# Show prompt flow for current user:
pf config show
"""
activate_action(
name="show",
description="Show prompt flow configs for current user.",
epilog=epilog,
add_params=base_params,
subparsers=subparsers,
help_message="Show prompt flow configs for current user.",
action_param_name="sub_action",
)
def add_config_parser(subparsers):
config_parser = subparsers.add_parser(
"config", description="A CLI tool to set prompt flow configs for current user.", help="pf config"
)
subparsers = config_parser.add_subparsers()
add_config_set(subparsers)
add_config_show(subparsers)
config_parser.set_defaults(action="config")
def dispatch_config_commands(args: argparse.Namespace):
if args.sub_action == "set":
set_config(args)
if args.sub_action == "show":
show_config()
def set_config(args):
params_override = list_of_dict_to_dict(args.params_override)
for k, v in params_override.items():
logger.debug("Setting config %s to %s", k, v)
try:
Configuration.get_instance().set_config(k, v)
print(f"Set config {args.params_override} successfully.")
except InvalidConfigValue as e:
error_message = f"Invalid config value {v!r} for {k!r}: {str(e)}"
print_red_error(error_message)
def show_config():
configs = Configuration.get_instance().get_all()
print(json.dumps(configs, indent=4))
| promptflow/src/promptflow/promptflow/_cli/_pf/_config.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/_pf/_config.py",
"repo_id": "promptflow",
"token_count": 1038
} | 28 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from promptflow._version import VERSION
USER_AGENT = "{}/{}".format("promptflow-cli", VERSION)
| promptflow/src/promptflow/promptflow/_cli/_user_agent.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/_user_agent.py",
"repo_id": "promptflow",
"token_count": 56
} | 29 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
groundtruth:
type: string
prediction:
type: string
outputs:
results:
type: string
reference: ${line_process.output}
nodes:
- name: line_process
type: python
source:
type: code
path: line_process.py
inputs:
groundtruth: ${inputs.groundtruth}
prediction: ${inputs.prediction}
- name: aggregate
type: python
source:
type: code
path: aggregate.py
inputs:
processed_results: ${line_process.output}
aggregation: true
environment:
python_requirements_txt: requirements.txt
| promptflow/src/promptflow/promptflow/_cli/data/evaluation_flow/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_cli/data/evaluation_flow/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 225
} | 30 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import hashlib
import json
from dataclasses import dataclass
from typing import Callable, List
from promptflow._utils.logger_utils import flow_logger
from promptflow.contracts.run_info import RunInfo
from promptflow.storage import AbstractCacheStorage, AbstractRunStorage
PROMPTFLOW_HASH_ATTR = "__promptflow_hash_func"
def get_calculate_cache_func(tool_func):
return getattr(tool_func, PROMPTFLOW_HASH_ATTR, None)
def set_calculate_cache_func(tool_func, calculate_cache_func):
setattr(tool_func, PROMPTFLOW_HASH_ATTR, calculate_cache_func)
def enable_cache(calculate_cache_func):
def decorator_enable_cache(func):
set_calculate_cache_func(func, calculate_cache_func)
return func
return decorator_enable_cache
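# Illustrative example (not part of the original module): a tool opts into caching by
# registering a function that builds the cache string from the same arguments. The tool
# below is hypothetical.
#
#   def _cache_key(text: str) -> str:
#       return json.dumps({"text": text})
#
#   @enable_cache(_cache_key)
#   def my_tool(text: str) -> str:
#       return text.upper()
#
#   assert get_calculate_cache_func(my_tool) is _cache_key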
@dataclass
class CacheInfo:
hash_id: str = None
cache_string: str = None
@dataclass
class CacheResult:
result: object = None
cached_run_id: str = None
cached_flow_run_id: str = None
hit_cache: bool = False
class AbstractCacheManager:
@staticmethod
def init_from_env() -> "AbstractCacheManager":
# TODO: Return CacheManager after local execution is enabled.
return DummyCacheManager()
def calculate_cache_info(self, flow_id: str, tool_method: Callable, args, kwargs) -> CacheInfo:
raise NotImplementedError("AbstractCacheManager has not implemented method calculate_cache_info.")
def get_cache_result(self, cache_info: CacheInfo) -> CacheResult:
raise NotImplementedError("AbstractCacheManager has not implemented method get_cache_result.")
def persist_result(self, run_info: RunInfo, hash_id: str, cache_string: str, flow_id: str):
raise NotImplementedError("AbstractCacheManager has not implemented method persist_result.")
class DummyCacheManager(AbstractCacheManager):
def __init__(self):
pass
def calculate_cache_info(self, flow_id: str, tool_method: Callable, args, kwargs) -> CacheInfo:
return None
def get_cache_result(self, cache_info: CacheInfo) -> CacheResult:
return None
def persist_result(self, run_info: RunInfo, hash_id: str, cache_string: str, flow_id: str):
pass
class CacheManager(AbstractCacheManager):
def __init__(self, run_storage: AbstractRunStorage, cache_storage: AbstractCacheStorage):
self._run_storage = run_storage
self._cache_storage = cache_storage
def calculate_cache_info(self, flow_id: str, tool_method: Callable, args, kwargs) -> CacheInfo:
cache_function = get_calculate_cache_func(tool_method)
# Cache function is not registered with this tool.
if cache_function is None:
return None
# Calculate cache string and hash id.
try:
cache_string = cache_function(*args, **kwargs)
except Exception as ex:
flow_logger.warning(f"Failed to calculate cache string. Exception: {ex}")
return None
# Add flow_id and tool_name in the cache string.
# So that different flow_id and tool_name cannot reuse.
other_cache_string = json.dumps(
{
"flow_id": flow_id,
"tool_name": tool_method.__qualname__,
}
)
cache_string += other_cache_string
hash_id = self._calculate_hash_id(cache_string)
return CacheInfo(hash_id=hash_id, cache_string=cache_string)
def get_cache_result(self, cache_info: CacheInfo) -> CacheResult:
hash_id = cache_info.hash_id
# Query if cache result existed by hash_id.
cache_result_list: List[CacheInfo] = self._cache_storage.get_cache_record_list(hash_id=hash_id)
if len(cache_result_list) == 0:
return None
# Get the latest cache result.
cache_result = sorted(cache_result_list, reverse=True, key=lambda i: i.end_time)[0]
try:
cached_run_info = self._run_storage.get_node_run(cache_result.run_id)
except Exception as ex:
            flow_logger.warning(
                f"Failed to get cached run result. \
                Run id:{cache_result.run_id} \
                Exception: {ex}"
            )
return None
flow_logger.info(
f"Hit cached result of previous run: run id: \
{cached_run_info.run_id}, flow run id: {cached_run_info.flow_run_id}"
)
return CacheResult(
result=cached_run_info.result,
cached_run_id=cached_run_info.run_id,
cached_flow_run_id=cached_run_info.flow_run_id,
hit_cache=True,
)
def persist_result(self, run_info: RunInfo, cache_info: CacheInfo, flow_id: str):
self._cache_storage.persist_cache_result(run_info, cache_info.hash_id, cache_info.cache_string, flow_id)
@staticmethod
def _calculate_hash_id(cache_string: str):
return hashlib.sha1(cache_string.encode("utf-8")).hexdigest()
| promptflow/src/promptflow/promptflow/_core/cache_manager.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_core/cache_manager.py",
"repo_id": "promptflow",
"token_count": 2075
} | 31 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
try:
from flask_restx import Api, Namespace, Resource, fields # noqa: F401
except ImportError as ex:
from promptflow.exceptions import UserErrorException
raise UserErrorException(f"Please try 'pip install promptflow[pfs]' to install dependency, {ex.msg}.")
| promptflow/src/promptflow/promptflow/_sdk/_service/__init__.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_service/__init__.py",
"repo_id": "promptflow",
"token_count": 138
} | 32 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from flask import Blueprint, current_app as app, request
from promptflow._sdk._serving.monitor.flow_monitor import FlowMonitor
def is_monitoring_enabled() -> bool:
enabled = False
if request.endpoint in app.view_functions:
view_func = app.view_functions[request.endpoint]
enabled = hasattr(view_func, "_enable_monitoring")
return enabled
def construct_monitor_blueprint(flow_monitor: FlowMonitor):
"""Construct monitor blueprint."""
monitor_blueprint = Blueprint("monitor_blueprint", __name__)
@monitor_blueprint.before_app_request
def start_monitoring():
if not is_monitoring_enabled():
return
flow_monitor.start_monitoring()
@monitor_blueprint.after_app_request
def finish_monitoring(response):
if not is_monitoring_enabled():
return response
flow_monitor.finish_monitoring(response.status_code)
return response
return monitor_blueprint
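# Illustrative sketch (not part of the original module): monitoring only applies to view
# functions carrying the "_enable_monitoring" attribute, e.g. set by a decorator like the
# hypothetical one below.
#
#   def enable_monitoring(func):
#       func._enable_monitoring = True
#       return func
#
#   @app.route("/score", methods=["POST"])
#   @enable_monitoring
#   def score():
#       ...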
| promptflow/src/promptflow/promptflow/_sdk/_serving/blueprint/monitor_blueprint.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/blueprint/monitor_blueprint.py",
"repo_id": "promptflow",
"token_count": 365
} | 33 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import logging
from promptflow.contracts.flow import Flow, FlowInputDefinition, FlowOutputDefinition
from promptflow.contracts.tool import ValueType
type_mapping = {
ValueType.INT: "integer",
ValueType.DOUBLE: "number",
ValueType.BOOL: "boolean",
ValueType.STRING: "string",
ValueType.LIST: "array",
ValueType.OBJECT: "object",
ValueType.IMAGE: "object", # Dump as object as portal test page can't handle image now
}
def generate_input_field_schema(input: FlowInputDefinition) -> dict:
field_schema = {"type": type_mapping[input.type]}
if input.description:
field_schema["description"] = input.description
if input.default:
field_schema["default"] = input.default
if input.type == ValueType.OBJECT:
field_schema["additionalProperties"] = {}
if input.type == ValueType.LIST:
field_schema["items"] = {"type": "object", "additionalProperties": {}}
return field_schema
def generate_output_field_schema(output: FlowOutputDefinition) -> dict:
field_schema = {"type": type_mapping[output.type]}
if output.description:
field_schema["description"] = output.description
if output.type == ValueType.OBJECT:
field_schema["additionalProperties"] = {}
if output.type == ValueType.LIST:
field_schema["items"] = {"type": "object", "additionalProperties": {}}
return field_schema
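# Illustrative example (not part of the original module): a string-typed field with a
# description maps to {"type": "string", "description": "..."}; list-typed fields
# additionally get an "items" entry of object type, and object-typed fields get
# "additionalProperties": {}.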
def generate_swagger(flow: Flow, samples, outputs_to_remove: list) -> dict:
"""convert a flow to swagger object."""
swagger = {"openapi": "3.0.0"}
swagger["info"] = {
"title": f"Promptflow[{flow.name}] API",
"version": "1.0.0",
"x-flow-name": str(flow.name),
}
swagger["components"] = {
"securitySchemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
}
}
}
swagger["security"] = [{"bearerAuth": []}]
input_schema = {"type": "object"}
request_body_required = False
if len(flow.inputs) > 0:
input_schema["properties"] = {}
input_schema["required"] = []
request_body_required = True
for name, input in flow.inputs.items():
if input.is_chat_input:
swagger["info"]["x-chat-input"] = name
swagger["info"]["x-flow-type"] = "chat"
if input.is_chat_history:
swagger["info"]["x-chat-history"] = name
input_schema["properties"][name] = generate_input_field_schema(input)
input_schema["required"].append(name)
output_schema = {"type": "object"}
if len(flow.outputs) > 0:
output_schema["properties"] = {}
for name, output in flow.outputs.items():
# skip evaluation only outputs in swagger
# TODO remove this if sdk removed this evaluation_only field
if output.evaluation_only:
continue
if output.is_chat_output:
swagger["info"]["x-chat-output"] = name
if outputs_to_remove and name in outputs_to_remove:
continue
output_schema["properties"][name] = generate_output_field_schema(output)
example = {}
if samples:
if isinstance(samples, list):
example = samples[0]
else:
logging.warning("samples should be a list of dict, but got %s, skipped.", type(samples))
swagger["paths"] = {
"/score": {
"post": {
"summary": f"run promptflow: {flow.name} with an given input",
"requestBody": {
"description": "promptflow input data",
"required": request_body_required,
"content": {
"application/json": {
"schema": input_schema,
"example": example, # need to check this based on the sample data
}
},
},
"responses": {
"200": {
"description": "successful operation",
"content": {
"application/json": {
"schema": output_schema,
}
},
},
"400": {
"description": "Invalid input",
},
"default": {
"description": "unexpected error",
},
},
}
}
}
return swagger
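# Illustrative usage sketch (not part of the original module): `flow` is a parsed Flow
# contract; the sample payload below is hypothetical.
#
#   swagger = generate_swagger(flow, samples=[{"question": "hi"}], outputs_to_remove=[])
#   assert "/score" in swagger["paths"]
#   assert swagger["info"]["x-flow-name"] == str(flow.name)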
| promptflow/src/promptflow/promptflow/_sdk/_serving/swagger.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/swagger.py",
"repo_id": "promptflow",
"token_count": 2285
} | 34 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import shutil
import tempfile
import webbrowser
from dataclasses import asdict
from pathlib import Path
from typing import Optional
from promptflow._sdk._constants import VIS_HTML_TMPL, VIS_JS_BUNDLE_FILENAME
from promptflow._sdk._utils import render_jinja_template
from promptflow.contracts._run_management import VisualizationRender
def generate_html_string(data: dict) -> str:
visualization_render = VisualizationRender(data=data)
return render_jinja_template(VIS_HTML_TMPL, **asdict(visualization_render))
def try_to_open_html(html_path: str) -> None:
print(f"The HTML file is generated at {str(Path(html_path).resolve().absolute())!r}.")
print("Trying to view the result in a web browser...")
    web_browser_opened = webbrowser.open(f"file://{html_path}")
if not web_browser_opened:
print(
f"Failed to visualize from the web browser, the HTML file locates at {html_path!r}.\n"
"You can manually open it with your web browser, or try SDK to visualize it."
)
else:
print("Successfully visualized from the web browser.")
def dump_js_bundle(html_path: str) -> None:
js_bundle_src_path = Path(__file__).parent / "data" / VIS_JS_BUNDLE_FILENAME
js_bundle_dst_path = Path(html_path).parent / VIS_JS_BUNDLE_FILENAME
shutil.copy(js_bundle_src_path, js_bundle_dst_path)
def dump_html(html_string: str, html_path: Optional[str] = None, open_html: bool = True) -> None:
if html_path is not None:
with open(html_path, "w") as f:
f.write(html_string)
else:
with tempfile.NamedTemporaryFile(prefix="pf-visualize-detail-", suffix=".html", delete=False) as f:
f.write(html_string.encode("utf-8"))
html_path = f.name
dump_js_bundle(html_path)
if open_html:
try_to_open_html(html_path)
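# Illustrative usage sketch (not part of the original module):
#
#   html_string = generate_html_string({"runs": []})  # the data shape here is an assumption
#   dump_html(html_string, html_path="visualize.html", open_html=False)
#
# When html_path is None, a temporary "pf-visualize-detail-*.html" file is created and the
# JS bundle is copied next to it.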
| promptflow/src/promptflow/promptflow/_sdk/_visualize_functions.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_visualize_functions.py",
"repo_id": "promptflow",
"token_count": 751
} | 35 |
import json
import os
from pathlib import Path
from PIL import Image
import streamlit as st
from streamlit_quill import st_quill
from copy import copy
from types import GeneratorType
import time
from promptflow import load_flow
from promptflow._sdk._utils import dump_flow_result
from promptflow._utils.multimedia_utils import convert_multimedia_data_to_base64, persist_multimedia_data
from promptflow._sdk._submitter.utils import get_result_output, resolve_generator
from utils import dict_iter_render_message, parse_list_from_html, parse_image_content, render_single_dict_message
invoker = None
generator_record = {}
def start():
def clear_chat() -> None:
st.session_state.messages = []
def render_message(role, message_items):
with st.chat_message(role):
if is_chat_flow:
render_single_dict_message(message_items)
else:
dict_iter_render_message(message_items)
def show_conversation() -> None:
if "messages" not in st.session_state:
st.session_state.messages = []
st.session_state.history = []
if st.session_state.messages:
for role, message_items in st.session_state.messages:
render_message(role, message_items)
def get_chat_history_from_session():
if "history" in st.session_state:
return st.session_state.history
return []
def post_process_dump_result(response, session_state_history):
response = resolve_generator(response, generator_record)
# Get base64 for multi modal object
resolved_outputs = {
k: convert_multimedia_data_to_base64(v, with_type=True, dict_type=True)
for k, v in response.output.items()
}
st.session_state.messages.append(("assistant", resolved_outputs))
session_state_history.update({"outputs": response.output})
st.session_state.history.append(session_state_history)
if is_chat_flow:
dump_path = Path(flow_path).parent
response.output = persist_multimedia_data(
response.output, base_dir=dump_path, sub_dir=Path(".promptflow/output")
)
dump_flow_result(flow_folder=dump_path, flow_result=response, prefix="chat")
return resolved_outputs
def submit(**kwargs) -> None:
st.session_state.messages.append(("user", kwargs))
session_state_history = dict()
session_state_history.update({"inputs": kwargs})
with container:
render_message("user", kwargs)
# Force append chat history to kwargs
if is_chat_flow:
response = run_flow({chat_history_input_name: get_chat_history_from_session(), **kwargs})
else:
response = run_flow(kwargs)
if is_streaming:
# Display assistant response in chat message container
with container:
with st.chat_message("assistant"):
message_placeholder = st.empty()
full_response = f"{chat_output_name}:"
chat_output = response.output[chat_output_name]
if isinstance(chat_output, GeneratorType):
# Simulate stream of response with milliseconds delay
for chunk in get_result_output(chat_output, generator_record):
full_response += chunk + " "
time.sleep(0.05)
# Add a blinking cursor to simulate typing
message_placeholder.markdown(full_response + "▌")
message_placeholder.markdown(full_response)
post_process_dump_result(response, session_state_history)
return
resolved_outputs = post_process_dump_result(response, session_state_history)
with container:
render_message("assistant", resolved_outputs)
def run_flow(data: dict) -> dict:
global invoker
if not invoker:
if flow_path:
flow = Path(flow_path)
else:
flow = Path(__file__).parent / "flow"
if flow.is_dir():
os.chdir(flow)
else:
os.chdir(flow.parent)
invoker = load_flow(flow)
invoker.context.streaming = is_streaming
result = invoker.invoke(data)
return result
image = Image.open(Path(__file__).parent / "logo.png")
st.set_page_config(
layout="wide",
page_title=f"{flow_name} - Promptflow App",
page_icon=image,
menu_items={
'About': """
# This is a Promptflow App.
You can refer to [promptflow](https://github.com/microsoft/promptflow) for more information.
"""
}
)
# Set primary button color here since button color of the same form need to be identical in streamlit, but we only
# need Run/Chat button to be blue.
st.config.set_option("theme.primaryColor", "#0F6CBD")
st.title(flow_name)
st.divider()
st.chat_message("assistant").write("Hello, please input following flow inputs.")
container = st.container()
with container:
show_conversation()
with st.form(key='input_form', clear_on_submit=True):
settings_path = os.path.join(os.path.dirname(__file__), "settings.json")
if os.path.exists(settings_path):
with open(settings_path, "r", encoding="utf-8") as file:
json_data = json.load(file)
environment_variables = list(json_data.keys())
for environment_variable in environment_variables:
secret_input = st.sidebar.text_input(label=environment_variable, type="password",
placeholder=f"Please input {environment_variable} here. "
f"If you input before, you can leave it blank.")
if secret_input != "":
os.environ[environment_variable] = secret_input
flow_inputs_params = {}
for flow_input, (default_value, value_type) in flow_inputs.items():
if value_type == "list":
st.text(flow_input)
input = st_quill(html=True, toolbar=["image"], key=flow_input,
placeholder='Please enter the list values and use the image icon to upload a picture. '
'Make sure to format each list item correctly with line breaks')
elif value_type == "image":
input = st.file_uploader(label=flow_input)
elif value_type == "string":
input = st.text_input(label=flow_input, placeholder=default_value)
else:
input = st.text_input(label=flow_input, placeholder=default_value)
flow_inputs_params.update({flow_input: copy(input)})
cols = st.columns(7)
submit_bt = cols[0].form_submit_button(label=label, type='primary')
clear_bt = cols[1].form_submit_button(label='Clear')
if submit_bt:
with st.spinner("Loading..."):
for flow_input, (default_value, value_type) in flow_inputs.items():
if value_type == "list":
input = parse_list_from_html(flow_inputs_params[flow_input])
flow_inputs_params.update({flow_input: copy(input)})
elif value_type == "image":
input = parse_image_content(
flow_inputs_params[flow_input],
flow_inputs_params[flow_input].type if flow_inputs_params[flow_input] else None
)
flow_inputs_params.update({flow_input: copy(input)})
submit(**flow_inputs_params)
if clear_bt:
with st.spinner("Cleaning..."):
clear_chat()
st.rerun()
if __name__ == "__main__":
with open(Path(__file__).parent / "config.json", 'r') as f:
config = json.load(f)
is_chat_flow = config["is_chat_flow"]
chat_history_input_name = config["chat_history_input_name"]
flow_path = config["flow_path"]
flow_name = config["flow_name"]
flow_inputs = config["flow_inputs"]
label = config["label"]
is_streaming = config["is_streaming"]
chat_output_name = config["chat_output_name"]
start()
| promptflow/src/promptflow/promptflow/_sdk/data/executable/main.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/data/executable/main.py",
"repo_id": "promptflow",
"token_count": 4010
} | 36 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
from ._flow_operations import FlowOperations
from ._run_operations import RunOperations
__all__ = [
"FlowOperations",
"RunOperations",
]
| promptflow/src/promptflow/promptflow/_sdk/operations/__init__.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/operations/__init__.py",
"repo_id": "promptflow",
"token_count": 104
} | 37 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import time
from functools import wraps
from typing import Tuple, Type, Union
from requests import Response
from promptflow._utils.logger_utils import LoggerFactory
logger = LoggerFactory.get_logger(__name__)
def retry(exception_to_check: Union[Type[Exception], Tuple[Type[Exception], ...]], tries=4, delay=3, backoff=2):
"""
    Retry calling the decorated function using an exponential backoff.

    Adapted from https://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/,
    original from http://wiki.python.org/moin/PythonDecoratorLibrary#Retry.
:param exception_to_check: the exception to check. may be a tuple of
exceptions to check
:type exception_to_check: Exception or tuple
:param tries: number of times to try (not retry) before giving up
:type tries: int
:param delay: initial delay between retries in seconds
:type delay: int
:param backoff: backoff multiplier e.g. value of 2 will double the delay
each retry
:type backoff: int
"""
def deco_retry(f):
@wraps(f)
def f_retry(*args, **kwargs):
retry_times, delay_seconds = tries, delay
while retry_times > 1:
try:
logger.debug("Running %s, %d more tries to go.", str(f), retry_times)
return f(*args, **kwargs)
except exception_to_check:
time.sleep(delay_seconds)
retry_times -= 1
delay_seconds *= backoff
logger.warning("%s, Retrying in %d seconds...", str(exception_to_check), delay_seconds)
return f(*args, **kwargs)
return f_retry # true decorator
return deco_retry
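# Illustrative usage sketch (not part of the original module): the decorated call is
# attempted up to `tries` times, sleeping `delay` seconds initially and multiplying the
# wait by `backoff` after each failure.
#
#   @retry(ConnectionError, tries=3, delay=1, backoff=2)
#   def fetch_data():
#       ...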
HTTP_SAFE_CODES = set(range(506)) - {408, 429, 500, 502, 503, 504}
HTTP_RETRY_CODES = set(range(999)) - HTTP_SAFE_CODES
def http_retry_wrapper(f, tries=4, delay=3, backoff=2):
"""
:param f: function to be retried, should return a Response object.
:type f: Callable
:param tries: number of times to try (not retry) before giving up
:type tries: int
:param delay: initial delay between retries in seconds
:type delay: int
:param backoff: backoff multiplier e.g. value of 2 will double the delay
each retry
:type backoff: int
"""
@wraps(f)
def f_retry(*args, **kwargs):
retry_times, delay_seconds = tries, delay
while retry_times > 1:
result = f(*args, **kwargs)
if not isinstance(result, Response):
logger.debug(f"Not a retryable function, expected return type {Response}, got {type(result)}.")
return result
if result.status_code not in HTTP_RETRY_CODES:
return result
logger.warning(
f"Retryable error code {result.status_code} returned, retrying in {delay_seconds} seconds. "
f"Function {f.__name__}, Reason: {result.reason}"
)
time.sleep(delay_seconds)
retry_times -= 1
delay_seconds *= backoff
return f(*args, **kwargs)
return f_retry
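# Illustrative usage sketch (not part of the original module): wrap a callable that returns
# a requests.Response so retryable status codes are re-attempted. The URL is hypothetical.
#
#   import requests
#
#   def call_service():
#       return requests.get("https://example.com/health")
#
#   resilient_call = http_retry_wrapper(call_service, tries=3, delay=1)
#   response = resilient_call()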
| promptflow/src/promptflow/promptflow/_utils/retry_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/retry_utils.py",
"repo_id": "promptflow",
"token_count": 1424
} | 38 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import os
from os import PathLike
from typing import Dict, List, Optional, Union
from azure.ai.ml import MLClient
from azure.core.credentials import TokenCredential
from promptflow._sdk._constants import MAX_SHOW_DETAILS_RESULTS
from promptflow._sdk._errors import RunOperationParameterError
from promptflow._sdk._user_agent import USER_AGENT
from promptflow._sdk._utils import ClientUserAgentUtil, setup_user_agent_to_operation_context
from promptflow._sdk.entities import Run
from promptflow.azure._restclient.service_caller_factory import _FlowServiceCallerFactory
from promptflow.azure.operations import RunOperations
from promptflow.azure.operations._arm_connection_operations import ArmConnectionOperations
from promptflow.azure.operations._connection_operations import ConnectionOperations
from promptflow.azure.operations._flow_operations import FlowOperations
from promptflow.exceptions import UserErrorException
class PFClient:
"""A client class to interact with Promptflow service.
Use this client to manage promptflow resources, e.g. runs.
:param credential: Credential to use for authentication, optional
:type credential: ~azure.core.credentials.TokenCredential
:param subscription_id: Azure subscription ID, optional for registry assets only, optional
:type subscription_id: typing.Optional[str]
:param resource_group_name: Azure resource group, optional for registry assets only, optional
:type resource_group_name: typing.Optional[str]
:param workspace_name: Workspace to use in the client, optional for non workspace dependent operations only,
optional.
:type workspace_name: typing.Optional[str]
:param kwargs: A dictionary of additional configuration parameters.
:type kwargs: dict
"""
def __init__(
self,
credential: TokenCredential = None,
subscription_id: Optional[str] = None,
resource_group_name: Optional[str] = None,
workspace_name: Optional[str] = None,
**kwargs,
):
self._validate_config_information(subscription_id, resource_group_name, workspace_name, kwargs)
# add user agent from kwargs if any
if isinstance(kwargs.get("user_agent", None), str):
ClientUserAgentUtil.append_user_agent(kwargs["user_agent"])
# append SDK ua to context
user_agent = setup_user_agent_to_operation_context(USER_AGENT)
kwargs.setdefault("user_agent", user_agent)
self._ml_client = kwargs.pop("ml_client", None) or MLClient(
credential=credential,
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
**kwargs,
)
try:
workspace = self._ml_client.workspaces.get(name=self._ml_client._operation_scope.workspace_name)
except Exception as e:
raise UserErrorException(message=str(e), error=e)
self._service_caller = _FlowServiceCallerFactory.get_instance(
workspace=workspace,
credential=self._ml_client._credential,
operation_scope=self._ml_client._operation_scope,
**kwargs,
)
self._flows = FlowOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
workspace=workspace,
**kwargs,
)
self._runs = RunOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
flow_operations=self._flows,
service_caller=self._service_caller,
workspace=workspace,
**kwargs,
)
self._connections = ConnectionOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
**kwargs,
)
self._arm_connections = ArmConnectionOperations(
operation_scope=self._ml_client._operation_scope,
operation_config=self._ml_client._operation_config,
all_operations=self._ml_client._operation_container,
credential=self._ml_client._credential,
service_caller=self._service_caller,
**kwargs,
)
@staticmethod
def _validate_config_information(subscription_id, resource_group_name, workspace_name, kwargs):
"""Validate the config information in case wrong parameter name is passed into the constructor."""
sub_name, wrong_sub_name = "subscription_id", "subscription"
rg_name, wrong_rg_name = "resource_group_name", "resource_group"
ws_name, wrong_ws_name = "workspace_name", "workspace"
error_message = (
"You have passed in the wrong parameter name to initialize the PFClient, please use {0!r} instead of {1!r}."
)
if not subscription_id and kwargs.get(wrong_sub_name, None) is not None:
raise RunOperationParameterError(error_message.format(sub_name, wrong_sub_name))
if not resource_group_name and kwargs.get(wrong_rg_name, None) is not None:
raise RunOperationParameterError(error_message.format(rg_name, wrong_rg_name))
if not workspace_name and kwargs.get(wrong_ws_name, None) is not None:
raise RunOperationParameterError(error_message.format(ws_name, wrong_ws_name))
@property
def ml_client(self):
"""Return a client to interact with Azure ML services."""
return self._ml_client
@property
def runs(self):
"""Return the run operation object that can manage runs."""
return self._runs
@property
def flows(self):
"""Return the flow operation object that can manage flows."""
return self._flows
@classmethod
def from_config(
cls,
credential: TokenCredential,
*,
path: Optional[Union[os.PathLike, str]] = None,
file_name=None,
**kwargs,
) -> "PFClient":
"""Return a PFClient object connected to Azure Machine Learning workspace.
Reads workspace configuration from a file. Throws an exception if the config file can't be found.
The method provides a simple way to reuse the same workspace across multiple Python notebooks or projects.
Users can save the workspace Azure Resource Manager (ARM) properties using the
[workspace.write_config](https://aka.ms/ml-workspace-class) method,
and use this method to load the same workspace in different Python notebooks or projects without
retyping the workspace ARM properties.
:param credential: The credential object for the workspace.
:type credential: ~azure.core.credentials.TokenCredential
:param path: The path to the config file or starting directory to search.
The parameter defaults to starting the search in the current directory.
optional
:type path: typing.Union[os.PathLike, str]
:param file_name: Allows overriding the config file name to search for when path is a directory path.
(Default value = None)
:type file_name: str
"""
ml_client = MLClient.from_config(credential=credential, path=path, file_name=file_name, **kwargs)
return PFClient(
ml_client=ml_client,
**kwargs,
)
def run(
self,
flow: Union[str, PathLike],
*,
data: Union[str, PathLike] = None,
run: Union[str, Run] = None,
column_mapping: dict = None,
variant: str = None,
connections: dict = None,
environment_variables: dict = None,
name: str = None,
display_name: str = None,
tags: Dict[str, str] = None,
**kwargs,
) -> Run:
"""Run flow against provided data or run.
.. note:: at least one of data or run must be provided.
.. admonition::
            Data can be a local file or a remote path.
- Example:
- `data = "path/to/local/file"`
- `data = "azureml:data_name:data_version"`
- `data = "azureml://datastores/datastore_name/path/to/file"`
- `data = "https://example.com/data.jsonl"`
Column mapping is a mapping from flow input name to specified values.
If specified, the flow will be executed with provided value for specified inputs.
The value can be:
- from data:
- ``data.col1``
- from run:
- ``run.inputs.col1``: if need reference run's inputs
- ``run.output.col1``: if need reference run's outputs
- Example:
- ``{"ground_truth": "${data.answer}", "prediction": "${run.outputs.answer}"}``
:param flow: path to flow directory to run evaluation
:type flow: Union[str, PathLike]
:param data: pointer to test data (of variant bulk runs) for eval runs
:type data: Union[str, PathLike]
:param run: flow run id or flow run, keep lineage between current run and variant runs,
batch outputs can be referenced as ${run.outputs.col_name} in inputs_mapping
:type run: Union[str, ~promptflow.entities.Run]
:param column_mapping: define a data flow logic to map input data.
:type column_mapping: dict
:param variant: Node & variant name in format of ${node_name.variant_name}, will use default variant
if not specified.
:type variant: str
:param connections: Overwrite node level connections with provided value.
Example: ``{"node1": {"connection": "new_connection", "deployment_name": "gpt-35-turbo"}}``
:type connections: dict
:param environment_variables: Environment variables to set by specifying a property path and value.
Example: ``{"key1": "${my_connection.api_key}", "key2"="value2"}``
The value reference to connection keys will be resolved to the actual value,
and all environment variables specified will be set into os.environ.
:type environment_variables: dict
:param name: Name of the run.
:type name: str
:param display_name: Display name of the run.
:type display_name: str
:param tags: Tags of the run.
:type tags: Dict[str, str]
:return: flow run info.
:rtype: ~promptflow.entities.Run
"""
# TODO(2887134): support cloud eager Run CRUD
run = Run(
name=name,
display_name=display_name,
tags=tags,
data=data,
column_mapping=column_mapping,
run=run,
variant=variant,
flow=flow,
connections=connections,
environment_variables=environment_variables,
)
return self.runs.create_or_update(run=run, **kwargs)
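    # Illustrative usage sketch (not part of the original class); the identifiers and paths
    # below are hypothetical and only show the expected call shape.
    #
    #   from azure.identity import DefaultAzureCredential
    #
    #   pf = PFClient(
    #       credential=DefaultAzureCredential(),
    #       subscription_id="<subscription-id>",
    #       resource_group_name="<resource-group>",
    #       workspace_name="<workspace>",
    #   )
    #   run = pf.run(
    #       flow="path/to/flow",
    #       data="path/to/data.jsonl",
    #       column_mapping={"question": "${data.question}"},
    #   )
    #   pf.stream(run)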
def stream(self, run: Union[str, Run], raise_on_error: bool = True) -> Run:
"""Stream run logs to the console.
:param run: Run object or name of the run.
:type run: Union[str, ~promptflow.sdk.entities.Run]
:param raise_on_error: Raises an exception if a run fails or canceled.
:type raise_on_error: bool
:return: flow run info.
"""
if isinstance(run, Run):
run = run.name
return self.runs.stream(run, raise_on_error)
def get_details(
self, run: Union[str, Run], max_results: int = MAX_SHOW_DETAILS_RESULTS, all_results: bool = False
) -> "DataFrame":
"""Get the details from the run including inputs and outputs.
.. note::
If `all_results` is set to True, `max_results` will be overwritten to sys.maxsize.
:param run: The run name or run object
:type run: Union[str, ~promptflow.sdk.entities.Run]
:param max_results: The max number of runs to return, defaults to 100
:type max_results: int
:param all_results: Whether to return all results, defaults to False
:type all_results: bool
:raises RunOperationParameterError: If `max_results` is not a positive integer.
:return: The details data frame.
:rtype: pandas.DataFrame
"""
return self.runs.get_details(run=run, max_results=max_results, all_results=all_results)
def get_metrics(self, run: Union[str, Run]) -> dict:
"""Print run metrics to the console.
:param run: Run object or name of the run.
:type run: Union[str, ~promptflow.sdk.entities.Run]
:return: The run's metrics
:rtype: dict
"""
if isinstance(run, Run):
run = run.name
return self.runs.get_metrics(run=run)
def visualize(self, runs: Union[List[str], List[Run]]) -> None:
"""Visualize run(s).
:param run: Run object or name of the run.
:type run: Union[str, ~promptflow.sdk.entities.Run]
"""
self.runs.visualize(runs)
| promptflow/src/promptflow/promptflow/azure/_pf_client.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_pf_client.py",
"repo_id": "promptflow",
"token_count": 5475
} | 39 |
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import datetime
import functools
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator_async import distributed_trace_async
from ... import models as _models
from ..._vendor import _convert_request
from ...operations._flow_runs_admin_operations import build_batch_update_service_logs_request, build_check_policy_validation_async_request, build_get_storage_info_request, build_log_flow_run_event_request, build_log_flow_run_event_v2_request, build_log_flow_run_terminated_event_request, build_log_result_for_bulk_run_request, build_send_policy_validation_async_request, build_submit_bulk_run_async_request, build_update_service_logs_request
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class FlowRunsAdminOperations:
"""FlowRunsAdminOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace_async
async def submit_bulk_run_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
error_handling_mode: Optional[Union[str, "_models.ErrorHandlingMode"]] = None,
**kwargs: Any
) -> "_models.SubmitBulkRunResponse":
"""submit_bulk_run_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param error_handling_mode:
:type error_handling_mode: str or ~flow.models.ErrorHandlingMode
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SubmitBulkRunResponse, or the result of cls(response)
:rtype: ~flow.models.SubmitBulkRunResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SubmitBulkRunResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_submit_bulk_run_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
error_handling_mode=error_handling_mode,
template_url=self.submit_bulk_run_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('SubmitBulkRunResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
submit_bulk_run_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/submit'} # type: ignore
@distributed_trace_async
async def send_policy_validation_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> "_models.PolicyValidationResponse":
"""send_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_send_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.send_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
send_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace_async
async def check_policy_validation_async(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> "_models.PolicyValidationResponse":
"""check_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_check_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.check_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
check_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace_async
async def log_result_for_bulk_run(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
**kwargs: Any
) -> List["_models.KeyValuePairStringObject"]:
"""log_result_for_bulk_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of KeyValuePairStringObject, or the result of cls(response)
:rtype: list[~flow.models.KeyValuePairStringObject]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.KeyValuePairStringObject"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_result_for_bulk_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.log_result_for_bulk_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[KeyValuePairStringObject]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_result_for_bulk_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/LogResult'} # type: ignore
@distributed_trace_async
async def get_storage_info(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> "_models.StorageInfo":
"""get_storage_info.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: StorageInfo, or the result of cls(response)
:rtype: ~flow.models.StorageInfo
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.StorageInfo"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_storage_info_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_storage_info.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('StorageInfo', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_storage_info.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/storageInfo'} # type: ignore
@distributed_trace_async
async def log_flow_run_event(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
runtime_version: str,
**kwargs: Any
) -> str:
"""log_flow_run_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/runtime/{runtimeVersion}/logEvent'} # type: ignore
@distributed_trace_async
async def log_flow_run_event_v2(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
runtime_version: Optional[str] = None,
**kwargs: Any
) -> str:
"""log_flow_run_event_v2.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_v2_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event_v2.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event_v2.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logEvent'} # type: ignore
@distributed_trace_async
async def log_flow_run_terminated_event(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
flow_run_id: str,
last_checked_time: Optional[datetime.datetime] = None,
**kwargs: Any
) -> "_models.LogRunTerminatedEventDto":
"""log_flow_run_terminated_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param last_checked_time:
:type last_checked_time: ~datetime.datetime
:keyword callable cls: A custom type or function that will be passed the direct response
:return: LogRunTerminatedEventDto, or the result of cls(response)
:rtype: ~flow.models.LogRunTerminatedEventDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.LogRunTerminatedEventDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_terminated_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
last_checked_time=last_checked_time,
template_url=self.log_flow_run_terminated_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('LogRunTerminatedEventDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_terminated_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logTerminatedEvent'} # type: ignore
@distributed_trace_async
async def update_service_logs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
body: Optional["_models.ServiceLogRequest"] = None,
**kwargs: Any
) -> "_models.Task":
"""update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: ~flow.models.ServiceLogRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'ServiceLogRequest')
else:
_json = None
request = build_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs'} # type: ignore
@distributed_trace_async
async def batch_update_service_logs(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
flow_id: str,
bulk_run_id: str,
body: Optional[List["_models.ServiceLogRequest"]] = None,
**kwargs: Any
) -> "_models.Task":
"""batch_update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: list[~flow.models.ServiceLogRequest]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, '[ServiceLogRequest]')
else:
_json = None
request = build_batch_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.batch_update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
batch_update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs/batch'} # type: ignore
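# Illustrative usage sketch (not part of the generated code): a minimal async helper
# showing how this operation group might be invoked. The attribute name
# `flow_runs_admin` on the generated async service client and the placeholder
# argument values below are assumptions for illustration only.
async def _example_submit_bulk_run(client):
    """Submit a bulk run through the FlowRunsAdmin operation group (sketch)."""
    # `client` is expected to be an already-authenticated async service client
    # generated alongside this module; the call mirrors submit_bulk_run_async above.
    return await client.flow_runs_admin.submit_bulk_run_async(
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace>",
        flow_id="<flow-id>",
        bulk_run_id="<bulk-run-id>",
    )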
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_flow_runs_admin_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_flow_runs_admin_operations.py",
"repo_id": "promptflow",
"token_count": 12446
} | 40 |
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import datetime
import functools
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from msrest import Serializer
from .. import models as _models
from .._vendor import _convert_request, _format_url_section
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
# fmt: off
def build_submit_bulk_run_async_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
error_handling_mode = kwargs.pop('error_handling_mode', None) # type: Optional[Union[str, "_models.ErrorHandlingMode"]]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/submit')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"bulkRunId": _SERIALIZER.url("bulk_run_id", bulk_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if error_handling_mode is not None:
query_parameters['errorHandlingMode'] = _SERIALIZER.query("error_handling_mode", error_handling_mode, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_send_policy_validation_async_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"bulkRunId": _SERIALIZER.url("bulk_run_id", bulk_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
def build_check_policy_validation_async_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"bulkRunId": _SERIALIZER.url("bulk_run_id", bulk_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_log_result_for_bulk_run_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/LogResult')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"bulkRunId": _SERIALIZER.url("bulk_run_id", bulk_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_storage_info_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/storageInfo')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_log_flow_run_event_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
runtime_version, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/runtime/{runtimeVersion}/logEvent')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
"runtimeVersion": _SERIALIZER.url("runtime_version", runtime_version, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
def build_log_flow_run_event_v2_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
runtime_version = kwargs.pop('runtime_version', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logEvent')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if runtime_version is not None:
query_parameters['runtimeVersion'] = _SERIALIZER.query("runtime_version", runtime_version, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_log_flow_run_terminated_event_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
last_checked_time = kwargs.pop('last_checked_time', None) # type: Optional[datetime.datetime]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logTerminatedEvent')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if last_checked_time is not None:
query_parameters['lastCheckedTime'] = _SERIALIZER.query("last_checked_time", last_checked_time, 'iso-8601')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_update_service_logs_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"bulkRunId": _SERIALIZER.url("bulk_run_id", bulk_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
def build_batch_update_service_logs_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs/batch')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"bulkRunId": _SERIALIZER.url("bulk_run_id", bulk_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
# fmt: on
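# Illustrative sketch (not part of the generated code): the builders above only
# assemble an azure.core.rest.HttpRequest; nothing is sent until the request is
# pushed through a client pipeline. The placeholder values are assumptions for
# illustration only.
def _example_build_storage_info_request():
    # type: () -> HttpRequest
    """Build (without sending) the storageInfo request to inspect its method and URL."""
    request = build_get_storage_info_request(
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace>",
    )
    # The returned request has method "GET" and a URL containing the formatted
    # resource path; the service host is added later by the client pipeline.
    return request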
class FlowRunsAdminOperations(object):
"""FlowRunsAdminOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace
def submit_bulk_run_async(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
error_handling_mode=None, # type: Optional[Union[str, "_models.ErrorHandlingMode"]]
**kwargs # type: Any
):
# type: (...) -> "_models.SubmitBulkRunResponse"
"""submit_bulk_run_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param error_handling_mode:
:type error_handling_mode: str or ~flow.models.ErrorHandlingMode
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SubmitBulkRunResponse, or the result of cls(response)
:rtype: ~flow.models.SubmitBulkRunResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SubmitBulkRunResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_submit_bulk_run_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
error_handling_mode=error_handling_mode,
template_url=self.submit_bulk_run_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('SubmitBulkRunResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
submit_bulk_run_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/submit'} # type: ignore
@distributed_trace
def send_policy_validation_async(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.PolicyValidationResponse"
"""send_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_send_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.send_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
send_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace
def check_policy_validation_async(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.PolicyValidationResponse"
"""check_policy_validation_async.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: PolicyValidationResponse, or the result of cls(response)
:rtype: ~flow.models.PolicyValidationResponse
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.PolicyValidationResponse"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_check_policy_validation_async_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.check_policy_validation_async.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('PolicyValidationResponse', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
check_policy_validation_async.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/policy'} # type: ignore
@distributed_trace
def log_result_for_bulk_run(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> List["_models.KeyValuePairStringObject"]
"""log_result_for_bulk_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of KeyValuePairStringObject, or the result of cls(response)
:rtype: list[~flow.models.KeyValuePairStringObject]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.KeyValuePairStringObject"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_result_for_bulk_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
template_url=self.log_result_for_bulk_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[KeyValuePairStringObject]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_result_for_bulk_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/LogResult'} # type: ignore
@distributed_trace
def get_storage_info(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.StorageInfo"
"""get_storage_info.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: StorageInfo, or the result of cls(response)
:rtype: ~flow.models.StorageInfo
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.StorageInfo"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_storage_info_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_storage_info.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('StorageInfo', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_storage_info.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/storageInfo'} # type: ignore
@distributed_trace
def log_flow_run_event(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
runtime_version, # type: str
**kwargs # type: Any
):
# type: (...) -> str
"""log_flow_run_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/runtime/{runtimeVersion}/logEvent'} # type: ignore
@distributed_trace
def log_flow_run_event_v2(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
runtime_version=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> str
"""log_flow_run_event_v2.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param runtime_version:
:type runtime_version: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_event_v2_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
runtime_version=runtime_version,
template_url=self.log_flow_run_event_v2.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_event_v2.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logEvent'} # type: ignore
@distributed_trace
def log_flow_run_terminated_event(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
last_checked_time=None, # type: Optional[datetime.datetime]
**kwargs # type: Any
):
# type: (...) -> "_models.LogRunTerminatedEventDto"
"""log_flow_run_terminated_event.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param last_checked_time:
:type last_checked_time: ~datetime.datetime
:keyword callable cls: A custom type or function that will be passed the direct response
:return: LogRunTerminatedEventDto, or the result of cls(response)
:rtype: ~flow.models.LogRunTerminatedEventDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.LogRunTerminatedEventDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_log_flow_run_terminated_event_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
last_checked_time=last_checked_time,
template_url=self.log_flow_run_terminated_event.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('LogRunTerminatedEventDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
log_flow_run_terminated_event.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/flowRuns/{flowRunId}/logTerminatedEvent'} # type: ignore
@distributed_trace
def update_service_logs(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
body=None, # type: Optional["_models.ServiceLogRequest"]
**kwargs # type: Any
):
# type: (...) -> "_models.Task"
"""update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: ~flow.models.ServiceLogRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'ServiceLogRequest')
else:
_json = None
request = build_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs'} # type: ignore
@distributed_trace
def batch_update_service_logs(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_run_id, # type: str
body=None, # type: Optional[List["_models.ServiceLogRequest"]]
**kwargs # type: Any
):
# type: (...) -> "_models.Task"
"""batch_update_service_logs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param bulk_run_id:
:type bulk_run_id: str
:param body:
:type body: list[~flow.models.ServiceLogRequest]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Task, or the result of cls(response)
:rtype: ~flow.models.Task
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Task"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, '[ServiceLogRequest]')
else:
_json = None
request = build_batch_update_service_logs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_run_id=bulk_run_id,
content_type=content_type,
json=_json,
template_url=self.batch_update_service_logs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('Task', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
batch_update_service_logs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowRunsAdmin/{flowId}/bulkRuns/{bulkRunId}/serviceLogs/batch'} # type: ignore
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flow_runs_admin_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flow_runs_admin_operations.py",
"repo_id": "promptflow",
"token_count": 18701
} | 41 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import jwt
from promptflow.exceptions import ValidationException
def is_arm_id(obj) -> bool:
return isinstance(obj, str) and obj.startswith("azureml://")
def get_token(credential, resource) -> str:
from azure.ai.ml._azure_environments import _resource_to_scopes
azure_ml_scopes = _resource_to_scopes(resource)
token = credential.get_token(*azure_ml_scopes).token
# validate token has aml audience
decoded_token = jwt.decode(
token,
options={"verify_signature": False, "verify_aud": False},
)
if decoded_token.get("aud") != resource:
msg = """AAD token with aml scope could not be fetched using the credentials being used.
Please validate if token with {0} scope can be fetched using credentials provided to PFClient.
Token with {0} scope can be fetched using credentials.get_token({0})
"""
raise ValidationException(
message=msg.format(*azure_ml_scopes),
)
return token
def get_aml_token(credential) -> str:
from azure.ai.ml._azure_environments import _get_aml_resource_id_from_metadata
resource = _get_aml_resource_id_from_metadata()
return get_token(credential, resource)
def get_arm_token(credential) -> str:
from azure.ai.ml._azure_environments import _get_base_url_from_metadata
resource = _get_base_url_from_metadata()
return get_token(credential, resource)
def get_authorization(credential=None) -> str:
token = get_arm_token(credential=credential)
return "Bearer " + token
def get_user_alias_from_credential(credential):
token = get_arm_token(credential=credential)
decode_json = jwt.decode(token, options={"verify_signature": False, "verify_aud": False})
try:
email = decode_json.get("upn", decode_json.get("email", None))
return email.split("@")[0]
except Exception:
        # fall back to oid when upn cannot be retrieved, e.g. for a service principal
return decode_json["oid"]
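# Illustrative usage sketch (not part of the original module); assumes the
# azure-identity package is installed and an Azure CLI login is available:
#
#     from azure.identity import AzureCliCredential
#
#     credential = AzureCliCredential()
#     auth_header = get_authorization(credential=credential)   # "Bearer <token>"
#     alias = get_user_alias_from_credential(credential)       # e.g. "jdoe" for jdoe@contoso.com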
| promptflow/src/promptflow/promptflow/azure/_utils/gerneral.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_utils/gerneral.py",
"repo_id": "promptflow",
"token_count": 783
} | 42 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from pathlib import Path
from typing import Any, List, Mapping, Optional
from promptflow._core._errors import UnexpectedError
from promptflow._core.operation_context import OperationContext
from promptflow._core.run_tracker import RunTracker
from promptflow._utils.logger_utils import bulk_logger
from promptflow.batch._base_executor_proxy import AbstractExecutorProxy
from promptflow.contracts.run_mode import RunMode
from promptflow.executor import FlowExecutor
from promptflow.executor._line_execution_process_pool import LineExecutionProcessPool
from promptflow.executor._result import AggregationResult, LineResult
from promptflow.executor._script_executor import ScriptExecutor
from promptflow.storage._run_storage import AbstractRunStorage
class PythonExecutorProxy(AbstractExecutorProxy):
def __init__(self, flow_executor: FlowExecutor):
self._flow_executor = flow_executor
@classmethod
async def create(
cls,
flow_file: Path,
working_dir: Optional[Path] = None,
*,
connections: Optional[dict] = None,
entry: Optional[str] = None,
storage: Optional[AbstractRunStorage] = None,
**kwargs,
) -> "PythonExecutorProxy":
flow_executor = FlowExecutor.create(
flow_file, connections, working_dir, entry=entry, storage=storage, raise_ex=False
)
return cls(flow_executor)
async def exec_aggregation_async(
self,
batch_inputs: Mapping[str, Any],
aggregation_inputs: Mapping[str, Any],
run_id: Optional[str] = None,
) -> AggregationResult:
with self._flow_executor._run_tracker.node_log_manager:
return self._flow_executor._exec_aggregation(batch_inputs, aggregation_inputs, run_id=run_id)
def _exec_batch(
self,
batch_inputs: List[Mapping[str, Any]],
output_dir: Path,
run_id: Optional[str] = None,
batch_timeout_sec: Optional[int] = None,
line_timeout_sec: Optional[int] = None,
) -> List[LineResult]:
# TODO: Refine the logic here since the script executor actually doesn't have the 'node' concept
if isinstance(self._flow_executor, ScriptExecutor):
run_tracker = RunTracker(self._flow_executor._storage)
else:
run_tracker = self._flow_executor._run_tracker
with run_tracker.node_log_manager:
OperationContext.get_instance().run_mode = RunMode.Batch.name
if self._flow_executor._flow_file is None:
raise UnexpectedError(
"Unexpected error occurred while init FlowExecutor. Error details: flow file is missing."
)
if batch_timeout_sec:
bulk_logger.info(f"The timeout for the batch run is {batch_timeout_sec} seconds.")
with LineExecutionProcessPool(
self._flow_executor,
len(batch_inputs),
run_id,
output_dir,
batch_timeout_sec=batch_timeout_sec,
line_timeout_sec=line_timeout_sec,
) as pool:
line_number = [batch_input["line_number"] for batch_input in batch_inputs]
line_results = pool.run(zip(line_number, batch_inputs))
# For bulk run, currently we need to add line results to run_tracker
self._flow_executor._add_line_results(line_results, run_tracker)
return line_results
def get_inputs_definition(self):
return self._flow_executor.get_inputs_definition()
@classmethod
def _get_tool_metadata(cls, flow_file: Path, working_dir: Path) -> dict:
from promptflow._sdk._utils import generate_flow_tools_json
return generate_flow_tools_json(
flow_directory=working_dir,
dump=False,
used_packages_only=True,
)
| promptflow/src/promptflow/promptflow/batch/_python_executor_proxy.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/batch/_python_executor_proxy.py",
"repo_id": "promptflow",
"token_count": 1619
} | 43 |
import os
from dataclasses import dataclass
from functools import partial
from pathlib import Path
from typing import Callable, Dict, Optional
from promptflow.contracts.flow import InputAssignment, Node, ToolSource
from promptflow.contracts.tool import ToolType
from promptflow.exceptions import ErrorTarget
from promptflow.executor._docstring_parser import DocstringParser
from promptflow.executor._errors import UnsupportedAssistantToolType
from promptflow.executor._tool_resolver import ToolResolver
@dataclass
class AssistantTool:
name: str
openai_definition: dict
func: Callable
class AssistantToolInvoker:
def __init__(self, working_dir: Optional[Path] = None):
self._working_dir = working_dir or Path(os.getcwd())
self._assistant_tools: Dict[str, AssistantTool] = {}
@classmethod
def init(cls, tools: list, working_dir: Optional[Path] = None):
invoker = cls(working_dir=working_dir)
invoker._load_tools(tools)
return invoker
def _load_tools(self, tools: list):
for tool in tools:
if tool["type"] in ("code_interpreter", "retrieval"):
self._assistant_tools[tool["type"]] = AssistantTool(
name=tool["type"], openai_definition=tool, func=None
)
elif tool["type"] == "function":
function_tool = self._load_tool_as_function(tool)
self._assistant_tools[function_tool.name] = function_tool
else:
raise UnsupportedAssistantToolType(
message_format="Unsupported assistant tool type: {tool_type}",
tool_type=tool["type"],
target=ErrorTarget.EXECUTOR,
)
def _load_tool_as_function(self, tool: dict):
tool_resolver = ToolResolver(self._working_dir)
node, predefined_inputs = self._generate_node_for_tool(tool)
resolved_tool = tool_resolver.resolve_tool_by_node(node, convert_input_types=False)
func_name = resolved_tool.definition.function
definition = self._generate_tool_definition(
func_name, resolved_tool.definition.description, predefined_inputs
)
if resolved_tool.node.inputs:
inputs = {name: value.value for name, value in resolved_tool.node.inputs.items()}
func = partial(resolved_tool.callable, **inputs)
else:
func = resolved_tool.callable
return AssistantTool(name=func_name, openai_definition=definition, func=func)
def _generate_node_for_tool(self, tool: dict):
predefined_inputs = {}
for input_name, value in tool.get("predefined_inputs", {}).items():
predefined_inputs[input_name] = InputAssignment.deserialize(value)
node = Node(
name="assistant_node",
tool="assistant_tool",
inputs=predefined_inputs,
source=ToolSource.deserialize(tool["source"]) if "source" in tool else None,
type=ToolType.PYTHON if "tool_type" in tool and tool["tool_type"] == "python" else None,
)
return node, list(predefined_inputs.keys())
def invoke_tool(self, func_name, kwargs):
return self._assistant_tools[func_name].func(**kwargs)
def to_openai_tools(self):
return [tool.openai_definition for tool in self._assistant_tools.values()]
def _generate_tool_definition(self, func_name: str, description: str, predefined_inputs: list) -> dict:
to_openai_type = {
"str": "string", "int": "number", "float": "number", "bool": "boolean", "list": "array", "dict": "object"
}
description, params = DocstringParser.parse(description)
for input in predefined_inputs:
if input in params:
params.pop(input)
for _, param in params.items():
param["type"] = to_openai_type[param["type"]] if param["type"] in to_openai_type else param["type"]
return {
"type": "function",
"function": {
"name": func_name,
"description": description,
"parameters": {
"type": "object",
"properties": params,
"required": list(params.keys())
}
}
}
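    # Illustrative example (assumption, not from the original source): for a tool
    # function `def search(query: str)` whose docstring documents a `query` parameter,
    # _generate_tool_definition would produce a dict shaped roughly like:
    #
    #     {
    #         "type": "function",
    #         "function": {
    #             "name": "search",
    #             "description": "Search for a query.",
    #             "parameters": {
    #                 "type": "object",
    #                 "properties": {"query": {"type": "string", ...}},
    #                 "required": ["query"],
    #             },
    #         },
    #     }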
| promptflow/src/promptflow/promptflow/executor/_assistant_tool_invoker.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/executor/_assistant_tool_invoker.py",
"repo_id": "promptflow",
"token_count": 1870
} | 44 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from enum import Enum
from typing import Any, Dict, List, Optional, Union
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction, AgentFinish, LLMResult
from promptflow._core.tracer import Trace, Tracer, TraceType
class LangChainEventType(Enum):
LLM = "LLM", 0
CHAIN = "CHAIN", 1
TOOL = "TOOL", 2
AGENT = "AGENT", 3
def __init__(self, _: str, level: int):
self._level = level
def __lt__(self, other: "LangChainEventType"):
return self._level < other._level
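    # Ordering reflects nesting depth: LangChainEventType.LLM < LangChainEventType.CHAIN
    # is True, because an LLM call is expected to be nested inside a chain, which may in
    # turn be nested inside a tool or agent event.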
class PromptFlowCallbackHandler(BaseCallbackHandler):
""":class:`~promptflow.integrations.langchain.PromptFlowCallbackHandler` implements the
`langchain.callbacks.base.BaseCallbackHandler` interface, which has a method for each event that
can be subscribed to. The appropriate method will be called on the handler when the event is triggered.
"""
def __init__(self):
super().__init__()
self._tracer = Tracer.active_instance()
self._events_stack = [] # Use this to track the current event type to avoid popping the wrong event
@property
def always_verbose(self) -> bool:
"""Whether to always be verbose."""
return True
def _push(self, trace: Trace):
if not self._tracer:
return
self._tracer._push(trace)
def _pop(self, output=None, error: Optional[Exception] = None, event_type: Optional[LangChainEventType] = None):
"""Pop the trace from the trace stack.
        PromptFlowCallbackHandler assumes that the langchain events are called in pairs, with a corresponding
start and end event. However, this is not always true. Therefore, this function uses the event stack to
keep track of the current event type, in order to avoid popping the wrong event.
The function performs the following steps:
1. If the trace stack is empty, it simply returns without popping anything.
2. If the event type is None, it pops the top of the trace stack.
3. If the top of the event stack is equal to the given event type, it pops the top of the event stack
and trace stack.
4. If the top of the event stack is less than the given event type, indicating the previous event
without a corresponding end, it first pops the top of the event stack and then recursively calls the
_pop function to continue popping until the correct event type is found.
5. If the top of the event stack is greater than the given event type, indicating the current event
without a corresponding start, it simply returns without popping anything.
By following this approach, the function ensures that only the correct events are popped from the stacks.
"""
if not self._tracer:
return
if not event_type:
self._tracer._pop(output, error)
else:
if not self._events_stack:
return
if self._events_stack[-1] == event_type:
self._events_stack.pop()
self._tracer._pop(output, error)
elif self._events_stack[-1] < event_type:
self._events_stack.pop()
self._tracer._pop()
self._pop(output, error, event_type)
else:
return
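    # Illustrative walk-through (assumption, for readability): with _events_stack ==
    # [CHAIN, LLM], calling _pop(event_type=LangChainEventType.CHAIN) first force-pops
    # the unmatched LLM trace (step 4 above), then pops the CHAIN trace (step 3),
    # keeping the event stack and the tracer's trace stack aligned.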
def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
"""Run when LLM starts running.
:param serialized: The serialized LLM object.
:type serialized: Dict[str, Any]
:param prompts: The prompts used to run LLM.
:type prompts: List[str]
"""
name = self._get_name(serialized) or "LLM"
trace = Trace(name, TraceType.LANGCHAIN, {"prompts": prompts})
self._events_stack.append(LangChainEventType.LLM)
self._push(trace)
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Run on new LLM token. Only available when streaming is enabled.
:param token: The new token.
:type token: str
"""
        pass  # We do not handle this event
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when LLM ends running.
:param response: The response from LLM.
:type response: LLMResult
"""
output = response
self._pop(output, event_type=LangChainEventType.LLM)
def on_llm_error(self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any) -> None:
"""Run when LLM errors.
:param error: The error from LLM.
:type error: Union[Exception, KeyboardInterrupt]
"""
self._pop(error=error, event_type=LangChainEventType.LLM)
def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) -> None:
"""Run when chain starts running.
:param serialized: The serialized chain object.
:type serialized: Dict[str, Any]
:param inputs: The inputs used to run chain.
:type inputs: Dict[str, Any]
"""
name = self._get_name(serialized) or "Chain"
trace = Trace(name, TraceType.LANGCHAIN, inputs)
self._events_stack.append(LangChainEventType.CHAIN)
self._push(trace)
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
"""Run when chain ends running.
:param outputs: The outputs from chain.
:type outputs: Dict[str, Any]
"""
self._pop(outputs, event_type=LangChainEventType.CHAIN)
def on_chain_error(self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any) -> None:
"""Run when chain errors.
:param error: The error from chain.
:type error: Union[Exception, KeyboardInterrupt]
"""
self._pop(error=error, event_type=LangChainEventType.CHAIN)
def on_tool_start(self, serialized: Dict[str, Any], input_str: str, **kwargs: Any) -> None:
"""Run when tool starts running.
:param serialized: The serialized tool object.
:type serialized: Dict[str, Any]
:param input_str: The input string used to run tool.
:type input_str: str
"""
name = self._get_name(serialized) or "Tool"
trace = Trace(name, TraceType.LANGCHAIN, {"input_str": input_str})
self._events_stack.append(LangChainEventType.TOOL)
self._push(trace)
def on_tool_end(self, output: str, **kwargs: Any) -> None:
"""Run when tool ends running.
:param output: The output from tool.
:type output: str
"""
self._pop(output, event_type=LangChainEventType.TOOL)
def on_tool_error(self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any) -> None:
"""Run when tool errors.
:param error: The error from tool.
:type error: Union[Exception, KeyboardInterrupt]
"""
self._pop(error=error, event_type=LangChainEventType.TOOL)
def on_text(self, text: str, **kwargs: Any) -> None:
"""Run on arbitrary text.
:param text: The text.
:type text: str
"""
pass
def on_agent_action(self, action: AgentAction, **kwargs: Any) -> None:
"""Run on agent action.
:param action: The action from agent.
:type action: AgentAction
"""
name = action.tool
trace = Trace(name, TraceType.LANGCHAIN, {"tool_input": action.tool_input})
self._events_stack.append(LangChainEventType.AGENT)
self._push(trace)
def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:
"""Run on agent end.
:param finish: The finish from agent.
:type finish: AgentFinish
"""
output = finish.return_values
self._pop(output, event_type=LangChainEventType.AGENT)
def _get_name(self, serialized: Dict[str, Any]):
# For version 0.0.197 and earlier, the name is stored in the "name" field,
# and for later versions, the name is stored in the "id" field.
# If none exists, return None and use a default name.
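        # Example serialized payloads (illustrative): {"name": "OpenAI"} for older
        # versions vs. {"id": ["langchain", "llms", "openai", "OpenAI"]} for newer ones,
        # where the last element of "id" is used as the display name.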
if "name" in serialized.keys():
return serialized["name"]
elif "id" in serialized.keys() and isinstance(serialized["id"], list):
return serialized["id"][-1]
else:
return None
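# Illustrative usage sketch (assumption; the exact invocation depends on the langchain
# version and chain type):
#
#     handler = PromptFlowCallbackHandler()
#     llm_chain.run("What is prompt flow?", callbacks=[handler])
#
# Each paired start/end callback becomes a Trace pushed to and popped from the active
# promptflow Tracer, so the langchain execution shows up in the flow's trace output.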
| promptflow/src/promptflow/promptflow/integrations/langchain.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/integrations/langchain.py",
"repo_id": "promptflow",
"token_count": 3372
} | 45 |
import asyncio
import multiprocessing
import os
import uuid
from pathlib import Path
from tempfile import mkdtemp
import pytest
from promptflow._utils.utils import dump_list_to_jsonl
from promptflow.batch._batch_engine import OUTPUT_FILE_NAME, BatchEngine
from promptflow.batch._errors import EmptyInputsData
from promptflow.batch._result import BatchResult
from promptflow.contracts.run_info import Status
from promptflow.executor._errors import InputNotFound
from ..utils import (
MemoryRunStorage,
get_flow_expected_metrics,
get_flow_expected_status_summary,
get_flow_folder,
get_flow_inputs_file,
get_flow_sample_inputs,
get_yaml_file,
load_jsonl,
)
SAMPLE_FLOW = "web_classification_no_variants"
SAMPLE_EVAL_FLOW = "classification_accuracy_evaluation"
SAMPLE_FLOW_WITH_PARTIAL_FAILURE = "python_tool_partial_failure"
async def async_submit_batch_run(flow_folder, inputs_mapping, connections):
batch_result = submit_batch_run(flow_folder, inputs_mapping, connections=connections)
await asyncio.sleep(1)
return batch_result
def run_batch_with_start_method(multiprocessing_start_method, flow_folder, inputs_mapping, dev_connections):
os.environ["PF_BATCH_METHOD"] = multiprocessing_start_method
batch_result, output_dir = submit_batch_run(
flow_folder, inputs_mapping, connections=dev_connections, return_output_dir=True
)
assert isinstance(batch_result, BatchResult)
nlines = get_batch_inputs_line(flow_folder)
assert batch_result.total_lines == nlines
assert batch_result.completed_lines == nlines
assert batch_result.start_time < batch_result.end_time
assert batch_result.system_metrics.duration > 0
outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
assert len(outputs) == nlines
for i, output in enumerate(outputs):
assert isinstance(output, dict)
assert "line_number" in output, f"line_number is not in {i}th output {output}"
assert output["line_number"] == i, f"line_number is not correct in {i}th output {output}"
def submit_batch_run(
flow_folder,
inputs_mapping,
*,
input_dirs={},
input_file_name="samples.json",
run_id=None,
connections={},
storage=None,
return_output_dir=False,
):
batch_engine = BatchEngine(
get_yaml_file(flow_folder), get_flow_folder(flow_folder), connections=connections, storage=storage
)
if not input_dirs and inputs_mapping:
input_dirs = {"data": get_flow_inputs_file(flow_folder, file_name=input_file_name)}
output_dir = Path(mkdtemp())
if return_output_dir:
return batch_engine.run(input_dirs, inputs_mapping, output_dir, run_id=run_id), output_dir
return batch_engine.run(input_dirs, inputs_mapping, output_dir, run_id=run_id)
def get_batch_inputs_line(flow_folder, sample_inputs_file="samples.json"):
inputs = get_flow_sample_inputs(flow_folder, sample_inputs_file=sample_inputs_file)
return len(inputs)
@pytest.mark.usefixtures("use_secrets_config_file", "dev_connections")
@pytest.mark.e2etest
class TestBatch:
def test_batch_storage(self, dev_connections):
mem_run_storage = MemoryRunStorage()
run_id = str(uuid.uuid4())
inputs_mapping = {"url": "${data.url}"}
batch_result = submit_batch_run(
SAMPLE_FLOW, inputs_mapping, run_id=run_id, connections=dev_connections, storage=mem_run_storage
)
nlines = get_batch_inputs_line(SAMPLE_FLOW)
assert batch_result.total_lines == nlines
assert batch_result.completed_lines == nlines
assert len(mem_run_storage._flow_runs) == nlines
assert all(flow_run_info.status == Status.Completed for flow_run_info in mem_run_storage._flow_runs.values())
assert all(node_run_info.status == Status.Completed for node_run_info in mem_run_storage._node_runs.values())
@pytest.mark.parametrize(
"flow_folder, inputs_mapping",
[
(
SAMPLE_FLOW,
{"url": "${data.url}"},
),
(
"prompt_tools",
{"text": "${data.text}"},
),
(
"script_with___file__",
{"text": "${data.text}"},
),
(
"sample_flow_with_functions",
{"question": "${data.question}"},
),
],
)
def test_batch_run(self, flow_folder, inputs_mapping, dev_connections):
batch_result, output_dir = submit_batch_run(
flow_folder, inputs_mapping, connections=dev_connections, return_output_dir=True
)
assert isinstance(batch_result, BatchResult)
nlines = get_batch_inputs_line(flow_folder)
assert batch_result.total_lines == nlines
assert batch_result.completed_lines == nlines
assert batch_result.start_time < batch_result.end_time
assert batch_result.system_metrics.duration > 0
outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
assert len(outputs) == nlines
for i, output in enumerate(outputs):
assert isinstance(output, dict)
assert "line_number" in output, f"line_number is not in {i}th output {output}"
assert output["line_number"] == i, f"line_number is not correct in {i}th output {output}"
@pytest.mark.parametrize(
"flow_folder, inputs_mapping",
[
(
SAMPLE_FLOW,
{"url": "${data.url}"},
),
(
"prompt_tools",
{"text": "${data.text}"},
),
(
"script_with___file__",
{"text": "${data.text}"},
),
(
"sample_flow_with_functions",
{"question": "${data.question}"},
),
],
)
def test_spawn_mode_batch_run(self, flow_folder, inputs_mapping, dev_connections):
if "spawn" not in multiprocessing.get_all_start_methods():
pytest.skip("Unsupported start method: spawn")
p = multiprocessing.Process(
target=run_batch_with_start_method, args=("spawn", flow_folder, inputs_mapping, dev_connections)
)
p.start()
p.join()
assert p.exitcode == 0
@pytest.mark.parametrize(
"flow_folder, inputs_mapping",
[
(
SAMPLE_FLOW,
{"url": "${data.url}"},
),
(
"prompt_tools",
{"text": "${data.text}"},
),
(
"script_with___file__",
{"text": "${data.text}"},
),
(
"sample_flow_with_functions",
{"question": "${data.question}"},
),
],
)
def test_forkserver_mode_batch_run(self, flow_folder, inputs_mapping, dev_connections):
if "forkserver" not in multiprocessing.get_all_start_methods():
pytest.skip("Unsupported start method: forkserver")
p = multiprocessing.Process(
target=run_batch_with_start_method, args=("forkserver", flow_folder, inputs_mapping, dev_connections)
)
p.start()
p.join()
assert p.exitcode == 0
def test_batch_run_then_eval(self, dev_connections):
        batch_results, output_dir = submit_batch_run(
SAMPLE_FLOW, {"url": "${data.url}"}, connections=dev_connections, return_output_dir=True
)
nlines = get_batch_inputs_line(SAMPLE_FLOW)
        assert batch_results.completed_lines == nlines
input_dirs = {"data": get_flow_inputs_file(SAMPLE_FLOW, file_name="samples.json"), "run.outputs": output_dir}
inputs_mapping = {
"variant_id": "baseline",
"groundtruth": "${data.url}",
"prediction": "${run.outputs.category}",
}
eval_result = submit_batch_run(SAMPLE_EVAL_FLOW, inputs_mapping, input_dirs=input_dirs)
assert eval_result.completed_lines == nlines, f"Only returned {eval_result.completed_lines}/{nlines} outputs."
assert len(eval_result.metrics) > 0, "No metrics are returned."
assert eval_result.metrics["accuracy"] == 0, f"Accuracy should be 0, got {eval_result.metrics}."
def test_batch_with_metrics(self, dev_connections):
flow_folder = SAMPLE_EVAL_FLOW
inputs_mapping = {
"variant_id": "${data.variant_id}",
"groundtruth": "${data.groundtruth}",
"prediction": "${data.prediction}",
}
batch_results = submit_batch_run(flow_folder, inputs_mapping, connections=dev_connections)
assert isinstance(batch_results, BatchResult)
assert isinstance(batch_results.metrics, dict)
assert batch_results.metrics == get_flow_expected_metrics(flow_folder)
assert batch_results.total_lines == batch_results.completed_lines
assert batch_results.node_status == get_flow_expected_status_summary(flow_folder)
def test_batch_with_partial_failure(self, dev_connections):
flow_folder = SAMPLE_FLOW_WITH_PARTIAL_FAILURE
inputs_mapping = {"idx": "${data.idx}", "mod": "${data.mod}", "mod_2": "${data.mod_2}"}
batch_results = submit_batch_run(flow_folder, inputs_mapping, connections=dev_connections)
assert isinstance(batch_results, BatchResult)
assert batch_results.total_lines == 10
assert batch_results.completed_lines == 5
assert batch_results.failed_lines == 5
assert batch_results.node_status == get_flow_expected_status_summary(flow_folder)
def test_batch_with_line_number(self, dev_connections):
flow_folder = SAMPLE_FLOW_WITH_PARTIAL_FAILURE
input_dirs = {"data": "inputs/data.jsonl", "output": "inputs/output.jsonl"}
inputs_mapping = {"idx": "${output.idx}", "mod": "${data.mod}", "mod_2": "${data.mod_2}"}
batch_results, output_dir = submit_batch_run(
flow_folder, inputs_mapping, input_dirs=input_dirs, connections=dev_connections, return_output_dir=True
)
assert isinstance(batch_results, BatchResult)
outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
assert len(outputs) == 2
assert outputs == [
{"line_number": 0, "output": 1},
{"line_number": 6, "output": 7},
]
def test_batch_with_openai_metrics(self, dev_connections):
inputs_mapping = {"url": "${data.url}"}
batch_result, output_dir = submit_batch_run(
SAMPLE_FLOW, inputs_mapping, connections=dev_connections, return_output_dir=True
)
nlines = get_batch_inputs_line(SAMPLE_FLOW)
outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
assert len(outputs) == nlines
assert batch_result.system_metrics.total_tokens > 0
assert batch_result.system_metrics.prompt_tokens > 0
assert batch_result.system_metrics.completion_tokens > 0
def test_batch_with_default_input(self):
mem_run_storage = MemoryRunStorage()
default_input_value = "input value from default"
inputs_mapping = {"text": "${data.text}"}
batch_result, output_dir = submit_batch_run(
"default_input", inputs_mapping, storage=mem_run_storage, return_output_dir=True
)
assert batch_result.total_lines == batch_result.completed_lines
outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
assert len(outputs) == 1
assert outputs[0]["output"] == default_input_value
assert all(
node_run_info.status == Status.Completed and node_run_info.output == [default_input_value]
for node_run_info in mem_run_storage._node_runs.values()
if node_run_info.node == "aggregate_node"
)
@pytest.mark.parametrize(
"flow_folder, batch_input, expected_type",
[
("simple_aggregation", [{"text": 4}], str),
("simple_aggregation", [{"text": 4.5}], str),
("simple_aggregation", [{"text": "3.0"}], str),
],
)
def test_batch_run_line_result(self, flow_folder, batch_input, expected_type):
mem_run_storage = MemoryRunStorage()
input_file = Path(mkdtemp()) / "inputs.jsonl"
dump_list_to_jsonl(input_file, batch_input)
input_dirs = {"data": input_file}
inputs_mapping = {"text": "${data.text}"}
batch_results = submit_batch_run(flow_folder, inputs_mapping, input_dirs=input_dirs, storage=mem_run_storage)
assert isinstance(batch_results, BatchResult)
assert all(
type(flow_run_info.inputs["text"]) is expected_type for flow_run_info in mem_run_storage._flow_runs.values()
)
@pytest.mark.parametrize(
"flow_folder, input_mapping, error_class, error_message",
[
(
"connection_as_input",
{},
InputNotFound,
"The input for flow cannot be empty in batch mode. Please review your flow and provide valid inputs.",
),
(
"script_with___file__",
{"text": "${data.text}"},
EmptyInputsData,
"Couldn't find any inputs data at the given input paths. Please review the provided path "
"and consider resubmitting.",
),
],
)
def test_batch_run_failure(self, flow_folder, input_mapping, error_class, error_message):
with pytest.raises(error_class) as e:
submit_batch_run(flow_folder, input_mapping, input_file_name="empty_inputs.jsonl")
assert error_message in e.value.message
def test_batch_run_in_existing_loop(self, dev_connections):
flow_folder = "prompt_tools"
inputs_mapping = {"text": "${data.text}"}
batch_result = asyncio.run(async_submit_batch_run(flow_folder, inputs_mapping, dev_connections))
assert isinstance(batch_result, BatchResult)
assert batch_result.total_lines == batch_result.completed_lines
def test_batch_run_with_aggregation_failure(self, dev_connections):
flow_folder = "aggregation_node_failed"
inputs_mapping = {"groundtruth": "${data.groundtruth}", "prediction": "${data.prediction}"}
batch_result = submit_batch_run(flow_folder, inputs_mapping, connections=dev_connections)
assert isinstance(batch_result, BatchResult)
assert batch_result.total_lines == batch_result.completed_lines
assert batch_result.node_status == get_flow_expected_status_summary(flow_folder)
# assert aggregation node error summary
assert batch_result.failed_lines == 0
aggre_node_error = batch_result.error_summary.aggr_error_dict["aggregate"]
assert aggre_node_error["message"] == "Execution failure in 'aggregate': (ZeroDivisionError) division by zero"
assert aggre_node_error["code"] == "UserError"
assert aggre_node_error["innerError"] == {"code": "ToolExecutionError", "innerError": None}
| promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py",
"repo_id": "promptflow",
"token_count": 6618
} | 46 |
inputs:
text:
type: string
outputs:
output:
type: string
reference: ${my_custom_llm_tool.output}
nodes:
- name: my_custom_llm_tool
type: custom_llm
source:
type: package_with_prompt
tool: custom_llm_tool.TestCustomLLMTool.call
path: ./my_prompt.jinja2
inputs:
connection: azure_open_ai_connection
connection_2: azure_open_ai_connection
api: completion
text: ${inputs.text}
| promptflow/src/promptflow/tests/executor/package_tools/custom_llm_tool/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/package_tools/custom_llm_tool/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 176
} | 47 |
import logging
from collections import namedtuple
from importlib.metadata import version
from types import GeneratorType
from unittest.mock import MagicMock, patch
import openai
import pytest
from promptflow._core.openai_injector import (
PROMPTFLOW_PREFIX,
USER_AGENT_HEADER,
_generate_api_and_injector,
_openai_api_list,
get_aoai_telemetry_headers,
inject_async,
inject_openai_api,
inject_operation_headers,
inject_sync,
recover_openai_api,
)
from promptflow._core.operation_context import OperationContext
from promptflow._core.tracer import Tracer
from promptflow._version import VERSION
from promptflow.connections import AzureOpenAIConnection
from promptflow.exceptions import UserErrorException
from promptflow.tools.aoai import AzureOpenAI
from promptflow.tools.embedding import embedding
IS_LEGACY_OPENAI = version("openai").startswith("0.")
# Mock classes and functions for test
class MockAPI:
def create(self):
pass
@pytest.mark.unittest
def test_inject_operation_headers_sync():
@inject_operation_headers
def f(**kwargs):
return kwargs
if IS_LEGACY_OPENAI:
headers = "headers"
kwargs_1 = {"headers": {"a": 1, "b": 2}}
kwargs_2 = {"headers": {"ms-azure-ai-promptflow-called-from": "aoai-tool"}}
else:
headers = "extra_headers"
kwargs_1 = {"extra_headers": {"a": 1, "b": 2}}
kwargs_2 = {"extra_headers": {"ms-azure-ai-promptflow-called-from": "aoai-tool"}}
injected_headers = get_aoai_telemetry_headers()
assert f(a=1, b=2) == {"a": 1, "b": 2, headers: injected_headers}
merged_headers = {**injected_headers, "a": 1, "b": 2}
assert f(**kwargs_1) == {headers: merged_headers}
aoai_tools_headers = injected_headers.copy()
aoai_tools_headers.update({"ms-azure-ai-promptflow-called-from": "aoai-tool"})
assert f(**kwargs_2) == {headers: aoai_tools_headers}
@pytest.mark.unittest
@pytest.mark.asyncio
async def test_inject_operation_headers_async():
@inject_operation_headers
async def f(**kwargs):
return kwargs
if IS_LEGACY_OPENAI:
headers = "headers"
kwargs_1 = {"headers": {"a": 1, "b": 2}}
kwargs_2 = {"headers": {"ms-azure-ai-promptflow-called-from": "aoai-tool"}}
else:
headers = "extra_headers"
kwargs_1 = {"extra_headers": {"a": 1, "b": 2}}
kwargs_2 = {"extra_headers": {"ms-azure-ai-promptflow-called-from": "aoai-tool"}}
injected_headers = get_aoai_telemetry_headers()
assert await f(a=1, b=2) == {"a": 1, "b": 2, headers: injected_headers}
merged_headers = {**injected_headers, "a": 1, "b": 2}
assert await f(**kwargs_1) == {headers: merged_headers}
aoai_tools_headers = injected_headers.copy()
aoai_tools_headers.update({"ms-azure-ai-promptflow-called-from": "aoai-tool"})
assert await f(**kwargs_2) == {headers: aoai_tools_headers}
@pytest.mark.unittest
def test_aoai_generator_proxy_sync():
def mock_aoai(**kwargs):
        # check if kwargs has a stream parameter
if "stream" in kwargs and kwargs["stream"]:
# stream parameter is true, yield a string
def generator():
yield "This is a yielded string"
return generator()
else:
# stream parameter is false or not given, return a string
return "This is a returned string"
if IS_LEGACY_OPENAI:
apis = ["openai.Completion.create", "openai.ChatCompletion.create", "openai.Embedding.create"]
else:
apis = [
"openai.resources.Completions.create",
"openai.resources.chat.Completions.create",
"openai.resources.Embeddings.create",
]
with patch(apis[0], new=mock_aoai), patch(apis[1], new=mock_aoai), patch(apis[2], new=mock_aoai):
Tracer.start_tracing("mock_run_id")
inject_openai_api()
if IS_LEGACY_OPENAI:
return_string = openai.Completion.create(stream=False)
return_generator = openai.Completion.create(stream=True)
else:
return_string = openai.resources.Completions.create(stream=False)
return_generator = openai.resources.Completions.create(stream=True)
assert return_string == "This is a returned string"
assert isinstance(return_generator, GeneratorType)
for _ in return_generator:
pass
traces = Tracer.end_tracing()
assert len(traces) == 2
for trace in traces:
assert trace["type"] == "LLM"
if trace["inputs"]["stream"]:
assert trace["output"] == ["This is a yielded string"]
else:
assert trace["output"] == "This is a returned string"
@pytest.mark.unittest
@pytest.mark.asyncio
async def test_aoai_generator_proxy_async():
async def mock_aoai(**kwargs):
        # check if kwargs has a stream parameter
if "stream" in kwargs and kwargs["stream"]:
# stream parameter is true, yield a string
def generator():
yield "This is a yielded string"
return generator()
else:
# stream parameter is false or not given, return a string
return "This is a returned string"
if IS_LEGACY_OPENAI:
apis = ["openai.Completion.acreate", "openai.ChatCompletion.acreate", "openai.Embedding.acreate"]
else:
apis = [
"openai.resources.AsyncCompletions.create",
"openai.resources.chat.AsyncCompletions.create",
"openai.resources.AsyncEmbeddings.create",
]
with patch(apis[0], new=mock_aoai), patch(apis[1], new=mock_aoai), patch(apis[2], new=mock_aoai):
Tracer.start_tracing("mock_run_id")
inject_openai_api()
if IS_LEGACY_OPENAI:
return_string = await openai.Completion.acreate(stream=False)
return_generator = await openai.Completion.acreate(stream=True)
else:
return_string = await openai.resources.AsyncCompletions.create(stream=False)
return_generator = await openai.resources.AsyncCompletions.create(stream=True)
assert return_string == "This is a returned string"
assert isinstance(return_generator, GeneratorType)
for _ in return_generator:
pass
traces = Tracer.end_tracing()
assert len(traces) == 2
for trace in traces:
assert trace["type"] == "LLM"
if trace["inputs"]["stream"]:
assert trace["output"] == ["This is a yielded string"]
else:
assert trace["output"] == "This is a returned string"
@pytest.mark.unittest
def test_aoai_call_inject():
if IS_LEGACY_OPENAI:
headers = "headers"
apis = ["openai.Completion.create", "openai.ChatCompletion.create", "openai.Embedding.create"]
else:
headers = "extra_headers"
apis = [
"openai.resources.Completions.create",
"openai.resources.chat.Completions.create",
"openai.resources.Embeddings.create",
]
def mock_aoai(**kwargs):
return kwargs.get(headers)
with patch(apis[0], new=mock_aoai), patch(apis[1], new=mock_aoai), patch(apis[2], new=mock_aoai):
inject_openai_api()
injected_headers = get_aoai_telemetry_headers()
if IS_LEGACY_OPENAI:
return_headers_1 = openai.Completion.create(headers=None)
return_headers_2 = openai.ChatCompletion.create(headers="abc")
return_headers_3 = openai.Embedding.create(headers=1)
else:
return_headers_1 = openai.resources.Completions.create(extra_headers=None)
return_headers_2 = openai.resources.chat.Completions.create(extra_headers="abc")
return_headers_3 = openai.resources.Embeddings.create(extra_headers=1)
assert return_headers_1 is not None
assert injected_headers.items() <= return_headers_1.items()
assert return_headers_2 is not None
assert injected_headers.items() <= return_headers_2.items()
assert return_headers_3 is not None
assert injected_headers.items() <= return_headers_3.items()
@pytest.mark.unittest
def test_aoai_tool_header():
def mock_complete(*args, **kwargs):
Response = namedtuple("Response", ["choices"])
Choice = namedtuple("Choice", ["text"])
choice = Choice(text=kwargs.get("extra_headers", {}))
response = Response(choices=[choice])
return response
def mock_chat(*args, **kwargs):
Completion = namedtuple("Completion", ["choices"])
Choice = namedtuple("Choice", ["message"])
Message = namedtuple("Message", ["content"])
message = Message(content=kwargs.get("extra_headers", {}))
choice = Choice(message=message)
completion = Completion(choices=[choice])
return completion
def mock_embedding(*args, **kwargs):
Response = namedtuple("Response", ["data"])
Embedding = namedtuple("Embedding", ["embedding"])
response = Response(data=[Embedding(embedding=kwargs.get("extra_headers", {}))])
return response
with patch("openai.resources.Completions.create", new=mock_complete), patch(
"openai.resources.chat.Completions.create", new=mock_chat
), patch("openai.resources.Embeddings.create", new=mock_embedding):
inject_openai_api()
aoai_tool_header = {"ms-azure-ai-promptflow-called-from": "aoai-tool"}
return_headers = AzureOpenAI(AzureOpenAIConnection(api_key="test", api_base="test")).completion(
prompt="test", deployment_name="test"
)
assert aoai_tool_header.items() <= return_headers.items()
return_headers = AzureOpenAI(AzureOpenAIConnection(api_key="test", api_base="test")).chat(
prompt="user:\ntest", deployment_name="test"
)
assert aoai_tool_header.items() <= return_headers.items()
return_headers = embedding(
AzureOpenAIConnection(api_key="test", api_base="test"), input="test", deployment_name="test"
)
assert aoai_tool_header.items() <= return_headers.items()
@pytest.mark.unittest
def test_aoai_chat_tool_prompt():
def mock_chat(*args, **kwargs):
Completion = namedtuple("Completion", ["choices"])
Choice = namedtuple("Choice", ["message"])
Message = namedtuple("Message", ["content"])
message = Message(content=kwargs.get("messages", {}))
choice = Choice(message=message)
completion = Completion(choices=[choice])
return completion
with patch("openai.resources.chat.Completions.create", new=mock_chat):
inject_openai_api()
return_messages = AzureOpenAI(AzureOpenAIConnection(api_key="test", api_base="test")).chat(
prompt="user:\ntest", deployment_name="test"
)
assert return_messages == [{"role": "user", "content": "test"}]
return_messages = AzureOpenAI(AzureOpenAIConnection(api_key="test", api_base="test")).chat(
prompt="user:\r\n", deployment_name="test"
)
assert return_messages == [{"role": "user", "content": ""}]
with pytest.raises(UserErrorException, match="The Chat API requires a specific format for prompt"):
AzureOpenAI(AzureOpenAIConnection(api_key="test", api_base="test")).chat(
prompt="user:", deployment_name="test"
)
# Test the generator-based _openai_api_list for both legacy and current openai versions
@pytest.mark.parametrize(
"is_legacy, expected_apis_with_injectors",
[
(
True,
[
(
(
("openai", "Completion", "create"),
("openai", "ChatCompletion", "create"),
("openai", "Embedding", "create"),
),
inject_sync,
),
(
(
("openai", "Completion", "acreate"),
("openai", "ChatCompletion", "acreate"),
("openai", "Embedding", "acreate"),
),
inject_async,
),
],
),
(
False,
[
(
(
("openai.resources.chat", "Completions", "create"),
("openai.resources", "Completions", "create"),
("openai.resources", "Embeddings", "create"),
),
inject_sync,
),
(
(
("openai.resources.chat", "AsyncCompletions", "create"),
("openai.resources", "AsyncCompletions", "create"),
("openai.resources", "AsyncEmbeddings", "create"),
),
inject_async,
),
],
),
],
)
def test_api_list(is_legacy, expected_apis_with_injectors):
with patch("promptflow._core.openai_injector.IS_LEGACY_OPENAI", is_legacy):
# Using list comprehension to get all items from the generator
actual_apis_with_injectors = list(_openai_api_list())
# Assert that the actual list matches the expected list
assert actual_apis_with_injectors == expected_apis_with_injectors
@pytest.mark.parametrize(
"apis_with_injectors, expected_output, expected_logs",
[
([((("MockModule", "MockAPI", "create"),), inject_sync)], [(MockAPI, "create", inject_sync)], []),
([((("MockModule", "MockAPI", "create"),), inject_async)], [(MockAPI, "create", inject_async)], []),
],
)
def test_generate_api_and_injector(apis_with_injectors, expected_output, expected_logs, caplog):
with patch("importlib.import_module", return_value=MagicMock(MockAPI=MockAPI)) as mock_import_module:
# Capture the logs
with caplog.at_level(logging.WARNING):
# Run the generator and collect the output
result = list(_generate_api_and_injector(apis_with_injectors))
# Check if the result matches the expected output
assert result == expected_output
# Check if the logs match the expected logs
assert len(caplog.records) == len(expected_logs)
for record, expected_message in zip(caplog.records, expected_logs):
assert expected_message in record.message
mock_import_module.assert_called_with("MockModule")
def test_generate_api_and_injector_attribute_error_logging(caplog):
apis = [
((("NonExistentModule", "NonExistentAPI", "create"),), MagicMock()),
((("MockModuleMissingMethod", "MockAPIMissingMethod", "missing_method"),), MagicMock()),
]
# Set up the side effect for the mock
def import_module_effect(name):
if name == "MockModuleMissingMethod":
module = MagicMock()
delattr(module, "MockAPIMissingMethod") # Use delattr to remove the attribute
return module
else:
raise ModuleNotFoundError(f"No module named '{name}'")
with patch("importlib.import_module") as mock_import_module:
mock_import_module.side_effect = import_module_effect
with caplog.at_level(logging.WARNING):
list(_generate_api_and_injector(apis))
assert len(caplog.records) == 2
assert "An unexpected error occurred" in caplog.records[0].message
assert "NonExistentModule" in caplog.records[0].message
assert "does not have the class" in caplog.records[1].message
assert "MockAPIMissingMethod" in caplog.records[1].message
# Verify that `importlib.import_module` was called with correct module names
mock_import_module.assert_any_call("NonExistentModule")
mock_import_module.assert_any_call("MockModuleMissingMethod")
@pytest.mark.unittest
def test_get_aoai_telemetry_headers():
# create a mock operation context
mock_operation_context = OperationContext()
mock_operation_context.user_agent = "test-user-agent"
mock_operation_context.update(
{
"flow_id": "test-flow-id",
"root_run_id": "test-root-run-id",
"index": 1,
"run_id": "test-run-id",
"variant_id": "test-variant-id",
}
)
# patch the OperationContext.get_instance method to return the mock operation context
with patch("promptflow._core.operation_context.OperationContext.get_instance") as mock_get_instance:
mock_get_instance.return_value = mock_operation_context
# call the function under test and get the headers
headers = get_aoai_telemetry_headers()
for key in headers.keys():
assert key.startswith(PROMPTFLOW_PREFIX) or key == USER_AGENT_HEADER
assert "_" not in key
# assert that the headers are correct
assert headers[USER_AGENT_HEADER] == f"test-user-agent promptflow/{VERSION}"
assert headers[f"{PROMPTFLOW_PREFIX}flow-id"] == "test-flow-id"
assert headers[f"{PROMPTFLOW_PREFIX}root-run-id"] == "test-root-run-id"
assert headers[f"{PROMPTFLOW_PREFIX}index"] == "1"
assert headers[f"{PROMPTFLOW_PREFIX}run-id"] == "test-run-id"
assert headers[f"{PROMPTFLOW_PREFIX}variant-id"] == "test-variant-id"
@pytest.mark.unittest
def test_inject_and_recover_openai_api():
class FakeAPIWithoutOriginal:
@staticmethod
def create():
pass
class FakeAPIWithOriginal:
@staticmethod
def create():
pass
def dummy_api():
pass
# Real injector function that adds an _original attribute
def injector(f):
def wrapper_fun(*args, **kwargs):
return f(*args, **kwargs)
wrapper_fun._original = f
return wrapper_fun
# Set an _original attribute for the create method of FakeAPIWithOriginal
FakeAPIWithOriginal.create._original = dummy_api
# Store the original create methods before injection
original_api_without_original = FakeAPIWithoutOriginal.create
original_api_with_original = FakeAPIWithOriginal.create
# Mock the generator function to yield our mocked api and method
with patch(
"promptflow._core.openai_injector.available_openai_apis_and_injectors",
return_value=[(FakeAPIWithoutOriginal, "create", injector), (FakeAPIWithOriginal, "create", injector)],
):
# Call the function to inject the APIs
inject_openai_api()
# Check that the _original attribute was set for the method that didn't have it
assert hasattr(FakeAPIWithoutOriginal.create, "_original")
# Ensure the _original attribute points to the correct original method
assert FakeAPIWithoutOriginal.create._original is original_api_without_original
# Check that the injector was not applied again to the method that already had an _original attribute
# The _original attribute should still point to the mock, not the original method
assert getattr(FakeAPIWithOriginal.create, "_original") is not FakeAPIWithOriginal.create
# The original method should remain unchanged
assert FakeAPIWithOriginal.create is original_api_with_original
# Call the function to recover the APIs
recover_openai_api()
# Check that the _original attribute was removed for the method that didn't have it
assert not hasattr(FakeAPIWithoutOriginal.create, "_original")
assert not hasattr(FakeAPIWithOriginal.create, "_original")
# The original methods should be restored
assert FakeAPIWithoutOriginal.create is original_api_without_original
assert FakeAPIWithOriginal.create is dummy_api
| promptflow/src/promptflow/tests/executor/unittests/_core/test_api_injector.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/_core/test_api_injector.py",
"repo_id": "promptflow",
"token_count": 8507
} | 48 |
import pytest
from promptflow._utils.feature_utils import Feature, get_feature_list
@pytest.mark.unittest
def test_get_feature_list():
feature_list = get_feature_list()
assert isinstance(feature_list, list)
assert all(isinstance(feature, Feature) for feature in feature_list)
| promptflow/src/promptflow/tests/executor/unittests/_utils/test_feature_utils.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/_utils/test_feature_utils.py",
"repo_id": "promptflow",
"token_count": 93
} | 49 |
import pytest
from promptflow.contracts.multimedia import Image, PFBytes
@pytest.mark.unittest
class TestMultimediaContract:
@pytest.mark.parametrize(
"value, mime_type, source_url",
[
(b"test", "image/*", None),
(b"test", "image/jpg", None),
(b"test", "image/png", None),
(b"test", None, None),
(b"test", "image/*", "mock_url"),
]
)
def test_image_contract(self, value, mime_type, source_url):
image = Image(value, mime_type, source_url)
if mime_type is None:
mime_type = "image/*"
assert image._mime_type == mime_type
assert image._hash == "a94a8fe5"
assert image.to_base64() == "dGVzdA=="
assert image.to_base64(with_type=True) == f"data:{mime_type};base64,dGVzdA=="
assert image.to_base64(with_type=True, dict_type=True) == {f"data:{mime_type};base64": "dGVzdA=="}
assert bytes(image) == value
assert image.source_url == source_url
assert str(image) == "Image(a94a8fe5)"
assert repr(image) == "Image(a94a8fe5)"
assert image.serialize() == "Image(a94a8fe5)"
assert image.serialize(lambda x: x.to_base64()) == "dGVzdA=="
@pytest.mark.parametrize(
"value, mime_type, source_url",
[
(b"test", "image/*", None),
(b"test", "image/jpg", None),
(b"test", "image/png", None),
(b"test", "image/*", "mock_url"),
]
)
def test_pfbytes_contract(self, value, mime_type, source_url):
pfBytes = PFBytes(value, mime_type, source_url)
assert pfBytes._mime_type == mime_type
assert pfBytes._hash == "a94a8fe5"
assert pfBytes.to_base64() == "dGVzdA=="
assert pfBytes.to_base64(with_type=True) == f"data:{mime_type};base64,dGVzdA=="
assert pfBytes.to_base64(with_type=True, dict_type=True) == {f"data:{mime_type};base64": "dGVzdA=="}
assert bytes(pfBytes) == value
assert pfBytes.source_url == source_url
| promptflow/src/promptflow/tests/executor/unittests/contracts/test_multimedia.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/contracts/test_multimedia.py",
"repo_id": "promptflow",
"token_count": 1009
} | 50 |
import pytest
from langchain.schema import AgentAction, AgentFinish
from promptflow.integrations.langchain import LangChainEventType, PromptFlowCallbackHandler
@pytest.mark.unittest
class TestLangchain:
def get_handler(self):
class MockTracer():
def __init__(self):
self._trace_stack = []
def _push(self, trace):
self._trace_stack.append(trace)
def _pop(self, output=None, error=None):
self._trace_stack.pop()
handler = PromptFlowCallbackHandler()
handler._tracer = MockTracer()
return handler
def test_langchain_traces(self):
handler = self.get_handler()
handler.on_agent_action(action=AgentAction("test_agent_name", "test", "test"))
handler.on_tool_start(serialized={"name": "test_tool_name"}, input_str="test")
handler.on_chain_start(serialized={"id": ["test_chain_name"]}, inputs={"test": "test"})
handler.on_llm_start(serialized={"test": "test"}, prompts=["test"])
assert handler._events_stack == [
LangChainEventType.AGENT,
LangChainEventType.TOOL,
LangChainEventType.CHAIN,
LangChainEventType.LLM
]
assert len(handler._tracer._trace_stack) == 4
assert handler._tracer._trace_stack[0].name == "test_agent_name"
assert handler._tracer._trace_stack[1].name == "test_tool_name"
assert handler._tracer._trace_stack[2].name == "test_chain_name"
assert handler._tracer._trace_stack[3].name == "LLM" # The default name
handler.on_llm_error(error=None)
handler.on_chain_error(error=None)
handler.on_tool_error(error=None)
handler.on_agent_finish(finish=AgentFinish({"test": "test"}, "test"))
assert len(handler._events_stack) == 0
assert len(handler._tracer._trace_stack) == 0
def test_langchain_traces_with_unpaired_events(self):
handler = self.get_handler()
handler.on_tool_start(serialized={"test": "test"}, input_str="test")
# Missing on_chain_start
# Missing on_llm_start
assert len(handler._tracer._trace_stack) == 1
handler.on_llm_end(response=None)
handler.on_chain_end(outputs={"test": "test"})
assert len(handler._tracer._trace_stack) == 1
handler.on_tool_end(output="test")
assert len(handler._events_stack) == 0
assert len(handler._tracer._trace_stack) == 0
handler = self.get_handler()
handler.on_tool_start(serialized={"test": "test"}, input_str="test")
handler.on_chain_start(serialized={"test": "test"}, inputs={"test": "test"})
handler.on_llm_start(serialized={"test": "test"}, prompts=["test"])
assert len(handler._tracer._trace_stack) == 3
# Missing on_chain_end
# Missing on_llm_end
handler.on_tool_end(output="test")
assert len(handler._events_stack) == 0
assert len(handler._tracer._trace_stack) == 0
| promptflow/src/promptflow/tests/executor/unittests/integrations/test_langchain.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/integrations/test_langchain.py",
"repo_id": "promptflow",
"token_count": 1289
} | 51 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import pydash
import pytest
from promptflow._sdk.entities._connection import _Connection
from promptflow.connections import AzureOpenAIConnection, CustomConnection
from promptflow.contracts.types import Secret
from .._azure_utils import DEFAULT_TEST_TIMEOUT, PYTEST_TIMEOUT_METHOD
@pytest.fixture
def connection_ops(pf):
return pf._connections
@pytest.mark.timeout(timeout=DEFAULT_TEST_TIMEOUT, method=PYTEST_TIMEOUT_METHOD)
@pytest.mark.e2etest
@pytest.mark.usefixtures("vcr_recording")
class TestConnectionOperations:
@pytest.mark.skip(reason="Skip to avoid flooded connections in workspace.")
def test_connection_get_create_delete(self, connection_ops):
from promptflow.azure._restclient.flow_service_caller import FlowRequestException
connection = _Connection(
name="test_connection_1",
type="AzureOpenAI",
configs=AzureOpenAIConnection(
api_key=Secret("test_key"),
api_base="test_base",
api_type="azure",
api_version="2023-07-01-preview",
),
)
try:
result = connection_ops.get(name=connection.name)
except FlowRequestException:
result = connection_ops.create_or_update(connection)
config_dict = pydash.omit(result._to_dict(), "configs.api_key")
assert config_dict == {
"name": "test_connection_1",
"connection_type": "AzureOpenAI",
"connection_scope": "User",
"configs": {"api_base": "test_base", "api_type": "azure", "api_version": "2023-07-01-preview"},
}
# soft delete
connection_ops.delete(name=connection.name)
@pytest.mark.skip(reason="Skip to avoid flooded connections in workspace.")
def test_custom_connection_create(self, connection_ops):
from promptflow.azure._restclient.flow_service_caller import FlowRequestException
connection = _Connection(
name="test_connection_2", type="Custom", custom_configs=CustomConnection(a="1", b=Secret("2"))
)
try:
result = connection_ops.get(name=connection.name)
except FlowRequestException:
result = connection_ops.create_or_update(connection)
config_dict = pydash.omit(result._to_dict(), "custom_configs")
assert config_dict == {"connection_scope": "User", "connection_type": "Custom", "name": "test_connection_2"}
# soft delete
connection_ops.delete(name=connection.name)
def test_list_connection_spec(self, connection_ops):
result = {v.connection_type: v._to_dict() for v in connection_ops.list_connection_specs()}
# Assert custom keys type
assert "Custom" in result
assert result["Custom"] == {
"module": "promptflow.connections",
"connection_type": "Custom",
"flow_value_type": "CustomConnection",
"config_specs": [],
}
# assert strong type
assert "AzureOpenAI" in result
aoai_config_specs = result["AzureOpenAI"]["config_specs"]
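        # Drop api_version's default_value before comparing, as it may change between service versions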
for config_dict in aoai_config_specs:
if config_dict["name"] == "api_version":
del config_dict["default_value"]
expected_config_specs = [
{"name": "api_key", "display_name": "API key", "config_value_type": "Secret", "is_optional": False},
{"name": "api_base", "display_name": "API base", "config_value_type": "String", "is_optional": False},
{
"name": "api_type",
"display_name": "API type",
"config_value_type": "String",
"default_value": "azure",
"is_optional": False,
},
{
"name": "api_version",
"display_name": "API version",
"config_value_type": "String",
"is_optional": False,
},
]
for spec in expected_config_specs:
assert spec in result["AzureOpenAI"]["config_specs"]
def test_get_connection(self, connection_ops):
# Note: No secrets will be returned by MT api
result = connection_ops.get(name="azure_open_ai_connection")
assert (
result._to_dict().items()
> {
"api_type": "azure",
"module": "promptflow.connections",
"name": "azure_open_ai_connection",
}.items()
)
result = connection_ops.get(name="custom_connection")
assert (
result._to_dict().items()
> {
"name": "custom_connection",
"module": "promptflow.connections",
"configs": {},
"secrets": {},
}.items()
)
| promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_connection_operations.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_connection_operations.py",
"repo_id": "promptflow",
"token_count": 2223
} | 52 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from pathlib import Path
import pytest
from promptflow._sdk._configuration import ConfigFileNotFound, Configuration, InvalidConfigFile
from promptflow._utils.context_utils import _change_working_dir
AZUREML_RESOURCE_PROVIDER = "Microsoft.MachineLearningServices"
RESOURCE_ID_FORMAT = "/subscriptions/{}/resourceGroups/{}/providers/{}/workspaces/{}"
CONFIG_DATA_ROOT = Path(__file__).parent.parent.parent / "test_configs" / "configs"
@pytest.fixture
def config():
return Configuration.get_instance()
@pytest.mark.unittest
class TestConfig:
def test_get_workspace_from_config(self):
# New instance instead of get_instance() to avoid side effect
conf = Configuration(overrides={"connection.provider": "azureml"})
# Test config within flow folder
target_folder = CONFIG_DATA_ROOT / "mock_flow1"
with _change_working_dir(target_folder):
config1 = conf.get_connection_provider()
assert config1 == "azureml:" + RESOURCE_ID_FORMAT.format("sub1", "rg1", AZUREML_RESOURCE_PROVIDER, "ws1")
# Test config using flow parent folder
target_folder = CONFIG_DATA_ROOT / "mock_flow2"
with _change_working_dir(target_folder):
config2 = conf.get_connection_provider()
assert config2 == "azureml:" + RESOURCE_ID_FORMAT.format(
"sub_default", "rg_default", AZUREML_RESOURCE_PROVIDER, "ws_default"
)
# Test config not found
with pytest.raises(ConfigFileNotFound):
Configuration._get_workspace_from_config(path=CONFIG_DATA_ROOT.parent)
# Test empty config
target_folder = CONFIG_DATA_ROOT / "mock_flow_empty_config"
with pytest.raises(InvalidConfigFile):
with _change_working_dir(target_folder):
conf.get_connection_provider()
| promptflow/src/promptflow/tests/sdk_cli_azure_test/unittests/test_config.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_azure_test/unittests/test_config.py",
"repo_id": "promptflow",
"token_count": 735
} | 53 |
import importlib
import importlib.util
import json
import logging
import multiprocessing
import os
import os.path
import shutil
import sys
import tempfile
import uuid
from pathlib import Path
from tempfile import mkdtemp
from typing import Dict, List
from unittest.mock import patch
import mock
import pytest
from promptflow._cli._pf.entry import main
from promptflow._constants import PF_USER_AGENT
from promptflow._core.operation_context import OperationContext
from promptflow._sdk._constants import LOGGER_NAME, SCRUBBED_VALUE, ExperimentStatus
from promptflow._sdk._errors import RunNotFoundError
from promptflow._sdk._utils import ClientUserAgentUtil, setup_user_agent_to_operation_context
from promptflow._sdk.operations._local_storage_operations import LocalStorageOperations
from promptflow._sdk.operations._run_operations import RunOperations
from promptflow._utils.context_utils import _change_working_dir
from promptflow._utils.utils import environment_variable_overwrite, parse_ua_to_dict
from promptflow._utils.yaml_utils import dump_yaml, load_yaml
from promptflow.exceptions import UserErrorException
FLOWS_DIR = "./tests/test_configs/flows"
EXPERIMENT_DIR = "./tests/test_configs/experiments"
RUNS_DIR = "./tests/test_configs/runs"
CONNECTIONS_DIR = "./tests/test_configs/connections"
DATAS_DIR = "./tests/test_configs/datas"
TOOL_ROOT = "./tests/test_configs/tools"
TARGET_URL = "https://www.youtube.com/watch?v=o5ZQyXaAv1g"
# TODO: move this to a shared utility module
def run_pf_command(*args, cwd=None):
"""Run a pf command with the given arguments and working directory.
There have been some unknown issues in using subprocess on CI, so we use this function instead, which will also
    provide a better debugging experience.
"""
origin_argv, origin_cwd = sys.argv, os.path.abspath(os.curdir)
try:
sys.argv = ["pf"] + list(args)
if cwd:
os.chdir(cwd)
main()
finally:
sys.argv = origin_argv
os.chdir(origin_cwd)
def run_batch(local_client, line_timeout_seconds, timeout_index=None):
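    """Submit a batch run with PF_LINE_TIMEOUT_SEC overridden and verify per-line status.
    Only the line at the given timeout_index (if any) should fail with a line-timeout error."""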
os.environ["PF_LINE_TIMEOUT_SEC"] = line_timeout_seconds
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/simple_flow_with_ten_inputs",
"--data",
f"{FLOWS_DIR}/simple_flow_with_ten_inputs/data.jsonl",
"--name",
run_id,
)
run = local_client.runs.get(name=run_id)
local_storage = LocalStorageOperations(run)
detail = local_storage.load_detail()
flow_runs_list = detail["flow_runs"]
for i, flow_run in enumerate(flow_runs_list):
if i == timeout_index:
assert flow_run["status"] == "Failed"
assert flow_run["error"]["message"] == f"Line {i} execution timeout for exceeding 54 seconds"
assert flow_run["error"]["code"] == "UserError"
assert flow_run["error"]["innerError"]["code"] == "LineExecutionTimeoutError"
else:
assert flow_run["status"] == "Completed"
os.environ.pop("PF_LINE_TIMEOUT_SEC")
@pytest.mark.usefixtures(
"use_secrets_config_file", "recording_injection", "setup_local_connection", "install_custom_tool_pkg"
)
@pytest.mark.cli_test
@pytest.mark.e2etest
class TestCli:
def test_pf_version(self, capfd):
run_pf_command("--version")
out, _ = capfd.readouterr()
assert "0.0.1\n" in out
def test_basic_flow_run(self, capfd) -> None:
# fetch std out
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--name",
str(uuid.uuid4()),
)
out, _ = capfd.readouterr()
assert "Completed" in out
def test_basic_flow_run_batch_and_eval(self, capfd) -> None:
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--name",
run_id,
)
out, _ = capfd.readouterr()
assert "Completed" in out
        # Check that the CLI works correctly when the parameter is surrounded by quotation marks, as shown below:
# --param "key=value" key="value"
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/classification_accuracy_evaluation",
"--column-mapping",
"'groundtruth=${data.answer}'",
"prediction='${run.outputs.category}'",
"variant_id=${data.variant_id}",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--run",
run_id,
)
out, _ = capfd.readouterr()
assert "Completed" in out
def test_submit_run_with_yaml(self, capfd):
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--file",
"./sample_bulk_run.yaml",
"--name",
run_id,
cwd=f"{RUNS_DIR}",
)
out, _ = capfd.readouterr()
assert "Completed" in out
run_pf_command(
"run",
"create",
"--file",
"./sample_eval_run.yaml",
"--run",
run_id,
cwd=f"{RUNS_DIR}",
)
out, _ = capfd.readouterr()
assert "Completed" in out
def test_submit_batch_variant(self, local_client):
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--name",
run_id,
"--variant",
"${summarize_text_content.variant_0}",
)
run = local_client.runs.get(name=run_id)
local_storage = LocalStorageOperations(run)
detail = local_storage.load_detail()
tuning_node = next((x for x in detail["node_runs"] if x["node"] == "summarize_text_content"), None)
        # variant_0 config is used here; the default variant is variant_1
assert tuning_node["inputs"]["temperature"] == 0.2
def test_environment_variable_overwrite(self, local_client, local_aoai_connection):
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--name",
run_id,
"--flow",
f"{FLOWS_DIR}/print_env_var",
"--data",
f"{DATAS_DIR}/env_var_names.jsonl",
"--environment-variables",
"API_BASE=${azure_open_ai_connection.api_base}",
)
outputs = local_client.runs._get_outputs(run=run_id)
assert outputs["output"][0] == local_aoai_connection.api_base
def test_connection_overwrite(self, local_alt_aoai_connection, capfd):
# CLi command will fail with SystemExit
with pytest.raises(SystemExit):
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--connection",
"classify_with_llm.connection=not_exist",
)
out, _ = capfd.readouterr()
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--connection",
"classify_with_llm.connection=new_ai_connection",
)
out, _ = capfd.readouterr()
assert "Completed" in out
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--connection",
"classify_with_llm.model=new_model",
)
out, _ = capfd.readouterr()
assert "Completed" in out
def test_create_with_set(self, local_client):
run_id = str(uuid.uuid4())
display_name = "test_run"
description = "test description"
run_pf_command(
"run",
"create",
"--name",
run_id,
"--flow",
f"{FLOWS_DIR}/print_env_var",
"--data",
f"{DATAS_DIR}/env_var_names.jsonl",
"--environment-variables",
"API_BASE=${azure_open_ai_connection.api_base}",
"--set",
f"display_name={display_name}",
"tags.key=val",
f"description={description}",
)
run = local_client.runs.get(run_id)
assert display_name in run.display_name
assert run.tags == {"key": "val"}
assert run.description == description
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--file",
"./sample_bulk_run.yaml",
"--name",
run_id,
"--set",
f"display_name={display_name}",
"tags.key=val",
f"description={description}",
cwd=f"{RUNS_DIR}",
)
        run = local_client.runs.get(run_id)
        assert display_name in run.display_name
        assert run.tags == {"key": "val"}
        assert run.description == description
def test_pf_flow_test(self):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
"url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"answer=Channel",
"evidence=Url",
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / "flow.output.json"
assert output_path.exists()
log_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / "flow.log"
with open(log_path, "r") as f:
previous_log_content = f.read()
# Test without input
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / "flow.output.json"
assert output_path.exists()
log_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / "flow.log"
with open(log_path, "r") as f:
log_content = f.read()
assert previous_log_content not in log_content
def test_pf_flow_test_with_non_english_input_output(self, capsys):
question = "什么是 chat gpt"
run_pf_command("flow", "test", "--flow", f"{FLOWS_DIR}/chat_flow", "--inputs", f'question="{question}"')
stdout, _ = capsys.readouterr()
output_path = Path(FLOWS_DIR) / "chat_flow" / ".promptflow" / "flow.output.json"
assert output_path.exists()
with open(output_path, "r", encoding="utf-8") as f:
outputs = json.load(f)
assert outputs["answer"] in json.loads(stdout)["answer"]
detail_path = Path(FLOWS_DIR) / "chat_flow" / ".promptflow" / "flow.detail.json"
assert detail_path.exists()
with open(detail_path, "r", encoding="utf-8") as f:
detail = json.load(f)
assert detail["flow_runs"][0]["inputs"]["question"] == question
assert detail["flow_runs"][0]["output"]["answer"] == outputs["answer"]
def test_pf_flow_with_variant(self, capsys):
with tempfile.TemporaryDirectory() as temp_dir:
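            # Copy the sample flow into a temp dir and rewrite the summarize_text_content node to use variants,
            # so the variant-related CLI options can be exercised without touching the original flow files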
shutil.copytree((Path(FLOWS_DIR) / "web_classification").resolve().as_posix(), temp_dir, dirs_exist_ok=True)
dag = Path(temp_dir) / "flow.dag.yaml"
flow_dict = load_yaml(dag)
node_name = "summarize_text_content"
node = next(filter(lambda item: item["name"] == node_name, flow_dict["nodes"]))
flow_dict["nodes"].remove(node)
flow_dict["nodes"].append({"name": node_name, "use_variants": True})
with open(Path(temp_dir) / "flow.dag.yaml", "w") as f:
dump_yaml(flow_dict, f)
run_pf_command(
"flow",
"test",
"--flow",
temp_dir,
"--inputs",
"url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"answer=Channel",
"evidence=Url",
)
output_path = Path(temp_dir) / ".promptflow" / "flow.output.json"
assert output_path.exists()
run_pf_command(
"flow",
"test",
"--flow",
temp_dir,
"--inputs",
"url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"answer=Channel",
"evidence=Url",
"--variant",
"'${summarize_text_content.variant_1}'",
)
output_path = Path(temp_dir) / ".promptflow" / "flow-summarize_text_content-variant_1.output.json"
assert output_path.exists()
# Test flow dag with invalid format
node_name = flow_dict["nodes"][0]["name"]
flow_dict["nodes"][0]["use_variants"] = True
flow_dict["node_variants"][node_name] = {
"default_variant_id": "invalid_variant",
"variants": [{"variant_0": {}}],
}
with open(Path(temp_dir) / "flow.dag.yaml", "w") as f:
dump_yaml(flow_dict, f)
with pytest.raises(SystemExit):
run_pf_command(
"flow",
"test",
"--flow",
temp_dir,
"--inputs",
"url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"answer=Channel",
"evidence=Url",
"--variant",
"${summarize_text_content.variant_1}",
)
outerr = capsys.readouterr()
assert f"Cannot find the variant invalid_variant for {node_name}." in outerr.out
def test_pf_flow_test_single_node(self):
node_name = "fetch_text_content_from_url"
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
"inputs.url="
"https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h",
"--node",
node_name,
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / f"flow-{node_name}.node.detail.json"
assert output_path.exists()
node_name = "fetch_text_content_from_url"
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
"url="
"https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h",
"--node",
node_name,
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / f"flow-{node_name}.node.detail.json"
assert output_path.exists()
# Test node with node reference input
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
'input_str={"category": "App", "evidence": "URL"}',
"--node",
"convert_to_dict",
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / "flow-convert_to_dict.node.detail.json"
assert output_path.exists()
# Test without input
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--node",
node_name,
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / f"flow-{node_name}.node.detail.json"
assert output_path.exists()
# Test with input file
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--node",
node_name,
"--input",
f"{FLOWS_DIR}/web_classification/{node_name}_input.jsonl",
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / f"flow-{node_name}.node.detail.json"
assert output_path.exists()
        # Test passing an input file path via --inputs
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--node",
node_name,
"--inputs",
f"{FLOWS_DIR}/web_classification/{node_name}_input.jsonl",
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / f"flow-{node_name}.node.detail.json"
assert output_path.exists()
def test_pf_flow_test_debug_single_node(self):
node_name = "fetch_text_content_from_url"
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
"inputs.url="
"https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h",
"--node",
node_name,
"--debug",
)
# Debug node with node reference input
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
'classify_with_llm.output={"category": "App", "evidence": "URL"}',
"--node",
"convert_to_dict",
"--debug",
)
def test_pf_flow_test_with_additional_includes(self):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification_with_additional_include",
"--inputs",
"url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"answer=Channel",
"evidence=Url",
)
output_path = (
Path(FLOWS_DIR) / "web_classification_with_additional_include" / ".promptflow" / "flow.output.json"
)
assert output_path.exists()
node_name = "fetch_text_content_from_url"
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification_with_additional_include",
"--inputs",
"inputs.url="
"https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h",
"--node",
node_name,
)
def test_pf_flow_test_with_symbolic(self, prepare_symbolic_flow):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification_with_symbolic",
"--inputs",
"url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"answer=Channel",
"evidence=Url",
)
output_path = Path(FLOWS_DIR) / "web_classification_with_symbolic" / ".promptflow" / "flow.output.json"
assert output_path.exists()
node_name = "fetch_text_content_from_url"
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification_with_symbolic",
"--inputs",
"inputs.url="
"https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h",
"--node",
node_name,
)
@pytest.mark.parametrize(
"flow_folder_name, env_key, except_value",
[
pytest.param(
"print_env_var",
"API_BASE",
"${azure_open_ai_connection.api_base}",
id="TestFlowWithEnvironmentVariables",
),
pytest.param(
"flow_with_environment_variables",
"env1",
"2",
id="LoadEnvVariablesWithoutOverridesInYaml",
),
],
)
def test_flow_test_with_environment_variable(self, flow_folder_name, env_key, except_value, local_client):
from promptflow._sdk._submitter.utils import SubmitterHelper
def validate_stdout(detail_path):
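            # stdout printed by the node run should be captured in the detail file's logs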
with open(detail_path, "r") as f:
details = json.load(f)
assert details["node_runs"][0]["logs"]["stdout"]
env = {env_key: except_value}
SubmitterHelper.resolve_environment_variables(env, local_client)
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/{flow_folder_name}",
"--inputs",
f"key={env_key}",
"--environment-variables",
"API_BASE=${azure_open_ai_connection.api_base}",
)
with open(Path(FLOWS_DIR) / flow_folder_name / ".promptflow" / "flow.output.json", "r") as f:
outputs = json.load(f)
assert outputs["output"] == env[env_key]
validate_stdout(Path(FLOWS_DIR) / flow_folder_name / ".promptflow" / "flow.detail.json")
# Test log contains user printed outputs
log_path = Path(FLOWS_DIR) / flow_folder_name / ".promptflow" / "flow.log"
with open(log_path, "r") as f:
log_content = f.read()
assert env[env_key] in log_content
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/{flow_folder_name}",
"--inputs",
f"inputs.key={env_key}",
"--environment-variables",
"API_BASE=${azure_open_ai_connection.api_base}",
"--node",
"print_env",
)
with open(Path(FLOWS_DIR) / flow_folder_name / ".promptflow" / "flow-print_env.node.output.json", "r") as f:
outputs = json.load(f)
assert outputs["value"] == env[env_key]
validate_stdout(Path(FLOWS_DIR) / flow_folder_name / ".promptflow" / "flow-print_env.node.detail.json")
def _validate_requirement(self, flow_path):
with open(flow_path) as f:
flow_dict = load_yaml(f)
assert flow_dict.get("environment", {}).get("python_requirements_txt", None)
assert (flow_path.parent / flow_dict["environment"]["python_requirements_txt"]).exists()
def test_flow_with_exception(self, capsys):
with pytest.raises(SystemExit):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification_with_exception",
)
captured = capsys.readouterr()
assert "Execution failure in 'convert_to_dict': (Exception) mock exception" in captured.out
output_path = Path(FLOWS_DIR) / "web_classification_with_exception" / ".promptflow" / "flow.detail.json"
assert output_path.exists()
with pytest.raises(SystemExit):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification_with_exception",
"--inputs",
'classify_with_llm.output={"category": "App", "evidence": "URL"}',
"--node",
"convert_to_dict",
)
captured = capsys.readouterr()
assert "convert_to_dict.py" in captured.out
assert "mock exception" in captured.out
output_path = (
Path(FLOWS_DIR)
/ "web_classification_with_exception"
/ ".promptflow"
/ "flow-convert_to_dict.node.detail.json"
)
assert output_path.exists()
def test_init_eval_flow(self):
temp_dir = mkdtemp()
with _change_working_dir(temp_dir):
flow_name = "eval_flow"
# Init standard flow
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--type",
"evaluation",
)
ignore_file_path = Path(temp_dir) / flow_name / ".gitignore"
assert ignore_file_path.exists()
ignore_file_path.unlink()
# TODO remove variant_id & line_number in evaluate template
run_pf_command("flow", "test", "--flow", flow_name, "--inputs", "groundtruth=App", "prediction=App")
self._validate_requirement(Path(temp_dir) / flow_name / "flow.dag.yaml")
def test_init_chat_flow(self):
temp_dir = mkdtemp()
with _change_working_dir(temp_dir):
flow_name = "chat_flow"
# Init standard flow
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--type",
"chat",
)
ignore_file_path = Path(temp_dir) / flow_name / ".gitignore"
assert ignore_file_path.exists()
ignore_file_path.unlink()
# Only azure openai connection in test env
with open(Path(temp_dir) / flow_name / "flow.dag.yaml", "r") as f:
flow_dict = load_yaml(f)
flow_dict["nodes"][0]["provider"] = "AzureOpenAI"
flow_dict["nodes"][0]["connection"] = "azure_open_ai_connection"
with open(Path(temp_dir) / flow_name / "flow.dag.yaml", "w") as f:
dump_yaml(flow_dict, f)
run_pf_command("flow", "test", "--flow", flow_name, "--inputs", "question=hi")
self._validate_requirement(Path(temp_dir) / flow_name / "flow.dag.yaml")
def test_flow_init(self, capsys):
temp_dir = mkdtemp()
with _change_working_dir(temp_dir):
flow_name = "standard_flow"
# Init standard flow
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--type",
"standard",
)
self._validate_requirement(Path(temp_dir) / flow_name / "flow.dag.yaml")
ignore_file_path = Path(temp_dir) / flow_name / ".gitignore"
requirements_file_path = Path(temp_dir) / flow_name / "requirements.txt"
assert ignore_file_path.exists()
assert requirements_file_path.exists()
ignore_file_path.unlink()
run_pf_command("flow", "test", "--flow", flow_name, "--inputs", "text=value")
jinja_name = "input1"
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--entry",
"hello.py",
"--function",
"my_python_tool",
"--prompt-template",
f"{jinja_name}=hello.jinja2",
)
self._validate_requirement(Path(temp_dir) / flow_name / "flow.dag.yaml")
assert ignore_file_path.exists()
assert requirements_file_path.exists()
with open(Path(temp_dir) / flow_name / ".promptflow" / "flow.tools.json", "r") as f:
tools_dict = json.load(f)["code"]
assert jinja_name in tools_dict
assert len(tools_dict[jinja_name]["inputs"]) == 1
assert tools_dict[jinja_name]["inputs"]["text"]["type"] == ["string"]
assert tools_dict[jinja_name]["source"] == "hello.jinja2"
# Test prompt-template doesn't exist
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--entry",
"hello.py",
"--function",
"my_python_tool",
"--prompt-template",
f"{jinja_name}={jinja_name}.jinja2",
)
self._validate_requirement(Path(temp_dir) / flow_name / "flow.dag.yaml")
assert (Path(temp_dir) / flow_name / f"{jinja_name}.jinja2").exists()
# Test template name doesn't exist in python function
jinja_name = "mock_jinja"
with pytest.raises(UserErrorException) as ex:
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--entry",
"hello.py",
"--function",
"my_python_tool",
"--prompt-template",
f"{jinja_name}={jinja_name}.jinja2",
)
assert f"Template parameter {jinja_name} doesn't find in python function arguments." in str(ex.value)
with pytest.raises(SystemExit):
run_pf_command("flow", "init")
_, err = capsys.readouterr()
assert "pf flow init: error: the following arguments are required: --flow" in err
def test_flow_init_intent_copilot(self):
flow_path = os.path.join(FLOWS_DIR, "intent-copilot")
run_pf_command(
"flow",
"init",
"--flow",
flow_path,
"--entry",
"intent.py",
"--function",
"extract_intent",
"--prompt-template",
"chat_prompt=user_intent_zero_shot.jinja2",
)
with open(Path(flow_path) / "flow.dag.yaml", "r") as f:
flow_dict = load_yaml(f)
assert "chat_history" in flow_dict["inputs"]
assert "customer_info" in flow_dict["inputs"]
chat_prompt_node = next(filter(lambda item: item["name"] == "chat_prompt", flow_dict["nodes"]))
assert "chat_history" in chat_prompt_node["inputs"]
assert "customer_info" in chat_prompt_node["inputs"]
def test_flow_init_with_connection_and_deployment(self):
def check_connection_and_deployment(flow_folder, connection, deployment):
with open(Path(flow_folder) / "flow.dag.yaml", "r") as f:
flow_dict = load_yaml(f)
assert flow_dict["nodes"][0]["inputs"]["deployment_name"] == deployment
assert flow_dict["nodes"][0]["connection"] == connection
temp_dir = mkdtemp()
with _change_working_dir(temp_dir):
flow_name = "chat_flow"
flow_folder = Path(temp_dir) / flow_name
            # With the local connection provider configured, init a chat flow without connection and deployment.
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--type",
"chat",
)
# Assert connection files created
assert (flow_folder / "azure_openai.yaml").exists()
assert (flow_folder / "openai.yaml").exists()
            # With the local connection provider configured, init a chat flow with connection and deployment.
connection = "connection_name"
deployment = "deployment_name"
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--type",
"chat",
"--connection",
connection,
"--deployment",
deployment,
"--yes",
)
# Assert connection files created and the connection/deployment is set in flow.dag.yaml
check_connection_and_deployment(flow_folder, connection=connection, deployment=deployment)
connection_files = [flow_folder / "azure_openai.yaml", flow_folder / "openai.yaml"]
for file in connection_files:
assert file.exists()
with open(file, "r") as f:
connection_dict = load_yaml(f)
assert connection_dict["name"] == connection
shutil.rmtree(flow_folder)
target = "promptflow._sdk._pf_client.Configuration.get_connection_provider"
with mock.patch(target) as mocked:
mocked.return_value = "azureml:xx"
                # With the azure connection provider configured, init a chat flow without connection and deployment.
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--type",
"chat",
"--yes",
)
# Assert connection files not created.
assert not (flow_folder / "azure_openai.yaml").exists()
assert not (flow_folder / "openai.yaml").exists()
                # With the azure connection provider configured, init a chat flow with connection and deployment.
connection = "connection_name"
deployment = "deployment_name"
run_pf_command(
"flow",
"init",
"--flow",
flow_name,
"--type",
"chat",
"--connection",
connection,
"--deployment",
deployment,
"--yes",
)
# Assert connection files not created and the connection/deployment is set in flow.dag.yaml
check_connection_and_deployment(flow_folder, connection=connection, deployment=deployment)
assert not (flow_folder / "azure_openai.yaml").exists()
assert not (flow_folder / "openai.yaml").exists()
def test_flow_chat(self, monkeypatch, capsys):
chat_list = ["hi", "what is chat gpt?"]
def mock_input(*args, **kwargs):
if chat_list:
return chat_list.pop()
else:
raise KeyboardInterrupt()
monkeypatch.setattr("builtins.input", mock_input)
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow",
"--interactive",
)
output_path = Path(FLOWS_DIR) / "chat_flow" / ".promptflow" / "chat.output.json"
assert output_path.exists()
detail_path = Path(FLOWS_DIR) / "chat_flow" / ".promptflow" / "chat.detail.json"
assert detail_path.exists()
# Test streaming output
chat_list = ["hi", "what is chat gpt?"]
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow_with_stream_output",
"--interactive",
)
output_path = Path(FLOWS_DIR) / "chat_flow_with_stream_output" / ".promptflow" / "chat.output.json"
assert output_path.exists()
detail_path = Path(FLOWS_DIR) / "chat_flow_with_stream_output" / ".promptflow" / "chat.detail.json"
assert detail_path.exists()
chat_list = ["hi", "what is chat gpt?"]
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow_with_python_node_streaming_output",
"--interactive",
)
        output_path = (
            Path(FLOWS_DIR) / "chat_flow_with_python_node_streaming_output" / ".promptflow" / "chat.output.json"
        )
        assert output_path.exists()
        detail_path = (
            Path(FLOWS_DIR) / "chat_flow_with_python_node_streaming_output" / ".promptflow" / "chat.detail.json"
        )
        assert detail_path.exists()
# Validate terminal output
chat_list = ["hi", "what is chat gpt?"]
run_pf_command("flow", "test", "--flow", f"{FLOWS_DIR}/chat_flow", "--interactive", "--verbose")
outerr = capsys.readouterr()
# Check node output
assert "chat_node:" in outerr.out
assert "show_answer:" in outerr.out
assert "[show_answer]: print:" in outerr.out
chat_list = ["hi", "what is chat gpt?"]
with pytest.raises(SystemExit):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow_with_exception",
"--interactive",
)
outerr = capsys.readouterr()
assert "Execution failure in 'show_answer': (Exception) mock exception" in outerr.out
output_path = Path(FLOWS_DIR) / "chat_flow" / ".promptflow" / "chat.output.json"
assert output_path.exists()
detail_path = Path(FLOWS_DIR) / "chat_flow" / ".promptflow" / "chat.detail.json"
assert detail_path.exists()
with pytest.raises(SystemExit):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow_with_multi_output_invalid",
"--interactive",
)
outerr = capsys.readouterr()
assert "chat flow does not support multiple chat outputs" in outerr.out
def test_flow_test_with_default_chat_history(self):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow_with_default_history",
)
output_path = Path(FLOWS_DIR) / "chat_flow_with_default_history" / ".promptflow" / "flow.output.json"
assert output_path.exists()
detail_path = Path(FLOWS_DIR) / "chat_flow_with_default_history" / ".promptflow" / "flow.detail.json"
assert detail_path.exists()
with open(detail_path, "r") as f:
details = json.load(f)
expect_chat_history = [
{"inputs": {"question": "hi"}, "outputs": {"answer": "hi"}},
{"inputs": {"question": "who are you"}, "outputs": {"answer": "who are you"}},
]
assert details["flow_runs"][0]["inputs"]["chat_history"] == expect_chat_history
def test_flow_test_with_user_defined_chat_history(self, monkeypatch, capsys):
chat_list = ["hi", "what is chat gpt?"]
def mock_input(*args, **kwargs):
if chat_list:
return chat_list.pop()
else:
raise KeyboardInterrupt()
monkeypatch.setattr("builtins.input", mock_input)
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow_with_defined_chat_history",
"--interactive",
)
output_path = Path(FLOWS_DIR) / "chat_flow_with_defined_chat_history" / ".promptflow" / "chat.output.json"
assert output_path.exists()
detail_path = Path(FLOWS_DIR) / "chat_flow_with_defined_chat_history" / ".promptflow" / "chat.detail.json"
assert detail_path.exists()
# Test is_chat_history is set False
with pytest.raises(SystemExit):
chat_list = ["hi", "what is chat gpt?"]
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/chat_flow_without_defined_chat_history",
"--interactive",
)
outerr = capsys.readouterr()
assert "chat_history is required in the inputs of chat flow" in outerr.out
@pytest.mark.parametrize(
"extra_args,expected_err",
[
pytest.param(
[],
"Required input(s) ['key'] are missing for \"flow\".",
id="missing_required_flow_inputs",
),
pytest.param(
["--node", "print_env"],
"Required input(s) ['key'] are missing for \"print_env\".",
id="missing_required_node_inputs",
),
],
)
def test_flow_test_inputs_missing(self, capsys, caplog, extra_args: List[str], expected_err: str):
with pytest.raises(SystemExit):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/print_env_var",
"--environment-variables",
"API_BASE=${azure_open_ai_connection.api_base}",
*extra_args,
)
stdout, _ = capsys.readouterr()
assert expected_err in stdout
@pytest.mark.parametrize(
"extra_args,expected_inputs,expected_log_prefixes",
[
pytest.param(
[
"--inputs",
f"url={TARGET_URL}",
"answer=Channel",
"evidence=Url",
],
[
{"answer": "Channel", "evidence": "Url"},
{"url": TARGET_URL, "answer": "Channel", "evidence": "Url"},
],
[
"Unknown input(s) of flow: ",
"flow input(s): ",
],
id="unknown_flow_inputs",
),
pytest.param(
[
"--inputs",
f"inputs.url={TARGET_URL}",
"unknown_input=unknown_val",
"--node",
"fetch_text_content_from_url",
],
[
{"unknown_input": "unknown_val"},
{"fetch_url": TARGET_URL, "unknown_input": "unknown_val"},
],
[
"Unknown input(s) of fetch_text_content_from_url: ",
"fetch_text_content_from_url input(s): ",
],
id="unknown_inputs_node",
),
],
)
def test_flow_test_inputs_unknown(
self, caplog, extra_args: List[str], expected_inputs: List[Dict[str, str]], expected_log_prefixes: List[str]
):
logger = logging.getLogger(LOGGER_NAME)
logger.propagate = True
def validate_log(log_msg, prefix, expect_dict):
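            # The log message is "<prefix>" followed by a dict repr of the inputs; strip the prefix and parse it back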
            assert prefix in log_msg
            log_inputs = json.loads(log_msg[len(prefix) :].replace("'", '"'))
            assert expect_dict == log_inputs
with caplog.at_level(level=logging.INFO, logger=LOGGER_NAME):
run_pf_command("flow", "test", "--flow", f"{FLOWS_DIR}/web_classification", *extra_args)
for log, expected_input, expected_log_prefix in zip(caplog.records, expected_inputs, expected_log_prefixes):
validate_log(
prefix=expected_log_prefix,
log_msg=log.message,
expect_dict=expected_input,
)
def test_flow_build(self):
source = f"{FLOWS_DIR}/web_classification_with_additional_include/flow.dag.yaml"
output_path = "dist"
def get_node_settings(_flow_dag_path: Path):
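            """Load the DAG file and return the summarize_text_content node's settings (without its name)."""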
flow_dag = load_yaml(_flow_dag_path)
target_node = next(filter(lambda x: x["name"] == "summarize_text_content", flow_dag["nodes"]))
target_node.pop("name")
return target_node
try:
run_pf_command(
"flow",
"build",
"--source",
source,
"--output",
output_path,
"--format",
"docker",
"--variant",
"${summarize_text_content.variant_0}",
)
new_flow_dag_path = Path(output_path, "flow", "flow.dag.yaml")
flow_dag = load_yaml(Path(source))
assert (
get_node_settings(new_flow_dag_path)
== flow_dag["node_variants"]["summarize_text_content"]["variants"]["variant_0"]["node"]
)
assert get_node_settings(Path(source)) != get_node_settings(new_flow_dag_path)
connection_path = Path(output_path, "connections", "azure_open_ai_connection.yaml")
assert connection_path.exists()
finally:
shutil.rmtree(output_path, ignore_errors=True)
def test_flow_build_with_ua(self):
with pytest.raises(UserErrorException) as e:
run_pf_command(
"flow",
"build",
"--source",
"not_exist",
"--output",
"dist",
"--format",
"docker",
"--user-agent",
"test/1.0.0",
)
assert "not exist" in str(e.value)
@pytest.mark.parametrize(
"file_name, expected, update_item",
[
(
"azure_openai_connection.yaml",
{
"module": "promptflow.connections",
"type": "azure_open_ai",
"api_type": "azure",
"api_version": "2023-07-01-preview",
"api_key": SCRUBBED_VALUE,
"api_base": "aoai-api-endpoint",
},
("api_base", "new_value"),
),
(
"custom_connection.yaml",
{
"module": "promptflow.connections",
"type": "custom",
"configs": {"key1": "test1"},
"secrets": {"key2": SCRUBBED_VALUE},
},
("configs.key1", "new_value"),
),
(
"custom_strong_type_connection.yaml",
{
"module": "promptflow.connections",
"type": "custom",
"configs": {
"api_base": "This is my first connection.",
"promptflow.connection.custom_type": "MyFirstConnection",
"promptflow.connection.module": "my_tool_package.connections",
"promptflow.connection.package": "test-custom-tools",
"promptflow.connection.package_version": "0.0.2",
},
"secrets": {"api_key": SCRUBBED_VALUE},
},
("configs.api_base", "new_value"),
),
],
)
def test_connection_create_update(
self, install_custom_tool_pkg, file_name, expected, update_item, capfd, local_client
):
name = f"Connection_{str(uuid.uuid4())[:4]}"
run_pf_command("connection", "create", "--file", f"{CONNECTIONS_DIR}/{file_name}", "--name", f"{name}")
out, err = capfd.readouterr()
        # Compare as a subset so server-generated fields (e.g. datetime fields) are ignored
assert expected.items() <= json.loads(out).items()
# Update with --set
update_key, update_value = update_item
run_pf_command("connection", "update", "--set", f"{update_key}={update_value}", "--name", f"{name}")
out, _ = capfd.readouterr()
assert update_value in out, f"expected updated value {update_value} not in {out}"
connection = local_client.connections.get(name)
# Assert secrets are not scrubbed
assert not any(v == SCRUBBED_VALUE for v in connection._secrets.values())
def test_input_with_dict_val(self, pf):
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--file",
"./input_with_dict_val.yaml",
"--name",
run_id,
cwd=f"{RUNS_DIR}",
)
outputs = pf.runs._get_outputs(run=run_id)
assert "dict" in outputs["output"][0]
def test_visualize_ignore_space(self) -> None:
names = ["a,b,c,d", "a, b, c, d", "a, b , c, d"]
groundtruth = ["a", "b", "c", "d"]
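        # "--names" accepts a comma-separated list; whitespace around each name should be stripped before visualizing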
def mocked_visualize(*args, **kwargs):
runs = args[0]
assert runs == groundtruth
with patch.object(RunOperations, "visualize") as mock_visualize:
mock_visualize.side_effect = mocked_visualize
for name in names:
run_pf_command(
"run",
"visualize",
"--names",
name,
)
def test_pf_run_with_stream_log(self, capfd):
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/flow_with_user_output",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--column-mapping",
"key=value",
"extra=${data.url}",
"--stream",
)
out, _ = capfd.readouterr()
# For Batch run, the executor uses bulk logger to print logs, and only prints the error log of the nodes.
existing_keywords = ["execution", "execution.bulk", "WARNING", "error log"]
non_existing_keywords = ["execution.flow", "user log"]
for keyword in existing_keywords:
assert keyword in out
for keyword in non_existing_keywords:
assert keyword not in out
def test_pf_run_no_stream_log(self, capfd):
# without --stream, logs will be in the run's log file
run_pf_command(
"run",
"create",
"--flow",
f"{FLOWS_DIR}/flow_with_user_output",
"--data",
f"{DATAS_DIR}/webClassification3.jsonl",
"--column-mapping",
"key=value",
"extra=${data.url}",
)
out, _ = capfd.readouterr()
assert "user log" not in out
assert "error log" not in out
# flow logs won't stream
assert "Executing node print_val. node run id:" not in out
# executor logs won't stream
assert "Node print_val completes." not in out
def test_format_cli_exception(self, capsys):
from promptflow._sdk.operations._connection_operations import ConnectionOperations
with patch.dict(os.environ, {"PROMPTFLOW_STRUCTURE_EXCEPTION_OUTPUT": "true"}):
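            # With structured exception output enabled, the CLI writes the error as a JSON payload to stderr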
with pytest.raises(SystemExit):
run_pf_command(
"connection",
"show",
"--name",
"invalid_connection_name",
)
outerr = capsys.readouterr()
assert outerr.err
error_msg = json.loads(outerr.err)
assert error_msg["code"] == "UserError"
assert error_msg["innerError"]["innerError"]["code"] == "ConnectionNotFoundError"
def mocked_connection_get(*args, **kwargs):
raise Exception("mock exception")
with patch.object(ConnectionOperations, "get") as mock_connection_get:
mock_connection_get.side_effect = mocked_connection_get
with pytest.raises(Exception):
run_pf_command(
"connection",
"show",
"--name",
"invalid_connection_name",
)
outerr = capsys.readouterr()
assert outerr.err
error_msg = json.loads(outerr.err)
assert error_msg["code"] == "SystemError"
with pytest.raises(SystemExit):
run_pf_command(
"connection",
"show",
"--name",
"invalid_connection_name",
)
outerr = capsys.readouterr()
assert not outerr.err
def test_tool_init(self, capsys):
with tempfile.TemporaryDirectory() as temp_dir:
package_name = "package_name"
func_name = "func_name"
run_pf_command("tool", "init", "--package", package_name, "--tool", func_name, cwd=temp_dir)
package_folder = Path(temp_dir) / package_name
sys.path.append(str(package_folder.absolute()))
assert (package_folder / package_name / f"{func_name}.py").exists()
assert (package_folder / package_name / "utils.py").exists()
assert (package_folder / package_name / "__init__.py").exists()
assert (package_folder / "setup.py").exists()
assert (package_folder / "README.md").exists()
spec = importlib.util.spec_from_file_location(
f"{package_name}.utils", package_folder / package_name / "utils.py"
)
utils = importlib.util.module_from_spec(spec)
spec.loader.exec_module(utils)
assert hasattr(utils, "list_package_tools")
tools_meta = utils.list_package_tools()
assert f"{package_name}.{func_name}.{func_name}" in tools_meta
meta = tools_meta[f"{package_name}.{func_name}.{func_name}"]
assert meta["function"] == func_name
assert meta["module"] == f"{package_name}.{func_name}"
assert meta["name"] == func_name
assert meta["description"] == f"This is {func_name} tool"
assert meta["type"] == "python"
# Invalid package/tool name
invalid_package_name = "123-package-name"
invalid_tool_name = "123_tool_name"
with pytest.raises(SystemExit):
run_pf_command("tool", "init", "--package", invalid_package_name, "--tool", func_name, cwd=temp_dir)
outerr = capsys.readouterr()
assert f"The package name {invalid_package_name} is a invalid identifier." in outerr.out
with pytest.raises(SystemExit):
run_pf_command("tool", "init", "--package", package_name, "--tool", invalid_tool_name, cwd=temp_dir)
outerr = capsys.readouterr()
assert f"The tool name {invalid_tool_name} is a invalid identifier." in outerr.out
with pytest.raises(SystemExit):
run_pf_command("tool", "init", "--tool", invalid_tool_name, cwd=temp_dir)
outerr = capsys.readouterr()
assert f"The tool name {invalid_tool_name} is a invalid identifier." in outerr.out
# Test init package tool with extra info
package_name = "tool_with_extra_info"
package_folder = Path(temp_dir) / package_name
package_folder.mkdir(exist_ok=True, parents=True)
manifest_file = package_folder / "MANIFEST.in"
mock_manifest_content = "include mock/path"
with open(manifest_file, "w") as f:
f.write(mock_manifest_content)
icon_path = Path(DATAS_DIR) / "logo.jpg"
category = "test_category"
tags = {"tag1": "value1", "tag2": "value2"}
run_pf_command(
"tool",
"init",
"--package",
package_name,
"--tool",
func_name,
"--set",
f"icon={icon_path.absolute()}",
f"category={category}",
f"tags={tags}",
cwd=temp_dir,
)
with open(manifest_file, "r") as f:
content = f.read()
assert mock_manifest_content in content
assert f"include {package_name}/icons" in content
# Add a tool script with icon
tool_script_name = "tool_func_with_icon"
run_pf_command(
"tool",
"init",
"--tool",
tool_script_name,
"--set",
f"icon={icon_path.absolute()}",
f"category={category}",
f"tags={tags}",
cwd=Path(temp_dir) / package_name / package_name,
)
sys.path.append(str(package_folder.absolute()))
spec = importlib.util.spec_from_file_location(
f"{package_name}.utils", package_folder / package_name / "utils.py"
)
utils = importlib.util.module_from_spec(spec)
spec.loader.exec_module(utils)
assert hasattr(utils, "list_package_tools")
tools_meta = utils.list_package_tools()
meta = tools_meta[f"{package_name}.{func_name}.{func_name}"]
assert meta["category"] == category
assert meta["tags"] == tags
assert meta["icon"].startswith("data:image")
assert tools_meta[f"{package_name}.{tool_script_name}.{tool_script_name}"]["icon"].startswith("data:image")
# icon doesn't exist
with pytest.raises(SystemExit):
run_pf_command(
"tool",
"init",
"--package",
package_name,
"--tool",
func_name,
"--set",
"icon=invalid_icon_path",
cwd=temp_dir,
)
outerr = capsys.readouterr()
assert "Cannot find the icon path" in outerr.out
def test_tool_list(self, capsys):
# List package tools in environment
run_pf_command("tool", "list")
outerr = capsys.readouterr()
tools_dict = json.loads(outerr.out)
package_tool_name = "promptflow.tools.embedding.embedding"
assert package_tool_name in tools_dict["package"]
# List flow tools and package tools
run_pf_command("tool", "list", "--flow", f"{FLOWS_DIR}/chat_flow")
outerr = capsys.readouterr()
tools_dict = json.loads(outerr.out)
expect_flow_tools = {
"chat.jinja2": {
"type": "llm",
"inputs": {"chat_history": {"type": ["string"]}, "question": {"type": ["string"]}},
"source": "chat.jinja2",
},
"show_answer.py": {
"type": "python",
"inputs": {"chat_answer": {"type": ["string"]}},
"source": "show_answer.py",
"function": "show_answer",
},
}
assert tools_dict["code"] == expect_flow_tools
assert package_tool_name in tools_dict["package"]
# Invalid flow parameter
with pytest.raises(SystemExit):
run_pf_command("tool", "list", "--flow", "invalid_flow_folder")
outerr = capsys.readouterr()
assert "invalid_flow_folder does not exist" in outerr.out
def test_tool_validate(self):
# Test validate tool script
tool_script_path = Path(TOOL_ROOT) / "custom_llm_tool.py"
run_pf_command("tool", "validate", "--source", str(tool_script_path))
invalid_tool_script_path = Path(TOOL_ROOT) / "invalid_tool.py"
with pytest.raises(SystemExit):
run_pf_command("tool", "validate", "--source", str(invalid_tool_script_path))
# Test validate package tool
tool_script_path = Path(TOOL_ROOT) / "tool_package"
sys.path.append(str(tool_script_path.resolve()))
with patch("promptflow._sdk.operations._tool_operations.ToolOperations._is_package_tool", return_value=True):
with pytest.raises(SystemExit):
run_pf_command("tool", "validate", "--source", "tool_package")
# Test validate tool in package
with pytest.raises(SystemExit):
run_pf_command("tool", "validate", "--source", "tool_package.invalid_tool.invalid_input_settings")
def test_flow_test_with_image_input_and_output(self):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/python_tool_with_simple_image",
)
output_path = Path(FLOWS_DIR) / "python_tool_with_simple_image" / ".promptflow" / "output"
assert output_path.exists()
image_path = Path(FLOWS_DIR) / "python_tool_with_simple_image" / ".promptflow" / "intermediate"
assert image_path.exists()
def test_flow_test_with_composite_image(self):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/python_tool_with_composite_image",
)
output_path = Path(FLOWS_DIR) / "python_tool_with_composite_image" / ".promptflow" / "output"
assert output_path.exists()
image_path = Path(FLOWS_DIR) / "python_tool_with_composite_image" / ".promptflow" / "intermediate"
assert image_path.exists()
def test_run_file_with_set(self, pf) -> None:
name = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--file",
f"{RUNS_DIR}/run_with_env.yaml",
"--set",
f"name={name}",
)
# run exists
pf.runs.get(name=name)
def test_run_file_with_set_priority(self, pf) -> None:
# --name has higher priority than --set
name1 = str(uuid.uuid4())
name2 = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--file",
f"{RUNS_DIR}/run_with_env.yaml",
"--set",
f"name={name1}",
"--name",
name2,
)
        # --name takes precedence over --set, so the run should only exist under name2
        with pytest.raises(RunNotFoundError):
            pf.runs.get(name=name1)
pf.runs.get(name=name2)
def test_data_scrubbing(self):
# Prepare connection
run_pf_command(
"connection", "create", "--file", f"{CONNECTIONS_DIR}/custom_connection.yaml", "--name", "custom_connection"
)
# Test flow run
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/print_secret_flow",
)
output_path = Path(FLOWS_DIR) / "print_secret_flow" / ".promptflow" / "flow.output.json"
assert output_path.exists()
log_path = Path(FLOWS_DIR) / "print_secret_flow" / ".promptflow" / "flow.log"
with open(log_path, "r") as f:
log_content = f.read()
assert "**data_scrubbed**" in log_content
# Test node run
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/print_secret_flow",
"--node",
"print_secret",
"--inputs",
"conn=custom_connection",
"inputs.topic=atom",
)
output_path = Path(FLOWS_DIR) / "print_secret_flow" / ".promptflow" / "flow-print_secret.node.detail.json"
assert output_path.exists()
log_path = Path(FLOWS_DIR) / "print_secret_flow" / ".promptflow" / "print_secret.node.log"
with open(log_path, "r") as f:
log_content = f.read()
assert "**data_scrubbed**" in log_content
def test_cli_ua(self, pf):
# clear user agent before test
context = OperationContext().get_instance()
context.user_agent = ""
with environment_variable_overwrite(PF_USER_AGENT, ""):
with pytest.raises(SystemExit):
run_pf_command(
"run",
"show",
"--name",
"not_exist",
)
user_agent = ClientUserAgentUtil.get_user_agent()
ua_dict = parse_ua_to_dict(user_agent)
assert ua_dict.keys() == {"promptflow-sdk", "promptflow-cli"}
def test_config_set_pure_flow_directory_macro(self, capfd: pytest.CaptureFixture) -> None:
run_pf_command(
"config",
"set",
"run.output_path='${flow_directory}'",
)
out, _ = capfd.readouterr()
expected_error_message = (
"Invalid config value '${flow_directory}' for 'run.output_path': "
"Cannot specify flow directory as run output path; "
"if you want to specify run output path under flow directory, "
"please use its child folder, e.g. '${flow_directory}/.runs'."
)
assert expected_error_message in out
from promptflow._sdk._configuration import Configuration
config = Configuration.get_instance()
assert config.get_run_output_path() is None
def test_user_agent_in_cli(self):
context = OperationContext().get_instance()
context.user_agent = ""
with pytest.raises(SystemExit):
run_pf_command(
"run",
"show",
"--name",
"not_exist",
"--user-agent",
"a/1.0.0 b/2.0",
)
user_agent = ClientUserAgentUtil.get_user_agent()
ua_dict = parse_ua_to_dict(user_agent)
assert ua_dict.keys() == {"promptflow-sdk", "promptflow-cli", "a", "b"}
context.user_agent = ""
def test_node_run_telemetry(self, local_client):
from promptflow._sdk._telemetry.logging_handler import PromptFlowSDKLogHandler
def assert_node_run(*args, **kwargs):
record = args[0]
assert record.msg.startswith("pf.flow.node_test") or record.msg.startswith("pf.flows.node_test")
assert record.custom_dimensions["activity_name"] in ["pf.flow.node_test", "pf.flows.node_test"]
def assert_flow_test(*args, **kwargs):
record = args[0]
assert record.msg.startswith("pf.flow.test") or record.msg.startswith("pf.flows.test")
assert record.custom_dimensions["activity_name"] in ["pf.flow.test", "pf.flows.test"]
with tempfile.TemporaryDirectory() as temp_dir:
shutil.copytree((Path(FLOWS_DIR) / "print_env_var").resolve().as_posix(), temp_dir, dirs_exist_ok=True)
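            # Patch the telemetry handler's emit so every record can be checked for the expected activity name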
with patch.object(PromptFlowSDKLogHandler, "emit") as mock_logger:
mock_logger.side_effect = assert_node_run
run_pf_command(
"flow",
"test",
"--flow",
temp_dir,
"--inputs",
"key=API_BASE",
"--node",
"print_env",
)
with patch.object(PromptFlowSDKLogHandler, "emit") as mock_logger:
mock_logger.side_effect = assert_flow_test
run_pf_command(
"flow",
"test",
"--flow",
temp_dir,
"--inputs",
"key=API_BASE",
)
def test_run_create_with_existing_run_folder(self):
run_name = "web_classification_variant_0_20231205_120253_104100"
# clean the run if exists
from promptflow import PFClient
from promptflow._cli._utils import _try_delete_existing_run_record
pf = PFClient()
_try_delete_existing_run_record(run_name)
# assert the run doesn't exist
with pytest.raises(RunNotFoundError):
pf.runs.get(run_name)
uuid_str = str(uuid.uuid4())
run_folder = Path(RUNS_DIR) / run_name
run_pf_command(
"run",
"create",
"--source",
Path(run_folder).resolve().as_posix(),
"--set",
f"display_name={uuid_str}",
f"description={uuid_str}",
f"tags.tag1={uuid_str}",
)
# check run results
run = pf.runs.get(run_name)
assert run.display_name == uuid_str
assert run.description == uuid_str
assert run.tags["tag1"] == uuid_str
def test_cli_command_no_sub_command(self, capfd):
try:
run_pf_command(
"run",
)
# argparse will return SystemExit after running --help
except SystemExit:
pass
# will run pf run -h
out, _ = capfd.readouterr()
assert "A CLI tool to manage runs for prompt flow." in out
try:
run_pf_command("run", "-h")
# argparse will return SystemExit after running --help
except SystemExit:
pass
# will run pf run -h
out, _ = capfd.readouterr()
assert "A CLI tool to manage runs for prompt flow." in out
def test_unknown_command(self, capfd):
try:
run_pf_command(
"unknown",
)
# argparse will return SystemExit after running --help
except SystemExit:
pass
_, err = capfd.readouterr()
assert "invalid choice" in err
def test_config_set_user_agent(self) -> None:
run_pf_command(
"config",
"set",
"user_agent=test/1.0.0",
)
user_agent = setup_user_agent_to_operation_context(None)
ua_dict = parse_ua_to_dict(user_agent)
assert ua_dict.keys() == {"promptflow-sdk", "promptflow-cli", "PFCustomer_test"}
# clear user agent
run_pf_command(
"config",
"set",
"user_agent=",
)
context = OperationContext().get_instance()
context.user_agent = ""
def test_basic_flow_run_delete(self, monkeypatch, local_client, capfd) -> None:
input_list = ["y"]
def mock_input(*args, **kwargs):
if input_list:
return input_list.pop()
else:
raise KeyboardInterrupt()
monkeypatch.setattr("builtins.input", mock_input)
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--name",
run_id,
"--flow",
f"{FLOWS_DIR}/print_env_var",
"--data",
f"{DATAS_DIR}/env_var_names.jsonl",
)
out, _ = capfd.readouterr()
assert "Completed" in out
run_a = local_client.runs.get(name=run_id)
local_storage = LocalStorageOperations(run_a)
path_a = local_storage.path
assert os.path.exists(path_a)
# delete the run
run_pf_command(
"run",
"delete",
"--name",
f"{run_id}",
)
        # the run is deleted and its folder is removed
assert not os.path.exists(path_a)
def test_basic_flow_run_delete_no_confirm(self, monkeypatch, local_client, capfd) -> None:
run_id = str(uuid.uuid4())
run_pf_command(
"run",
"create",
"--name",
run_id,
"--flow",
f"{FLOWS_DIR}/print_env_var",
"--data",
f"{DATAS_DIR}/env_var_names.jsonl",
)
out, _ = capfd.readouterr()
assert "Completed" in out
run_a = local_client.runs.get(name=run_id)
local_storage = LocalStorageOperations(run_a)
path_a = local_storage.path
assert os.path.exists(path_a)
# delete the run
run_pf_command("run", "delete", "--name", f"{run_id}", "-y")
        # the run is deleted and its folder is removed
assert not os.path.exists(path_a)
def test_basic_flow_run_delete_error(self, monkeypatch) -> None:
input_list = ["y"]
def mock_input(*args, **kwargs):
if input_list:
return input_list.pop()
else:
raise KeyboardInterrupt()
monkeypatch.setattr("builtins.input", mock_input)
run_id = str(uuid.uuid4())
# delete the run
with pytest.raises(SystemExit):
run_pf_command(
"run",
"delete",
"--name",
f"{run_id}",
)
def test_experiment_hide_by_default(self, monkeypatch, capfd):
        # experiment commands will be hidden if no config is set
with pytest.raises(SystemExit):
run_pf_command(
"experiment",
"create",
"--template",
f"{EXPERIMENT_DIR}/basic-no-script-template/basic.exp.yaml",
)
@pytest.mark.usefixtures("setup_experiment_table")
def test_experiment_start(self, monkeypatch, capfd, local_client):
with mock.patch("promptflow._sdk._configuration.Configuration.is_internal_features_enabled") as mock_func:
mock_func.return_value = True
exp_name = str(uuid.uuid4())
run_pf_command(
"experiment",
"create",
"--template",
f"{EXPERIMENT_DIR}/basic-script-template/basic-script.exp.yaml",
"--name",
exp_name,
)
out, _ = capfd.readouterr()
assert exp_name in out
assert ExperimentStatus.NOT_STARTED in out
run_pf_command(
"experiment",
"start",
"--name",
exp_name,
)
out, _ = capfd.readouterr()
assert ExperimentStatus.TERMINATED in out
exp = local_client._experiments.get(name=exp_name)
assert len(exp.node_runs) == 4
assert all(len(exp.node_runs[node_name]) > 0 for node_name in exp.node_runs)
metrics = local_client.runs.get_metrics(name=exp.node_runs["eval"][0]["name"])
assert "accuracy" in metrics
def test_batch_run_timeout(self, local_client):
line_timeout_seconds = "54"
        timeout_index = 9
p = multiprocessing.Process(
target=run_batch,
            args=(local_client, line_timeout_seconds, timeout_index),
)
p.start()
p.join()
assert p.exitcode == 0
def test_batch_run_completed_within_the_required_time(self, local_client):
line_timeout_seconds = "600"
p = multiprocessing.Process(
target=run_batch,
args=(
local_client,
line_timeout_seconds,
),
)
p.start()
p.join()
assert p.exitcode == 0
def test_run_list(self, local_client):
from promptflow._sdk.entities import Run
with patch.object(Run, "_to_dict") as mock_to_dict:
mock_to_dict.side_effect = RuntimeError("mock exception")
run_pf_command(
"run",
"list",
)
def test_pf_flow_test_with_detail(self, tmpdir):
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
"url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"answer=Channel",
"evidence=Url",
"--detail",
Path(tmpdir).as_posix(),
)
        # when the `detail` parameter is specified, detail, output and log will be saved in both
        # the specified folder and ".promptflow" under the flow folder
for parent_folder in [
Path(FLOWS_DIR) / "web_classification" / ".promptflow",
Path(tmpdir),
]:
for filename in ["flow.detail.json", "flow.output.json", "flow.log"]:
path = parent_folder / filename
assert path.is_file()
def test_pf_flow_test_single_node_with_detail(self, tmpdir):
node_name = "fetch_text_content_from_url"
run_pf_command(
"flow",
"test",
"--flow",
f"{FLOWS_DIR}/web_classification",
"--inputs",
"inputs.url="
"https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h",
"--node",
node_name,
"--detail",
Path(tmpdir).as_posix(),
)
output_path = Path(FLOWS_DIR) / "web_classification" / ".promptflow" / f"flow-{node_name}.node.detail.json"
assert output_path.exists()
        # when the `detail` parameter is specified, node detail, output and log will be saved in both
        # the specified folder and ".promptflow" under the flow folder
for parent_folder in [
Path(FLOWS_DIR) / "web_classification" / ".promptflow",
Path(tmpdir),
]:
for filename in [
f"flow-{node_name}.node.detail.json",
f"flow-{node_name}.node.output.json",
f"{node_name}.node.log",
]:
path = parent_folder / filename
assert path.is_file()
| promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_cli.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_cli.py",
"repo_id": "promptflow",
"token_count": 40417
} | 54 |
import functools
import inspect
from promptflow._core.tool import STREAMING_OPTION_PARAMETER_ATTR, ToolType
from promptflow._core.tracer import TraceType, _create_trace_from_function_call
from .record_storage import RecordFileMissingException, RecordItemMissingException, RecordStorage
# recording array is a global variable to store the function names that need to be recorded
recording_array = ["fetch_text_content_from_url", "my_python_tool"]
def recording_array_extend(items):
global recording_array
recording_array.extend(items)
def recording_array_reset():
global recording_array
recording_array = ["fetch_text_content_from_url", "my_python_tool"]
def _prepare_input_dict(func, args, kwargs):
"""Prepare input dict for record storage"""
    if type(func).__name__ == "partial":
func_wo_partial = func.func
else:
func_wo_partial = func
input_dict = {}
for key in kwargs:
input_dict[key] = kwargs[key]
if type(func).__name__ == "partial":
input_dict["_args"] = func.args
for key in func.keywords:
input_dict[key] = func.keywords[key]
else:
input_dict["_args"] = []
input_dict["_func"] = func_wo_partial.__qualname__
return input_dict
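# Illustrative example (editor's addition): for functools.partial(search, top_k=3)
# called through the decorator with query="cats", the record key built above would be
# roughly {"query": "cats", "top_k": 3, "_args": (), "_func": "search"}; the names
# here are hypothetical and only meant to show the shape of the dict.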
def _replace_tool_rule(func):
"""Replace tool with the following rules."""
global recording_array
    if type(func).__name__ == "partial":
func_wo_partial = func.func
else:
func_wo_partial = func
if func_wo_partial.__qualname__.startswith("AzureOpenAI"):
return True
elif func_wo_partial.__qualname__.startswith("OpenAI"):
return True
elif func_wo_partial.__module__ == "promptflow.tools.aoai":
return True
elif func_wo_partial.__module__ == "promptflow.tools.openai_gpt4v":
return True
elif func_wo_partial.__module__ == "promptflow.tools.openai":
return True
elif func_wo_partial.__qualname__ in recording_array:
return True
else:
return False
def call_func(func, args, kwargs):
input_dict = _prepare_input_dict(func, args, kwargs)
if RecordStorage.is_replaying_mode():
return RecordStorage.get_instance().get_record(input_dict)
# Record mode will record item to record file
elif RecordStorage.is_recording_mode():
try:
# prevent recording the same item twice
obj = RecordStorage.get_instance().get_record(input_dict)
except (RecordItemMissingException, RecordFileMissingException):
# recording the item
obj = RecordStorage.get_instance().set_record(input_dict, func(*args, **kwargs))
return obj
async def call_func_async(func, args, kwargs):
input_dict = _prepare_input_dict(func, args, kwargs)
if RecordStorage.is_replaying_mode():
return RecordStorage.get_instance().get_record(input_dict)
# Record mode will record item to record file
elif RecordStorage.is_recording_mode():
try:
# prevent recording the same item twice
obj = RecordStorage.get_instance().get_record(input_dict)
except (RecordItemMissingException, RecordFileMissingException):
# recording the item
obj = RecordStorage.get_instance().set_record(input_dict, await func(*args, **kwargs))
return obj
def mock_tool(original_tool):
"""
Basically this is the original tool decorator.
    The key modification is that every func(*args, **kwargs) call is wrapped in record/replay logic:
        if replaying:
            return the recorded result
        elif recording:
            if already recorded:
                return the recorded result
            call func(*args, **kwargs) and record the result
    It needn't be such a long function, but the tool decorator should not trigger a long stack trace.
"""
def tool(
func=None,
*args_mock,
name: str = None,
description: str = None,
type: str = None,
input_settings=None,
streaming_option_parameter=None,
**kwargs_mock,
):
def tool_decorator(func):
from promptflow.exceptions import UserErrorException
def create_trace(func, args, kwargs):
return _create_trace_from_function_call(func, args=args, kwargs=kwargs, trace_type=TraceType.TOOL)
if inspect.iscoroutinefunction(func):
@functools.wraps(func)
async def decorated_tool(*args, **kwargs):
from promptflow._core.tracer import Tracer
if Tracer.active_instance() is None:
return await call_func_async(func, args, kwargs)
try:
Tracer.push(create_trace(func, args, kwargs))
output = await call_func_async(func, args, kwargs)
return Tracer.pop(output)
except Exception as e:
Tracer.pop(None, e)
raise
new_f = decorated_tool
else:
@functools.wraps(func)
def decorated_tool(*args, **kwargs):
from promptflow._core.tracer import Tracer
if Tracer.active_instance() is None:
return call_func(func, args, kwargs)
try:
Tracer.push(create_trace(func, args, kwargs))
output = call_func(func, args, kwargs)
return Tracer.pop(output)
except Exception as e:
Tracer.pop(None, e)
raise
new_f = decorated_tool
if type is not None and type not in [k.value for k in ToolType]:
raise UserErrorException(f"Tool type {type} is not supported yet.")
new_f.__original_function = func
new_f.__tool = None # This will be set when generating the tool definition.
new_f.__name = name
new_f.__description = description
new_f.__type = type
new_f.__input_settings = input_settings
new_f.__extra_info = kwargs_mock
if streaming_option_parameter and isinstance(streaming_option_parameter, str):
setattr(new_f, STREAMING_OPTION_PARAMETER_ATTR, streaming_option_parameter)
return new_f
# tool replacements.
if func is not None:
if not _replace_tool_rule(func):
return original_tool(
func,
*args_mock,
name=name,
description=description,
type=type,
input_settings=input_settings,
**kwargs_mock,
)
return tool_decorator(func)
return original_tool( # no recording for @tool(name="func_name")
func,
*args_mock,
name=name,
description=description,
type=type,
input_settings=input_settings,
**kwargs_mock,
)
return tool
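# Illustrative usage sketch (editor's addition, not part of the original utility): one
# possible way to wire the mocked decorator in place of the real one. The patch target
# below is an assumption; the repo's test fixtures may patch a different import path.
def _example_wiring():
    from unittest.mock import patch

    from promptflow._core import tool as tool_module

    with patch.object(tool_module, "tool", mock_tool(tool_module.tool)):
        recording_array_extend(["my_custom_tool"])  # opt extra functions into recording
        # Within this context, any @tool-decorated function matching _replace_tool_rule
        # is recorded to or replayed from RecordStorage, depending on its current mode.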
| promptflow/src/promptflow/tests/sdk_cli_test/recording_utilities/mock_tool.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_test/recording_utilities/mock_tool.py",
"repo_id": "promptflow",
"token_count": 3290
} | 55 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import argparse
import datetime
import importlib
import json
import os
import shutil
import sys
import tempfile
import threading
import time
from pathlib import Path
from unittest.mock import patch
import mock
import pandas as pd
import pytest
from requests import Response
from promptflow._cli._params import AppendToDictAction
from promptflow._cli._utils import (
_build_sorted_column_widths_tuple_list,
_calculate_column_widths,
list_of_dict_to_nested_dict,
)
from promptflow._constants import LAST_CHECK_TIME, PF_VERSION_CHECK
from promptflow._sdk._constants import HOME_PROMPT_FLOW_DIR, PROMPT_FLOW_HOME_DIR_ENV_VAR
from promptflow._sdk._errors import GenerateFlowToolsJsonError
from promptflow._sdk._telemetry.logging_handler import get_scrubbed_cloud_role
from promptflow._sdk._utils import (
_generate_connections_dir,
decrypt_secret_value,
encrypt_secret_value,
generate_flow_tools_json,
override_connection_config_with_environment_variable,
refresh_connections_dir,
resolve_connections_environment_variable_reference,
snake_to_camel,
)
from promptflow._utils.load_data import load_data
from promptflow._utils.retry_utils import http_retry_wrapper, retry
from promptflow._utils.version_hint_utils import check_latest_version
TEST_ROOT = Path(__file__).parent.parent.parent
CONNECTION_ROOT = TEST_ROOT / "test_configs/connections"
@pytest.mark.unittest
class TestUtils:
def test_encrypt_decrypt_value(self):
test_value = "test"
encrypted = encrypt_secret_value(test_value)
assert decrypt_secret_value("mock", encrypted) == test_value
def test_snake_to_camel(self):
assert snake_to_camel("test_snake_case") == "TestSnakeCase"
assert snake_to_camel("TestSnakeCase") == "TestSnakeCase"
def test_sqlite_retry(self, capfd) -> None:
from sqlalchemy.exc import OperationalError
from promptflow._sdk._orm.retry import sqlite_retry
@sqlite_retry
def mock_sqlite_op() -> None:
print("sqlite op...")
raise OperationalError("statement", "params", "orig")
# it will finally raise an OperationalError
with pytest.raises(OperationalError):
mock_sqlite_op()
        # assert the retry count from stdout
out, _ = capfd.readouterr()
assert out.count("sqlite op...") == 3
def test_resolve_connections_environment_variable_reference(self):
connections = {
"test_connection": {
"type": "AzureOpenAIConnection",
"value": {
"api_key": "${env:AZURE_OPENAI.API_KEY}",
"api_base": "${env:AZURE_OPENAI_API_BASE}",
},
},
"test_custom_connection": {
"type": "CustomConnection",
"value": {"key": "${env:CUSTOM_KEY}", "key2": "value2"},
},
}
with mock.patch.dict(
os.environ, {"AZURE_OPENAI.API_KEY": "KEY", "AZURE_OPENAI_API_BASE": "BASE", "CUSTOM_KEY": "CUSTOM_VALUE"}
):
resolve_connections_environment_variable_reference(connections)
assert connections["test_connection"]["value"]["api_key"] == "KEY"
assert connections["test_connection"]["value"]["api_base"] == "BASE"
assert connections["test_custom_connection"]["value"]["key"] == "CUSTOM_VALUE"
# test bad cases
connections = {
"test_connection": {
"type": "AzureOpenAIConnection",
"value": {"none_value": None, "integer_value": 1, "float_value": 1.0, "dict_value": {}},
},
}
resolve_connections_environment_variable_reference(connections)
assert connections["test_connection"]["value"] == {
"none_value": None,
"integer_value": 1,
"float_value": 1.0,
"dict_value": {},
}
def test_override_connection_config_with_environment_variable(self):
connections = {
"test_connection": {
"type": "AzureOpenAIConnection",
"value": {
"api_key": "KEY",
"api_base": "https://gpt-test-eus.openai.azure.com/",
},
},
"test_custom_connection": {
"type": "CustomConnection",
"value": {"key": "value1", "key2": "value2"},
},
}
with mock.patch.dict(
os.environ, {"TEST_CONNECTION_API_BASE": "BASE", "TEST_CUSTOM_CONNECTION_KEY": "CUSTOM_VALUE"}
):
override_connection_config_with_environment_variable(connections)
assert connections["test_connection"]["value"]["api_key"] == "KEY"
assert connections["test_connection"]["value"]["api_base"] == "BASE"
assert connections["test_custom_connection"]["value"]["key"] == "CUSTOM_VALUE"
assert connections["test_custom_connection"]["value"]["key2"] == "value2"
def test_generate_flow_tools_json(self) -> None:
# call twice to ensure system path won't be affected during generation
for _ in range(2):
flow_src_path = "./tests/test_configs/flows/flow_with_sys_inject"
with tempfile.TemporaryDirectory() as temp_dir:
flow_dst_path = os.path.join(temp_dir, "flow_with_sys_inject")
shutil.copytree(flow_src_path, flow_dst_path)
flow_tools_json = generate_flow_tools_json(flow_dst_path, dump=False)
groundtruth = {
"hello.py": {
"type": "python",
"inputs": {
"input1": {
"type": [
"string",
],
},
},
"source": "hello.py",
"function": "my_python_tool",
}
}
assert flow_tools_json["code"] == groundtruth
def test_generate_flow_tools_json_expecting_fail(self) -> None:
flow_path = "./tests/test_configs/flows/flow_with_invalid_import"
with pytest.raises(GenerateFlowToolsJsonError) as e:
generate_flow_tools_json(flow_path, dump=False)
assert "Generate meta failed, detail error(s):" in str(e.value)
# raise_error = False
flow_tools_json = generate_flow_tools_json(flow_path, dump=False, raise_error=False)
assert len(flow_tools_json["code"]) == 0
@pytest.mark.parametrize(
"python_path, env_hash",
[
("D:\\Tools\\Anaconda3\\envs\\pf\\python.exe", ("a9620c3cdb7ccf3ec9f4005e5b19c12d1e1fef80")),
("/Users/fake_user/anaconda3/envs/pf/bin/python3.10", ("e3f33eadd9be376014eb75a688930930ca83c056")),
],
)
def test_generate_connections_dir(self, python_path, env_hash):
expected_result = (HOME_PROMPT_FLOW_DIR / "envs" / env_hash / "connections").resolve()
with patch.object(sys, "executable", python_path):
result = _generate_connections_dir()
assert result == expected_result
def test_refresh_connections_dir(self):
from promptflow._core.tools_manager import collect_package_tools_and_connections
tools, specs, templates = collect_package_tools_and_connections()
refresh_connections_dir(specs, templates)
conn_dir = _generate_connections_dir()
assert len(os.listdir(conn_dir)) > 0, "No files were generated"
@pytest.mark.parametrize("concurrent_count", [1, 2, 4, 8])
def test_concurrent_execution_of_refresh_connections_dir(self, concurrent_count):
threads = []
# Create and start threads
for _ in range(concurrent_count):
thread = threading.Thread(
target=lambda: refresh_connections_dir(connection_spec_files=[], connection_template_yamls=[])
)
thread.start()
threads.append(thread)
for thread in threads:
thread.join()
def test_concurrent_hint_for_update(self):
def mock_check_latest_version():
time.sleep(5)
check_latest_version()
with patch("promptflow._utils.version_hint_utils.datetime") as mock_datetime, patch(
"promptflow._utils.version_hint_utils.check_latest_version", side_effect=mock_check_latest_version
):
from promptflow._sdk._telemetry import monitor_operation
class HintForUpdate:
@monitor_operation(activity_name="pf.flows.test")
def hint_func(self):
return
current_time = datetime.datetime.now()
mock_datetime.datetime.now.return_value = current_time
mock_datetime.datetime.strptime.return_value = current_time - datetime.timedelta(days=8)
mock_datetime.timedelta.return_value = datetime.timedelta(days=7)
HintForUpdate().hint_func()
assert Path(HOME_PROMPT_FLOW_DIR / PF_VERSION_CHECK).exists()
with open(HOME_PROMPT_FLOW_DIR / PF_VERSION_CHECK, "r") as f:
cached_versions = json.load(f)
            # since mock_check_latest_version runs in a daemon thread, it exits when the main thread completes,
            # so LAST_CHECK_TIME won't be updated (the mock sleeps 5s before checking)
assert LAST_CHECK_TIME not in cached_versions or cached_versions[LAST_CHECK_TIME] != str(current_time)
@pytest.mark.parametrize(
"data_path",
[
"./tests/test_configs/datas/load_data_cases/colors.csv",
"./tests/test_configs/datas/load_data_cases/colors.json",
"./tests/test_configs/datas/load_data_cases/colors.jsonl",
"./tests/test_configs/datas/load_data_cases/colors.tsv",
"./tests/test_configs/datas/load_data_cases/colors.parquet",
],
)
def test_load_data(self, data_path: str) -> None:
        # for csv and tsv formats, all columns will be strings;
        # for the rest, integers will be int and floats will be float
is_string = "csv" in data_path or "tsv" in data_path
df = load_data(data_path)
assert len(df) == 3
assert df[0]["name"] == "Red"
assert isinstance(df[0]["id_text"], str)
assert df[0]["id_text"] == "1.0"
if is_string:
assert isinstance(df[0]["id_int"], str)
assert df[0]["id_int"] == "1"
assert isinstance(df[0]["id_float"], str)
assert df[0]["id_float"] == "1.0"
else:
assert isinstance(df[0]["id_int"], int)
assert df[0]["id_int"] == 1
assert isinstance(df[0]["id_float"], float)
assert df[0]["id_float"] == 1.0
@pytest.mark.parametrize(
"data_path",
[
"./tests/test_configs/datas/load_data_cases/10k.jsonl",
"./tests/test_configs/datas/load_data_cases/10k",
],
)
def test_load_10k_data(self, data_path: str) -> None:
df = load_data(data_path)
assert len(df) == 10000
# specify max_rows_count
max_rows_count = 5000
head_rows = load_data(data_path, max_rows_count=max_rows_count)
assert len(head_rows) == max_rows_count
assert head_rows == df[:max_rows_count]
@pytest.mark.parametrize(
"script_name, expected_result",
[
("pfs", "pfs"),
("pfutil.py", "pfutil.py"),
("pf", "pf"),
("pfazure", "pfazure"),
("pf.exe", "pf.exe"),
("pfazure.exe", "pfazure.exe"),
("app.py", "app.py"),
("python -m unittest", "python -m unittest"),
("pytest", "pytest"),
("gunicorn", "gunicorn"),
("ipykernel_launcher.py", "ipykernel_launcher.py"),
("jupyter-notebook", "jupyter-notebook"),
("jupyter-lab", "jupyter-lab"),
("python", "python"),
("Unknown Application", "Unknown Application"),
("unknown_script.py", "***.py"),
("path/to/unknown_script.py", "***.py"),
(r"path\to\unknown_script.py", "***.py"),
('invalid_chars_\\/:*?"<>|', "***"),
],
)
def test_get_scrubbed_cloud_role(self, script_name, expected_result):
with mock.patch("sys.argv", [script_name]):
assert get_scrubbed_cloud_role() == expected_result
def test_configure_pf_home_dir(self, tmpdir) -> None:
from promptflow._sdk import _constants
custom_pf_home_dir_path = Path(tmpdir / ".promptflow").resolve()
assert not custom_pf_home_dir_path.exists()
with patch.dict(os.environ, {PROMPT_FLOW_HOME_DIR_ENV_VAR: custom_pf_home_dir_path.as_posix()}):
importlib.reload(_constants)
assert _constants.HOME_PROMPT_FLOW_DIR.as_posix() == custom_pf_home_dir_path.as_posix()
assert _constants.HOME_PROMPT_FLOW_DIR.is_dir()
importlib.reload(_constants)
def test_configure_pf_home_dir_with_invalid_path(self) -> None:
from promptflow._sdk import _constants
invalid_path = "/invalid:path"
with patch.dict(os.environ, {PROMPT_FLOW_HOME_DIR_ENV_VAR: invalid_path}):
assert os.getenv(PROMPT_FLOW_HOME_DIR_ENV_VAR) == invalid_path
importlib.reload(_constants)
assert _constants.HOME_PROMPT_FLOW_DIR.as_posix() == (Path.home() / ".promptflow").resolve().as_posix()
importlib.reload(_constants)
@pytest.mark.unittest
class TestCLIUtils:
def test_list_of_dict_to_nested_dict(self):
test_list = [{"node1.connection": "a"}, {"node2.deploy_name": "b"}]
result = list_of_dict_to_nested_dict(test_list)
assert result == {"node1": {"connection": "a"}, "node2": {"deploy_name": "b"}}
test_list = [{"node1.connection": "a"}, {"node1.deploy_name": "b"}]
result = list_of_dict_to_nested_dict(test_list)
assert result == {"node1": {"connection": "a", "deploy_name": "b"}}
def test_append_to_dict_action(self):
parser = argparse.ArgumentParser(prog="test_dict_action")
parser.add_argument("--dict", action=AppendToDictAction, nargs="+")
args = ["--dict", "key1=val1", "'key2=val2'", '"key3=val3"', "key4='val4'", "key5=\"val5'"]
args = parser.parse_args(args)
expect_dict = {
"key1": "val1",
"key2": "val2",
"key3": "val3",
"key4": "val4",
"key5": "\"val5'",
}
assert args.dict[0] == expect_dict
def test_build_sorted_column_widths_tuple_list(self) -> None:
columns = ["col1", "col2", "col3"]
values1 = {"col1": 1, "col2": 4, "col3": 3}
values2 = {"col1": 3, "col2": 3, "col3": 1}
margins = {"col1": 1, "col2": 2, "col3": 2}
# sort by (max(values1, values2) + margins)
res = _build_sorted_column_widths_tuple_list(columns, values1, values2, margins)
assert res == [("col2", 6), ("col3", 5), ("col1", 4)]
def test_calculate_column_widths(self) -> None:
data = [
{
"inputs.url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g",
"inputs.answer": "Channel",
"inputs.evidence": "Url",
"outputs.category": "Channel",
"outputs.evidence": "URL",
},
{
"inputs.url": "https://arxiv.org/abs/2307.04767",
"inputs.answer": "Academic",
"inputs.evidence": "Text content",
"outputs.category": "Academic",
"outputs.evidence": "Text content",
},
{
"inputs.url": "https://play.google.com/store/apps/details?id=com.twitter.android",
"inputs.answer": "App",
"inputs.evidence": "Both",
"outputs.category": "App",
"outputs.evidence": "Both",
},
]
df = pd.DataFrame(data)
terminal_width = 120
res = _calculate_column_widths(df, terminal_width)
assert res == [4, 23, 13, 15, 15, 15]
def test_calculate_column_widths_edge_case(self) -> None:
nan = float("nan")
# test case comes from examples/flow/evaluation/eval-qna-non-rag
data = [
{
"inputs.groundtruth": "The Alpine Explorer Tent has the highest rainfly waterproof rating at 3000m",
"inputs.answer": "There are various tents available in the market that offer different levels of waterproofing. However, one tent that is often highly regarded for its waterproofing capabilities is the MSR Hubba Hubba NX tent. It features a durable rainfly and a bathtub-style floor construction, both of which contribute to its excellent water resistance. It is always recommended to read product specifications and customer reviews to ensure you find a tent that meets your specific waterproofing requirements.", # noqa: E501
"inputs.context": "{${data.context}}",
"inputs.question": "Which tent is the most waterproof?",
"inputs.metrics": "gpt_groundedness,f1_score",
"inputs.line_number": 0,
"inputs.ground_truth": "The Alpine Explorer Tent has the highest rainfly waterproof rating at 3000m",
"outputs.line_number": 0,
"outputs.ada_similarity": nan,
"outputs.f1_score": 0.049999999999999996,
"outputs.gpt_coherence": nan,
"outputs.gpt_fluency": nan,
"outputs.gpt_groundedness": 3.0,
"outputs.gpt_relevance": nan,
"outputs.gpt_similarity": nan,
},
{
"inputs.groundtruth": "The Adventure Dining Table has a higher weight capacity than all of the other camping tables mentioned", # noqa: E501
"inputs.answer": "There are various camping tables available that can hold different amounts of weight. Some heavy-duty camping tables can hold up to 300 pounds or more, while others may have lower weight capacities. It's important to check the specifications of each table before purchasing to ensure it can support the weight you require.", # noqa: E501
"inputs.context": "{${data.context}}",
"inputs.question": "Which tent is the most waterproof?",
"inputs.metrics": "gpt_groundedness,f1_score",
"inputs.ground_truth": "The Alpine Explorer Tent has the highest rainfly waterproof rating at 3000m",
"outputs.line_number": 1,
"outputs.ada_similarity": nan,
"outputs.f1_score": 0.0,
"outputs.gpt_coherence": nan,
"outputs.gpt_fluency": nan,
"outputs.gpt_groundedness": 3.0,
"outputs.gpt_relevance": nan,
"outputs.gpt_similarity": nan,
},
]
df = pd.DataFrame(data)
terminal_width = 74 # GitHub Actions scenario
res = _calculate_column_widths(df, terminal_width)
        # the column width should be at least 1 to avoid a tabulate error
assert res == [4, 1, 13, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
@pytest.mark.unittest
class TestRetryUtils:
def test_retry(self):
counter = 0
class A:
def mock_f(self):
return 1
class B(A):
@retry(Exception, tries=2, delay=1, backoff=1)
def mock_f(self):
nonlocal counter
counter += 1
raise Exception("mock exception")
with pytest.raises(Exception):
B().mock_f()
assert counter == 2
def test_http_retry(self):
counter = 0
def mock_http_request():
nonlocal counter
counter += 1
resp = Response()
resp.status_code = 429
return resp
http_retry_wrapper(mock_http_request, tries=2, delay=1, backoff=1)()
assert counter == 2
| promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_utils.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_utils.py",
"repo_id": "promptflow",
"token_count": 9612
} | 56 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: my_azure_open_ai_connection
type: azure_open_ai # snake case
api_key: "<to-be-replaced>"
api_base: "aoai-api-endpoint"
api_type: "azure"
api_version: "2023-07-01-preview"
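# Illustrative usage (editor's addition): a connection like this is typically registered with
# a command along the lines of `pf connection create -f azure_openai_connection.yaml`;
# the exact CLI flags are an assumption, and the placeholder api_key must be replaced first.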
| promptflow/src/promptflow/tests/test_configs/connections/azure_openai_connection.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/connections/azure_openai_connection.yaml",
"repo_id": "promptflow",
"token_count": 117
} | 57 |
{"key": "API_BASE", "extra_key": "EXTRA_VALUE"}
| promptflow/src/promptflow/tests/test_configs/datas/env_var_names.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/datas/env_var_names.jsonl",
"repo_id": "promptflow",
"token_count": 21
} | 58 |
{"value": 0}
{"value": 1}
{"value": 2}
{"value": 3}
{"value": 4}
{"value": 5}
{"value": 6}
{"value": 7}
{"value": 8}
{"value": 9}
{"value": 10}
{"value": 11}
{"value": 12}
{"value": 13}
{"value": 14}
{"value": 15}
{"value": 16}
{"value": 17}
{"value": 18}
{"value": 19}
| promptflow/src/promptflow/tests/test_configs/datas/numbers.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/datas/numbers.jsonl",
"repo_id": "promptflow",
"token_count": 120
} | 59 |
path: ./entry.py
entry: my_flow | promptflow/src/promptflow/tests/test_configs/eager_flows/simple_with_yaml/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/eager_flows/simple_with_yaml/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 12
} | 60 |
from promptflow import tool
@tool
def my_python_tool():
print("Avtivate")
return 'Executing...'
| promptflow/src/promptflow/tests/test_configs/flows/activate_with_no_inputs/node_b.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/activate_with_no_inputs/node_b.py",
"repo_id": "promptflow",
"token_count": 39
} | 61 |
model: gpt-4-1106-preview
instructions: You are a helpful assistant.
tools:
- type: code_interpreter
- type: function
source:
type: code
path: get_stock_eod_price.py
tool_type: python
| promptflow/src/promptflow/tests/test_configs/flows/assistant-with-file/assistant_definition.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/assistant-with-file/assistant_definition.yaml",
"repo_id": "promptflow",
"token_count": 85
} | 62 |
import json
import os
import openai
from openai.version import VERSION as OPENAI_VERSION
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection
from promptflow.tools.common import render_jinja_template, parse_chat
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
def get_client(connection: AzureOpenAIConnection):
api_key = connection.api_key
conn = dict(
api_key=connection.api_key,
)
if api_key.startswith("sk-"):
from openai import OpenAI as Client
else:
from openai import AzureOpenAI as Client
conn.update(
azure_endpoint=connection.api_base,
api_version=connection.api_version,
)
return Client(**conn)
def to_bool(value) -> bool:
return str(value).lower() == "true"
@tool
def my_python_tool(
prompt: str,
# for AOAI, deployment name is customized by user, not model name.
deployment_name: str,
suffix: str = None,
max_tokens: int = 120,
temperature: float = 1.0,
top_p: float = 1.0,
n: int = 1,
logprobs: int = None,
echo: bool = False,
stop: list = None,
presence_penalty: float = 0,
frequency_penalty: float = 0,
best_of: int = 1,
logit_bias: dict = {},
user: str = "",
connection: AzureOpenAIConnection = None,
**kwargs,
) -> str:
# TODO: remove below type conversion after client can pass json rather than string.
echo = to_bool(echo)
# Assert environment variable resolved
assert os.environ["API_TYPE"] == connection.api_type
if OPENAI_VERSION.startswith("0."):
response = openai.Completion.create(
prompt=prompt,
engine=deployment_name,
# empty string suffix should be treated as None.
suffix=suffix if suffix else None,
max_tokens=int(max_tokens),
temperature=float(temperature),
top_p=float(top_p),
n=int(n),
logprobs=int(logprobs) if logprobs else None,
echo=echo,
# fix bug "[] is not valid under any of the given schemas-'stop'"
stop=stop if stop else None,
presence_penalty=float(presence_penalty),
frequency_penalty=float(frequency_penalty),
best_of=int(best_of),
# Logit bias must be a dict if we passed it to openai api.
logit_bias=logit_bias if logit_bias else {},
user=user,
request_timeout=30,
**dict(connection),
)
return response.choices[0].text
else:
chat_str = render_jinja_template(prompt, trim_blocks=True, keep_trailing_newline=True, **kwargs)
messages = parse_chat(chat_str)
response = get_client(connection).chat.completions.create(
messages=messages,
model=deployment_name,
max_tokens=int(max_tokens),
temperature=float(temperature),
top_p=float(top_p),
n=int(n),
# fix bug "[] is not valid under any of the given schemas-'stop'"
stop=stop if stop else None,
presence_penalty=float(presence_penalty),
frequency_penalty=float(frequency_penalty),
# Logit bias must be a dict if we passed it to openai api.
logit_bias=logit_bias if logit_bias else {},
user=user
)
return response.choices[0].message.content
| promptflow/src/promptflow/tests/test_configs/flows/basic-with-connection/hello.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/basic-with-connection/hello.py",
"repo_id": "promptflow",
"token_count": 1552
} | 63 |
system:
You are a helpful assistant.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}} | promptflow/src/promptflow/tests/test_configs/flows/chat_flow/chat.jinja2/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/chat_flow/chat.jinja2",
"repo_id": "promptflow",
"token_count": 61
} | 64 |
[
{
"line_number": 0,
"variant_id": "variant_0",
"groundtruth": "App",
"prediction": "App"
},
{
"line_number": 1,
"variant_id": "variant_0",
"groundtruth": "Pdf",
"prediction": "PDF"
},
{
"line_number": 2,
"variant_id": "variant_0",
"groundtruth": "App",
"prediction": "Pdf"
}
]
| promptflow/src/promptflow/tests/test_configs/flows/classification_accuracy_evaluation/samples.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/classification_accuracy_evaluation/samples.json",
"repo_id": "promptflow",
"token_count": 171
} | 65 |
from promptflow import tool
@tool
def retriever_summary(summary) -> str:
print(f"Summary: {summary}")
return "Execute incident info extractor"
| promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_activate/retriever_summary.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_activate/retriever_summary.py",
"repo_id": "promptflow",
"token_count": 49
} | 66 |
import os
from promptflow import tool
from promptflow.connections import CustomConnection
@tool
def get_env_var(key: str, connection: CustomConnection):
# get from env var
return {"value": os.environ.get(key)}
| promptflow/src/promptflow/tests/test_configs/flows/custom_connection_flow/print_env.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/custom_connection_flow/print_env.py",
"repo_id": "promptflow",
"token_count": 67
} | 67 |
{"image": {"data:image/png;path":"logo.jpg"}}
{"image": {"data:image/png;path":"logo_2.png"}} | promptflow/src/promptflow/tests/test_configs/flows/eval_flow_with_simple_image/inputs.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/eval_flow_with_simple_image/inputs.jsonl",
"repo_id": "promptflow",
"token_count": 39
} | 68 |
inputs:
text:
type: string
default: Hello!
outputs:
out:
type: string
reference: ${my_first_tool.output}
nodes:
- name: my_first_tool
type: python
source:
type: package
tool: my_tool_package.tools.my_tool_1.my_tool
inputs:
connection: custom_connection_3
input_text: ${inputs.text}
| promptflow/src/promptflow/tests/test_configs/flows/flow_with_package_tool_with_custom_connection/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_package_tool_with_custom_connection/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 134
} | 69 |
from time import sleep
from promptflow import tool, trace
@trace
def is_valid_name(name):
sleep(0.5)
return len(name) > 0
@trace
def get_user_name(user_id):
sleep(0.5)
user_name = f"User {user_id}"
if not is_valid_name(user_name):
raise ValueError(f"Invalid user name: {user_name}")
return user_name
@trace
def format_greeting(user_name):
sleep(0.5)
return f"Hello, {user_name}!"
@tool
def greetings(user_id):
user_name = get_user_name(user_id)
greeting = format_greeting(user_name)
print(greeting)
return {"greeting": greeting}
| promptflow/src/promptflow/tests/test_configs/flows/flow_with_trace/greetings.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_trace/greetings.py",
"repo_id": "promptflow",
"token_count": 249
} | 70 |
{
"package": {},
"code": {
"chat_prompt": {
"type": "prompt",
"inputs": {
"customer_info": {
"type": [
"string"
]
},
"chat_history": {
"type": [
"string"
]
}
},
"source": "user_intent_zero_shot.jinja2"
},
"extract_intent_tool.py": {
"type": "python",
"inputs": {
"chat_prompt": {
"type": [
"string"
]
},
"connection": {
"type": [
"CustomConnection"
]
}
},
"function": "extract_intent_tool",
"source": "extract_intent_tool.py"
}
}
} | promptflow/src/promptflow/tests/test_configs/flows/intent-copilot/.promptflow/flow.tools.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/intent-copilot/.promptflow/flow.tools.json",
"repo_id": "promptflow",
"token_count": 695
} | 71 |
{{prompt}} | promptflow/src/promptflow/tests/test_configs/flows/llm_tool_with_duplicated_inputs/prompt_with_duplicated_inputs.jinja2/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/llm_tool_with_duplicated_inputs/prompt_with_duplicated_inputs.jinja2",
"repo_id": "promptflow",
"token_count": 4
} | 72 |
{
"__pf__.nodes.my_python_tool.completed": 3,
"__pf__.nodes.my_python_tool_with_failed_line.completed": 2,
"__pf__.nodes.my_python_tool_with_failed_line.failed": 1,
"__pf__.lines.completed": 2,
"__pf__.lines.failed": 1
} | promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/expected_status_summary.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/expected_status_summary.json",
"repo_id": "promptflow",
"token_count": 118
} | 73 |
from promptflow import tool
@tool
def my_python_tool(input: str) -> str:
yield "Echo: "
for word in input.split():
yield word + " " | promptflow/src/promptflow/tests/test_configs/flows/python_stream_tools/echo_input.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/python_stream_tools/echo_input.py",
"repo_id": "promptflow",
"token_count": 58
} | 74 |
{"image_2": {"data:image/png;path":"logo.jpg"}}
{"image_2": {"data:image/png;path":"logo_2.png"}}
{"image_2": {"data:image/png;path":"logo_2.png"}}
{"image_2": {"data:image/png;path":"logo_2.png"}}
{"image_2": {"data:image/png;path":"logo_2.png"}} | promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image_with_default/inputs.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image_with_default/inputs.jsonl",
"repo_id": "promptflow",
"token_count": 112
} | 75 |
[
{
"text": "text_1"
},
{
"text": "text_2"
},
{
"text": "text_3"
},
{
"text": "text_4"
}
] | promptflow/src/promptflow/tests/test_configs/flows/script_with___file__/samples.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/script_with___file__/samples.json",
"repo_id": "promptflow",
"token_count": 75
} | 76 |
from promptflow import tool
@tool
def echo(message: str):
"""This tool is used to echo the message back.
:param message: The message to echo.
:type message: str
"""
return message
| promptflow/src/promptflow/tests/test_configs/flows/tool_with_assistant_definition/echo.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/tool_with_assistant_definition/echo.py",
"repo_id": "promptflow",
"token_count": 71
} | 77 |
interactions:
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location":
"eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic",
"tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}'
headers:
cache-control:
- no-cache
content-length:
- '3630'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.022'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false
response:
body:
string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}]}'
headers:
cache-control:
- no-cache
content-length:
- '1372'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.283'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceworkingdirectory
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceworkingdirectory",
"name": "workspaceworkingdirectory", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
false, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureFile", "accountName": "fake_account_name",
"fileShareName": "fake-file-share-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "None"}, "systemData": {"createdAt":
"2023-04-08T02:53:06.6001169+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.2885525+00:00",
"lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType":
"Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1161'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.129'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceworkingdirectory/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.141'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:22 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- inherit
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/LocalUpload?restype=directory
response:
body:
string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>ResourceAlreadyExists</Code><Message>The
specified resource already exists.\nRequestId:7fae0d6d-401a-0116-3904-450556000000\nTime:2024-01-12T03:09:23.7942671Z</Message></Error>"
headers:
content-length:
- '228'
content-type:
- application/xml
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-error-code:
- ResourceAlreadyExists
x-ms-version:
- '2023-08-03'
status:
code: 409
message: The specified resource already exists.
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:25 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- inherit
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users?restype=directory
response:
body:
string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>ResourceAlreadyExists</Code><Message>The
specified resource already exists.\nRequestId:6c56b75e-d01a-00ef-5804-45f879000000\nTime:2024-01-12T03:09:26.0483003Z</Message></Error>"
headers:
content-length:
- '228'
content-type:
- application/xml
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-error-code:
- ResourceAlreadyExists
x-ms-version:
- '2023-08-03'
status:
code: 409
message: The specified resource already exists.
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:26 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- inherit
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users%2Funknown_user?restype=directory
response:
body:
string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>ResourceAlreadyExists</Code><Message>The
specified resource already exists.\nRequestId:c96f52da-801a-00e2-4404-4530ad000000\nTime:2024-01-12T03:09:27.0734853Z</Message></Error>"
headers:
content-length:
- '228'
content-type:
- application/xml
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-error-code:
- ResourceAlreadyExists
x-ms-version:
- '2023-08-03'
status:
code: 409
message: The specified resource already exists.
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:27 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- inherit
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users%2Funknown_user%2Fpromptflow?restype=directory
response:
body:
string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>ResourceAlreadyExists</Code><Message>The
specified resource already exists.\nRequestId:7d1e602f-401a-00d2-8004-458e62000000\nTime:2024-01-12T03:09:28.1090476Z</Message></Error>"
headers:
content-length:
- '228'
content-type:
- application/xml
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-error-code:
- ResourceAlreadyExists
x-ms-version:
- '2023-08-03'
status:
code: 409
message: The specified resource already exists.
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:28 GMT
x-ms-version:
- '2023-08-03'
method: GET
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users%2Funknown_user%2Fpromptflow%2Fflow_name?restype=directory
response:
body:
string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>ResourceNotFound</Code><Message>The
specified resource does not exist.\nRequestId:a23d675d-e01a-00e4-2b04-450312000000\nTime:2024-01-12T03:09:29.1076941Z</Message></Error>"
headers:
content-length:
- '223'
content-type:
- application/xml
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-error-code:
- ResourceNotFound
x-ms-version:
- '2023-08-03'
status:
code: 404
message: The specified resource does not exist.
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:29 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- inherit
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users%2Funknown_user%2Fpromptflow%2Fflow_name?restype=directory
response:
body:
string: ''
headers:
content-length:
- '0'
last-modified:
- Fri, 12 Jan 2024 03:09:30 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-attributes:
- Directory
x-ms-file-change-time:
- '2024-01-12T03:09:30.2021517Z'
x-ms-file-creation-time:
- '2024-01-12T03:09:30.2021517Z'
x-ms-file-id:
- '13835074620971024384'
x-ms-file-last-write-time:
- '2024-01-12T03:09:30.2021517Z'
x-ms-file-parent-id:
- '10088082484072808448'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:30 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- inherit
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users%2Funknown_user%2Fpromptflow%2Fflow_name%2F__pycache__?restype=directory
response:
body:
string: ''
headers:
content-length:
- '0'
last-modified:
- Fri, 12 Jan 2024 03:09:31 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-attributes:
- Directory
x-ms-file-change-time:
- '2024-01-12T03:09:31.2276463Z'
x-ms-file-creation-time:
- '2024-01-12T03:09:31.2276463Z'
x-ms-file-id:
- '13835144989715202048'
x-ms-file-last-write-time:
- '2024-01-12T03:09:31.2276463Z'
x-ms-file-parent-id:
- '13835074620971024384'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-content-length:
- '14'
x-ms-date:
- Fri, 12 Jan 2024 03:09:31 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- Inherit
x-ms-type:
- file
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users/unknown_user/promptflow/flow_name/.gitattributes
response:
body:
string: ''
headers:
content-length:
- '0'
last-modified:
- Fri, 12 Jan 2024 03:09:32 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-attributes:
- Archive
x-ms-file-change-time:
- '2024-01-12T03:09:32.2511509Z'
x-ms-file-creation-time:
- '2024-01-12T03:09:32.2511509Z'
x-ms-file-id:
- '13835109805343113216'
x-ms-file-last-write-time:
- '2024-01-12T03:09:32.2511509Z'
x-ms-file-parent-id:
- '13835074620971024384'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: '* text eol=lf
'
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '14'
Content-MD5:
- nYmkCopuDuFj82431amzZw==
Content-Type:
- application/octet-stream
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:32 GMT
x-ms-range:
- bytes=0-13
x-ms-version:
- '2023-08-03'
x-ms-write:
- update
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users/unknown_user/promptflow/flow_name/.gitattributes?comp=range
response:
body:
string: ''
headers:
content-length:
- '0'
content-md5:
- nYmkCopuDuFj82431amzZw==
last-modified:
- Fri, 12 Jan 2024 03:09:33 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-last-write-time:
- '2024-01-12T03:09:33.2915784Z'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-content-length:
- '250'
x-ms-date:
- Fri, 12 Jan 2024 03:09:33 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- Inherit
x-ms-type:
- file
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users/unknown_user/promptflow/flow_name/flow.dag.yaml
response:
body:
string: ''
headers:
content-length:
- '0'
last-modified:
- Fri, 12 Jan 2024 03:09:34 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-attributes:
- Archive
x-ms-file-change-time:
- '2024-01-12T03:09:34.2991527Z'
x-ms-file-creation-time:
- '2024-01-12T03:09:34.2991527Z'
x-ms-file-id:
- '13835180174087290880'
x-ms-file-last-write-time:
- '2024-01-12T03:09:34.2991527Z'
x-ms-file-parent-id:
- '13835074620971024384'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: "inputs:\n name:\n type: string\n default: hod\noutputs:\n result:\n
\ type: string\n reference: ${hello_world.output}\nnodes:\n- name: hello_world\n
\ type: python\n source:\n type: code\n path: hello_world.py\n inputs:\n
\ name: ${inputs.name}\n"
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '250'
Content-MD5:
- CT1FTZp5JScB8fq+HjnINw==
Content-Type:
- application/octet-stream
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:34 GMT
x-ms-range:
- bytes=0-249
x-ms-version:
- '2023-08-03'
x-ms-write:
- update
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users/unknown_user/promptflow/flow_name/flow.dag.yaml?comp=range
response:
body:
string: ''
headers:
content-length:
- '0'
content-md5:
- CT1FTZp5JScB8fq+HjnINw==
last-modified:
- Fri, 12 Jan 2024 03:09:35 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-last-write-time:
- '2024-01-12T03:09:35.3614846Z'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-content-length:
- '105'
x-ms-date:
- Fri, 12 Jan 2024 03:09:35 GMT
x-ms-file-attributes:
- none
x-ms-file-creation-time:
- now
x-ms-file-last-write-time:
- now
x-ms-file-permission:
- Inherit
x-ms-type:
- file
x-ms-version:
- '2023-08-03'
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users/unknown_user/promptflow/flow_name/hello_world.py
response:
body:
string: ''
headers:
content-length:
- '0'
last-modified:
- Fri, 12 Jan 2024 03:09:36 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-attributes:
- Archive
x-ms-file-change-time:
- '2024-01-12T03:09:36.3969355Z'
x-ms-file-creation-time:
- '2024-01-12T03:09:36.3969355Z'
x-ms-file-id:
- '13835092213157068800'
x-ms-file-last-write-time:
- '2024-01-12T03:09:36.3969355Z'
x-ms-file-parent-id:
- '13835074620971024384'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: "from promptflow import tool\n\n\n@tool\ndef hello_world(name: str) -> str:\n
\ return f\"Hello World {name}!\"\n"
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '105'
Content-MD5:
- fGMkkiZAjGs8PW/AMiYppA==
Content-Type:
- application/octet-stream
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:36 GMT
x-ms-range:
- bytes=0-104
x-ms-version:
- '2023-08-03'
x-ms-write:
- update
method: PUT
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users/unknown_user/promptflow/flow_name/hello_world.py?comp=range
response:
body:
string: ''
headers:
content-length:
- '0'
content-md5:
- fGMkkiZAjGs8PW/AMiYppA==
last-modified:
- Fri, 12 Jan 2024 03:09:37 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-last-write-time:
- '2024-01-12T03:09:37.4333819Z'
x-ms-request-server-encrypted:
- 'true'
x-ms-version:
- '2023-08-03'
status:
code: 201
message: Created
- request:
body: '{"flowName": "flow_display_name", "description": "test flow description",
"tags": {"owner": "sdk-test"}, "flowDefinitionFilePath": "Users/unknown_user/promptflow/flow_name/flow.dag.yaml",
"flowType": "default"}'
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '282'
Content-Type:
- application/json
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/Flows
response:
body:
string: '{"eTag": {}, "studioPortalEndpoint": "https://ml.azure.com/prompts/flow/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/cff27a23-7f84-4db2-924e-b1431ef4d735/details?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"flowId": "cff27a23-7f84-4db2-924e-b1431ef4d735", "flowName": "flow_display_name",
"description": "test flow description", "tags": {"owner": "sdk-test"}, "flowType":
"Default", "experimentId": "00000000-0000-0000-0000-000000000000", "createdDate":
"2024-01-12T03:09:40.5100793Z", "lastModifiedDate": "2024-01-12T03:09:40.5100793Z",
"owner": {"userObjectId": "00000000-0000-0000-0000-000000000000", "userTenantId":
"00000000-0000-0000-0000-000000000000", "userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"},
"flowResourceId": "azureml://locations/eastus/workspaces/00000/flows/cff27a23-7f84-4db2-924e-b1431ef4d735",
"isArchived": false, "flowDefinitionFilePath": "Users/unknown_user/promptflow/flow_name/flow.dag.yaml"}'
headers:
connection:
- keep-alive
content-length:
- '1100'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '1.060'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-file-share/12.14.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 03:09:40 GMT
x-ms-version:
- '2023-08-03'
method: HEAD
uri: https://fake_account_name.file.core.windows.net/fake-file-share-name/Users/unknown_user/promptflow/flow_name/flow.dag.yaml
response:
body:
string: ''
headers:
content-length:
- '250'
content-type:
- application/octet-stream
last-modified:
- Fri, 12 Jan 2024 03:09:35 GMT
server:
- Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0
x-ms-file-attributes:
- Archive
x-ms-file-change-time:
- '2024-01-12T03:09:35.3614846Z'
x-ms-file-creation-time:
- '2024-01-12T03:09:34.2991527Z'
x-ms-file-id:
- '13835180174087290880'
x-ms-file-last-write-time:
- '2024-01-12T03:09:35.3614846Z'
x-ms-file-parent-id:
- '13835074620971024384'
x-ms-type:
- File
x-ms-version:
- '2023-08-03'
status:
code: 200
message: OK
version: 1
| promptflow/src/promptflow/tests/test_configs/recordings/test_flow_operations_TestFlow_test_create_flow.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/recordings/test_flow_operations_TestFlow_test_create_flow.yaml",
"repo_id": "promptflow",
"token_count": 13998
} | 78 |
interactions:
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location":
"eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic",
"tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}'
headers:
cache-control:
- no-cache
content-length:
- '3630'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.049'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false
response:
body:
string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}]}'
headers:
cache-control:
- no-cache
content-length:
- '1372'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.089'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.085'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.112'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:16 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '49'
content-md5:
- quXiEreYvPinSj0HsaNa/g==
content-type:
- application/octet-stream
last-modified:
- Wed, 08 Nov 2023 04:26:09 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Wed, 08 Nov 2023 04:26:09 GMT
x-ms-meta-name:
- c4092674-5e53-4c17-b78d-75353ae0edb6
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- 579021dc-8ac8-4c73-8110-4642bd00c69b
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:17 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.087'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.148'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:20 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '245'
content-md5:
- F+JA0a3CxcLYZ0ANRdlZbA==
content-type:
- application/octet-stream
last-modified:
- Wed, 29 Nov 2023 02:51:35 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Thu, 17 Aug 2023 10:30:09 GMT
x-ms-meta-name:
- 56efdd28-6297-4baa-aad3-be46f4b768a2
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- '1'
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:21 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.091'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.110'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:25 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '49'
content-md5:
- quXiEreYvPinSj0HsaNa/g==
content-type:
- application/octet-stream
last-modified:
- Wed, 08 Nov 2023 04:26:09 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Wed, 08 Nov 2023 04:26:09 GMT
x-ms-meta-name:
- c4092674-5e53-4c17-b78d-75353ae0edb6
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- 579021dc-8ac8-4c73-8110-4642bd00c69b
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:27 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.105'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.090'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:30 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '245'
content-md5:
- F+JA0a3CxcLYZ0ANRdlZbA==
content-type:
- application/octet-stream
last-modified:
- Wed, 29 Nov 2023 02:51:35 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Thu, 17 Aug 2023 10:30:09 GMT
x-ms-meta-name:
- 56efdd28-6297-4baa-aad3-be46f4b768a2
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- '1'
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Wed, 29 Nov 2023 09:04:31 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
version: 1
| promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_flow_id_in_submission.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_flow_id_in_submission.yaml",
"repo_id": "promptflow",
"token_count": 12049
} | 79 |
{"line_number": 0, "category": "None", "evidence": "None"}
{"line_number": 1, "category": "Academic", "evidence": "Both"}
{"line_number": 2, "category": "App", "evidence": "Both"}
| promptflow/src/promptflow/tests/test_configs/runs/web_classification_variant_0_20231205_120253_104100/flow_outputs/output.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/runs/web_classification_variant_0_20231205_120253_104100/flow_outputs/output.jsonl",
"repo_id": "promptflow",
"token_count": 61
} | 80 |
inputs:
groundtruth:
type: string
prediction:
type: string
outputs:
grade:
type: string
reference: ${grade.output}
nodes:
- name: grade
type: python
source:
type: code
path: grade.py
inputs:
groundtruth: ${inputs.groundtruth}
prediction: ${inputs.prediction}
- name: calculate_accuracy
type: python
source:
type: code
path: calculate_accuracy.py
inputs:
grades: ${grade.output}
activate:
when: ${grade.output}
is: 1
aggregation: true
| promptflow/src/promptflow/tests/test_configs/wrong_flows/aggregation_activate_reference_non_aggregation/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/wrong_flows/aggregation_activate_reference_non_aggregation/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 196
} | 81 |
inputs:
num:
type: int
outputs:
content:
type: string
reference: ${divide_num.output}
nodes:
- name: divide_num
source:
type: code
path: divide_num.py
inputs:
num: ${inputs.num}
| promptflow/src/promptflow/tests/test_configs/wrong_flows/node_missing_type_or_source/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/wrong_flows/node_missing_type_or_source/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 94
} | 82 |
# Contributing to Prompt Flow
You can contribute to prompt flow with issues and pull requests (PRs). Simply
filing issues for problems you encounter is a great way to contribute. Contributing
code is greatly appreciated.
## Reporting Issues
We always welcome bug reports, API proposals and overall feedback. Here are a few
tips on how you can make reporting your issue as effective as possible.
### Where to Report
New issues can be reported in our [list of issues](https://github.com/microsoft/promptflow/issues).
Before filing a new issue, please search the list of issues to make sure it does
not already exist.
If you do find an existing issue for what you wanted to report, please include
your own feedback in the discussion. Do consider upvoting (👍 reaction) the original
post, as this helps us prioritize popular issues in our backlog.
### Writing a Good Bug Report
Good bug reports make it easier for maintainers to verify and root cause the
underlying problem.
The better a bug report, the faster the problem will be resolved. Ideally, a bug
report should contain the following information:
- A high-level description of the problem.
- A _minimal reproduction_, i.e. the smallest size of code/configuration required
to reproduce the wrong behavior.
- A description of the _expected behavior_, contrasted with the _actual behavior_ observed.
- Information on the environment: OS/distribution, CPU architecture, SDK version, etc.
- Additional information, e.g. Is it a regression from previous versions? Are there
any known workarounds?
## Contributing Changes
Project maintainers will merge accepted code changes from contributors.
### DOs and DON'Ts
DO's:
- **DO** follow the standard coding conventions: [Python](https://pypi.org/project/black/)
- **DO** give priority to the current style of the project or file you're changing
if it diverges from the general guidelines.
- **DO** include tests when adding new features. When fixing bugs, start with
adding a test that highlights how the current behavior is broken.
- **DO** add proper docstring for functions and classes following [API Documentation Guidelines](./docs/dev/documentation_guidelines.md).
- **DO** keep the discussions focused. When a new or related topic comes up
it's often better to create a new issue than to sidetrack the discussion.
- **DO** clearly state on an issue that you are going to take on implementing it.
- **DO** blog and tweet (or whatever) about your contributions, frequently!
DON'Ts:
- **DON'T** surprise us with big pull requests. Instead, file an issue and start
a discussion so we can agree on a direction before you invest a large amount of time.
- **DON'T** commit code that you didn't write. If you find code that you think is a good
fit to add to prompt flow, file an issue and start a discussion before proceeding.
- **DON'T** submit PRs that alter licensing related files or headers. If you believe
there's a problem with them, file an issue and we'll be happy to discuss it.
- **DON'T** make new APIs without filing an issue and discussing with us first.
### Breaking Changes
Contributions must maintain API signature and behavioral compatibility. Contributions
that include breaking changes will be rejected. Please file an issue to discuss
your idea or change if you believe that a breaking change is warranted.
### Suggested Workflow
We use and recommend the following workflow:
1. Create an issue for your work, or reuse an existing issue on the same topic.
- Get agreement from the team and the community that your proposed change is
a good one.
- Clearly state that you are going to take on implementing it, if that's the case.
You can request that the issue be assigned to you. Note: The issue filer and
the implementer don't have to be the same person.
2. Create a personal fork of the repository on GitHub (if you don't already have one).
3. In your fork, create a branch off of main (`git checkout -b my_branch`).
- Name the branch so that it clearly communicates your intentions, such as
"issue-123" or "githubhandle-issue".
4. Make and commit your changes to your branch.
5. Add new tests corresponding to your change, if applicable.
6. Run the relevant scripts in [the section below](https://github.com/microsoft/promptflow/blob/main/CONTRIBUTING.md#dev-scripts) to ensure that your build is clean and all tests are passing.
7. Create a PR against the repository's **main** branch.
- State in the description what issue or improvement your change is addressing.
- Link the PR to the issue in step 1.
- Verify that all the Continuous Integration checks are passing.
8. Wait for feedback or approval of your changes from the code maintainers.
- If there is no response for a few days, you can create a new issue to raise awareness.
The promptflow team has a triage process for issues without an assignee,
after which you can directly contact the issue owner to follow up (e.g. loop in the related internal reviewer).
9. When area owners have signed off, and all checks are green, your PR will be merged.
### Development scripts
The scripts below are used to build, test, and lint within the project.
- see [doc/dev/dev_setup.md](https://github.com/microsoft/promptflow/blob/main/docs/dev/dev_setup.md).
### PR - CI Process
The continuous integration (CI) system will automatically perform the required
builds and run tests (including the ones you are expected to run) for PRs. Builds
and test runs must be clean.
If the CI build fails for any reason, the PR issue will be updated with a link
that can be used to determine the cause of the failure.
| promptflow/CONTRIBUTING.md/0 | {
"file_path": "promptflow/CONTRIBUTING.md",
"repo_id": "promptflow",
"token_count": 1394
} | 0 |
Tools are the fundamental building blocks of a [flow](./concept-flows.md).
Each tool is an executable unit, basically a function that performs various tasks, including but not limited to:
- Accessing LLMs for various purposes
- Querying databases
- Getting information from search engines
- Pre/post processing of data
# Tools
Prompt flow provides 3 basic tools:
- [LLM](../reference/tools-reference/llm-tool.md): The LLM tool allows you to write custom prompts and leverage large language models to achieve specific goals, such as summarizing articles, generating customer support responses, and more.
- [Python](../reference/tools-reference/python-tool.md): The Python tool enables you to write custom Python functions to perform various tasks, such as fetching web pages, processing intermediate data, calling third-party APIs, and more (see the minimal sketch after this list).
- [Prompt](../reference/tools-reference/prompt-tool.md): The Prompt tool allows you to prepare a prompt as a string for more complex use cases or for use in conjunction with other prompt tools or python tools.
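A minimal Python tool is just a function decorated with `@tool`; the sketch below is illustrative only, and the function name and logic are made up for demonstration:
```python
from promptflow import tool
@tool
def normalize_text(text: str) -> str:
    # Trim surrounding whitespace and lowercase the input
    # before it is passed to the next node in the flow.
    return text.strip().lower()
```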
## More tools
Our partners also contribute other useful tools for advanced scenarios; here are some links:
- [Vector DB Lookup](../reference/tools-reference/vector_db_lookup_tool.md): a vector search tool that allows users to search for the top k similar vectors in a vector database.
- [Faiss Index Lookup](../reference/tools-reference/faiss_index_lookup_tool.md): a tool for querying within a user-provided Faiss-based vector store.
## Custom tools
You can create your own tools that can be shared with your team or anyone in the world.
Learn more in [Create and Use Tool Package](../how-to-guides/develop-a-tool/create-and-use-tool-package.md).
## Next steps
For more information on the available tools and their usage, visit our [reference doc](../reference/index.md).
"file_path": "promptflow/docs/concepts/concept-tools.md",
"repo_id": "promptflow",
"token_count": 444
} | 1 |
# Develop a flow
We provide guides on how to develop a flow by writing a flow yaml from scratch in this section.
```{toctree}
:maxdepth: 1
:hidden:
develop-standard-flow
develop-chat-flow
develop-evaluation-flow
referencing-external-files-or-folders-in-a-flow
``` | promptflow/docs/how-to-guides/develop-a-flow/index.md/0 | {
"file_path": "promptflow/docs/how-to-guides/develop-a-flow/index.md",
"repo_id": "promptflow",
"token_count": 87
} | 2 |
# Manage connections
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
[Connection](../../concepts/concept-connections.md) helps securely store and manage secret keys or other sensitive credentials required for interacting with LLM (Large Language Models) and other external tools, for example, Azure Content Safety.
:::{note}
To use azureml workspace connection locally, refer to [this guide](../how-to-guides/set-global-configs.md#connectionprovider).
:::
## Connection types
There are multiple types of connections supported in promptflow, which can be simply categorized into **strong type connections** and **custom connections**. Strong type connections include AzureOpenAIConnection, OpenAIConnection, etc. The custom connection is a generic connection type that can be used to store custom-defined credentials.
We are going to use AzureOpenAIConnection as an example for strong type connection, and CustomConnection to show how to manage connections.
## Create a connection
:::{note}
If you are using `WSL` or other OS without default keyring storage backend, you may encounter `StoreConnectionEncryptionKeyError`, please refer to [FAQ](./faq.md#connection-creation-failed-with-storeconnectionencryptionkeyerror) for the solutions.
:::
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Each of the strong type connection has a corresponding yaml schema, the example below shows the AzureOpenAIConnection yaml:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: azure_open_ai_connection
type: azure_open_ai
api_key: "<to-be-replaced>"
api_base: "https://<name>.openai.azure.com/"
api_type: "azure"
api_version: "2023-03-15-preview"
```
The custom connection yaml will have two dict fields for secrets and configs, the example below shows the CustomConnection yaml:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: custom_connection
type: custom
configs:
endpoint: "<your-endpoint>"
other_config: "other_value"
secrets: # required
my_key: "<your-api-key>"
```
After preparing the yaml file, use the CLI command below to create them:
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f <path-to-azure-open-ai-connection> --set api_key=<your-api-key>
# Create the custom connection
pf connection create -f <path-to-custom-connection> --set configs.endpoint=<endpoint> secrets.my_key=<your-api-key>
```
The expected result is as follows if the connection is created successfully.
![img](../media/how-to-guides/create_connection.png)
:::
:::{tab-item} SDK
:sync: SDK
Using SDK, each connection type has a corresponding class to create a connection. The following code snippet shows how to import the required class and create the connection:
```python
from promptflow import PFClient
from promptflow.entities import AzureOpenAIConnection, CustomConnection
# Get a pf client to manage connections
pf = PFClient()
# Initialize an AzureOpenAIConnection object
connection = AzureOpenAIConnection(
name="my_azure_open_ai_connection",
api_key="<your-api-key>",
    api_base="<your-endpoint>",
api_version="2023-03-15-preview"
)
# Create the connection, note that api_key will be scrubbed in the returned result
result = pf.connections.create_or_update(connection)
print(result)
# Initialize a custom connection object
connection = CustomConnection(
name="my_custom_connection",
# Secrets is a required field for custom connection
secrets={"my_key": "<your-api-key>"},
configs={"endpoint": "<your-endpoint>", "other_config": "other_value"}
)
# Create the connection, note that all secret values will be scrubbed in the returned result
result = pf.connections.create_or_update(connection)
print(result)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane. You can find the connections pane to manage your local connections. Click the "+" icon on the top right of it and follow the popped out instructions to create your new connection.
![img](../media/how-to-guides/vscode_create_connection.png)
![img](../media/how-to-guides/vscode_create_connection_1.png)
:::
::::
## Update a connection
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The commands below show how to update existing connections with new values:
```bash
# Update an azure open ai connection with a new api base
pf connection update -n my_azure_open_ai_connection --set api_base='new_value'
# Update a custom connection
pf connection update -n my_custom_connection --set configs.other_config='new_value'
```
:::
:::{tab-item} SDK
:sync: SDK
The code snippet below shows how to update existing connections with new values:
```python
# Update an azure open ai connection with a new api base
connection = pf.connections.get(name="my_azure_open_ai_connection")
connection.api_base = "new_value"
connection.api_key = "<original-key>" # secrets are required when updating connection using sdk
result = pf.connections.create_or_update(connection)
print(connection)
# Update a custom connection
connection = pf.connections.get(name="my_custom_connection")
connection.configs["other_config"] = "new_value"
connection.secrets = {"key1": "val1"} # secrets are required when updating connection using sdk
result = pf.connections.create_or_update(connection)
print(connection)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane. You can find the connections pane to manage your local connections. Right click the item of the connection list to update or delete your connections.
![img](../media/how-to-guides/vscode_update_delete_connection.png)
:::
::::
## List connections
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The list connection command returns the connections in JSON list format; note that all secrets and API keys will be scrubbed:
```bash
pf connection list
```
:::
:::{tab-item} SDK
:sync: SDK
The list connections call returns a list of connection objects; note that all secrets and API keys will be scrubbed:
```python
from promptflow import PFClient
# Get a pf client to manage connections
pf = PFClient()
# List and print connections
connection_list = pf.connections.list()
for connection in connection_list:
print(connection)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
![img](../media/how-to-guides/vscode_list_connection.png)
:::
::::
## Delete a connection
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Delete a connection with the following command:
```bash
pf connection delete -n <connection_name>
```
:::
:::{tab-item} SDK
:sync: SDK
Delete a connection with the following code snippet:
```python
from promptflow import PFClient
# Get a pf client to manage connections
pf = PFClient()
# Delete the connection with specific name
pf.connections.delete(name="my_custom_connection")
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane. You can find the connections pane to manage your local connections. Right click the item of the connection list to update or delete your connections.
![img](../media/how-to-guides/vscode_update_delete_connection.png)
:::
::::
## Next steps
- Reach more detail about [connection concepts](../../concepts/concept-connections.md).
- Try the [connection samples](https://github.com/microsoft/promptflow/blob/main/examples/connections/connection.ipynb).
- [Consume connections from Azure AI](../cloud/azureai/consume-connections-from-azure-ai.md).
| promptflow/docs/how-to-guides/manage-connections.md/0 | {
"file_path": "promptflow/docs/how-to-guides/manage-connections.md",
"repo_id": "promptflow",
"token_count": 2251
} | 3 |
from datetime import datetime
import time
import requests
import sys
import json
from azure.identity import AzureCliCredential
import logging
from azure.ai.ml import MLClient
from sseclient import SSEClient
class ColoredFormatter(logging.Formatter):
# Color code dictionary
color_codes = {
'debug': '\033[0;32m', # Green
'info': '\033[0;36m', # Cyan
'warning': '\033[0;33m', # Yellow
'error': '\033[0;31m', # Red
'critical': '\033[0;35m', # Magenta
}
def format(self, record):
# Get the original message
message = super().format(record)
# Add color codes
message = f"{self.color_codes.get(record.levelname.lower(), '')}{message}\033[0m"
return message
logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
handler.setFormatter(ColoredFormatter())
logger.setLevel(logging.INFO)
logger.addHandler(handler)
def apply_delta(base: dict, delta: dict):
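    # Merge a streamed delta into the accumulated output: values of existing
    # keys are concatenated (strings) or added (numbers); new keys are added as-is.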
for k, v in delta.items():
if k in base:
base[k] += v
else:
base[k] = v
def score(url, api_key, body, stream=True, on_event=None):
headers = {
"Content-Type": "application/json",
"Authorization": ("Bearer " + api_key),
# The azureml-model-deployment header will force the request to go to a specific deployment.
# Remove this header to have the request observe the endpoint traffic rules
"azureml-model-deployment": "blue",
"Accept": "text/event-stream, application/json" if stream else "application/json"
}
logger.info("Sending HTTP request...")
logger.debug("POST %s", url)
for name, value in headers.items():
if name == "Authorization":
value = "[REDACTED]"
logger.debug(f">>> {name}: {value}")
logger.debug(json.dumps(body, indent=4, ensure_ascii=False))
logger.debug("")
time1 = datetime.now()
response = None
try:
response = requests.post(url, json=body, headers=headers, stream=stream)
response.raise_for_status()
finally:
time2 = datetime.now()
if response is not None:
logger.info(
"Got response: %d %s (elapsed %s)",
response.status_code,
response.reason,
time2 - time1,
)
for name, value in response.headers.items():
logger.debug(f"<<< {name}: {value}")
time1 = datetime.now()
try:
content_type = response.headers.get('Content-Type')
if "text/event-stream" in content_type:
output = {}
client = SSEClient(response)
for event in client.events():
if on_event:
on_event(event)
dct = json.loads(event.data)
apply_delta(output, dct)
return output, True
else:
return response.json(), False
finally:
time2 = datetime.now()
logger.info("\nResponse reading elapsed: %s", time2 - time1)
class ChatApp:
def __init__(self, ml_client, endpoint_name, chat_input_name, chat_output_name, stream=True, debug=False):
self._chat_input_name = chat_input_name
self._chat_output_name = chat_output_name
self._chat_history = []
self._stream = stream
if debug:
logger.setLevel(logging.DEBUG)
logger.info("Getting endpoint info...")
endpoint = ml_client.online_endpoints.get(endpoint_name)
keys = ml_client.online_endpoints.get_keys(endpoint_name)
self._endpoint_url = endpoint.scoring_uri
self._endpoint_key = keys.primary_key if endpoint.auth_mode == "key" else keys.access_token
logger.info(f"Done.")
logger.debug(f"Target endpoint: {endpoint.id}")
@property
def url(self):
return self._endpoint_url
@property
def api_key(self):
return self._endpoint_key
def get_payload(self, chat_input, chat_history=[]):
return {
self._chat_input_name: chat_input,
"chat_history": chat_history,
}
def chat_once(self, chat_input):
def on_event(event):
dct = json.loads(event.data)
answer_delta = dct.get(self._chat_output_name)
if answer_delta:
print(answer_delta, end='')
# We need to flush the output
# otherwise the text does not appear on the console
# unless a new line comes.
sys.stdout.flush()
# Sleep for 20ms for better animation effects
time.sleep(0.02)
try:
payload = self.get_payload(chat_input=chat_input, chat_history=self._chat_history)
output, stream = score(self.url, self.api_key, payload, stream=self._stream, on_event=on_event)
# We don't use self._stream here since the result may not always be the same as self._stream specified.
if stream:
                # The streamed chunks have already been printed by on_event as
                # they arrived, so there is nothing more to print here.
                pass
else:
print(output.get(self._chat_output_name, "<empty>"))
self._chat_history.append({
"inputs": {
self._chat_input_name: chat_input,
},
"outputs": output,
})
logger.info("Length of chat history: %s", len(self._chat_history))
except requests.HTTPError as e:
logger.error(e.response.text)
def chat(self):
while True:
try:
question = input("Chat with Wikipedia:> ")
if question in ("exit", "bye"):
print("Bye.")
break
self.chat_once(question)
except KeyboardInterrupt:
# When pressed Ctrl_C, exit
print("\nBye.")
break
except Exception as e:
logger.exception("An error occurred: %s", e)
# Do not raise the errors out so that we can continue the chat
if __name__ == "__main__":
ml_client = MLClient(
credential=AzureCliCredential(),
# Replace with your subscription ID, resource group name, and workspace name
subscription_id="<your_sub_id>",
resource_group_name="<your_resource_group_name>",
workspace_name="<your_workspace_name>",
)
chat_app = ChatApp(
ml_client=ml_client,
# TODO: Replace with your online endpoint name
endpoint_name="chat-with-wikipedia-stream",
chat_input_name="question",
chat_output_name="answer",
stream=False,
debug=True,
)
chat_app.chat()
| promptflow/docs/media/how-to-guides/how-to-enable-streaming-mode/scripts/chat_app.py/0 | {
"file_path": "promptflow/docs/media/how-to-guides/how-to-enable-streaming-mode/scripts/chat_app.py",
"repo_id": "promptflow",
"token_count": 3156
} | 4 |
# PLACEHOLDER | promptflow/docs/reference/python-library-reference/promptflow.md/0 | {
"file_path": "promptflow/docs/reference/python-library-reference/promptflow.md",
"repo_id": "promptflow",
"token_count": 6
} | 5 |
<jupyter_start><jupyter_text>Configuration_**Setting up your Azure Machine Learning services workspace and configuring needed resources**_------**Requirements** - In order to benefit from this tutorial, you will need:- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)- An Azure ML workspace- A python environment- Install dependent packages for samples via `pip install -r requirements.txt`**Learning Objectives** - By the end of this tutorial, you should be able to:- Connect to your AML workspace from the Python SDK using different auth credentials- Create workspace config file**Motivations** - This notebook covers the scenario where users define components using YAML and then use these components to build a pipeline. 1. Import the required libraries & set dependent environment variables<jupyter_code># Import required libraries
from promptflow.azure import PFClient<jupyter_output><empty_output><jupyter_text>2. Configure credentialWe are using `DefaultAzureCredential` to get access to workspace. When an access token is needed, it requests one using multiple identities(`EnvironmentCredential, ManagedIdentityCredential, SharedTokenCacheCredential, VisualStudioCodeCredential, AzureCliCredential, AzurePowerShellCredential`) in turn, stopping when one provides a token.Reference [here](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python) for more information.`DefaultAzureCredential` should be capable of handling most Azure SDK authentication scenarios. Reference [here](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity?view=azure-python) for all available credentials if it does not work for you.<jupyter_code>from azure.identity import (
InteractiveBrowserCredential,
DefaultAzureCredential,
)
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()<jupyter_output><empty_output><jupyter_text>3. Connect to Azure Machine Learning WorkspaceThe [workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section we will connect to the workspace in which the job will be run. Check this notebook for creating a [workspace](../resources/workspace/workspace.ipynb).To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. The config details of a workspace can be saved to a file from the Azure Machine Learning [portal](https://ml.azure.com/). Click on the name of the portal on the top right corner to see the link to save the config file.This config file can be used to load a workspace using `PFClient`. If no path is mentioned, path is defaulted to current folder. If no file name is mentioned, file name will be defaulted to `config.json`<jupyter_code>try:
pf = PFClient.from_config(credential=credential)
except Exception as ex:
# NOTE: Update following workspace information if not correctly configure before
client_config = {
"subscription_id": "<SUBSCRIPTION_ID>",
"resource_group": "<RESOURCE_GROUP>",
"workspace_name": "<AML_WORKSPACE_NAME>",
}
if client_config["subscription_id"].startswith("<"):
print(
"please update your <SUBSCRIPTION_ID> <RESOURCE_GROUP> <AML_WORKSPACE_NAME> in notebook cell"
)
raise ex
else: # write and reload from config file
import json, os
config_path = "../.azureml/config.json"
os.makedirs(os.path.dirname(config_path), exist_ok=True)
with open(config_path, "w") as fo:
fo.write(json.dumps(client_config))
pf = PFClient.from_config(credential=credential, path=config_path)
print(pf)<jupyter_output><empty_output><jupyter_text>4. Retrieve or create an Azure Machine Learning compute targetTo create a Azure Machine Learning job, you need a compute cluster as prerequisite. Below code ensures computes named `cpu-cluster` exists in your workspace.<jupyter_code>from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
# MLClient use the same configuration as PFClient
ml_client = MLClient.from_config(credential=credential)
# specify aml compute name.
cpu_compute_target = "cpu-cluster"
try:
ml_client.compute.get(cpu_compute_target)
except Exception:
print("Creating a new cpu compute target...")
compute = AmlCompute(
name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4
)
ml_client.compute.begin_create_or_update(compute).result()
# TODO: set up connections<jupyter_output><empty_output> | promptflow/examples/configuration.ipynb/0 | {
"file_path": "promptflow/examples/configuration.ipynb",
"repo_id": "promptflow",
"token_count": 1507
} | 6 |
# Chat with PDF
This is a simple flow that allow you to ask questions about the content of a PDF file and get answers.
You can run the flow with a URL to a PDF file and question as argument.
Once it's launched it will download the PDF and build an index of the content.
Then when you ask a question, it will look up the index to retrieve relevant content and post the question with the relevant content to OpenAI chat model (gpt-3.5-turbo or gpt4) to get an answer.
Learn more on corresponding [tutorials](../../../tutorials/e2e-development/chat-with-pdf.md).
Tools used in this flow:
- custom `python` Tool
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Get started
### Create connection in this folder
```bash
# create connection needed by flow
if pf connection list | grep open_ai_connection; then
echo "open_ai_connection already exists"
else
pf connection create --file ../../../connections/azure_openai.yml --name open_ai_connection --set api_key=<your_api_key> api_base=<your_api_base>
fi
```
### CLI Example
#### Run flow
**Note**: this sample uses [predownloaded PDFs](./chat_with_pdf/.pdfs/) and [prebuilt FAISS Index](./chat_with_pdf/.index/) to speed up execution time.
You can remove the folders to start a fresh run.
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs question="What is the name of the new language representation model introduced in the document?" pdf_url="https://arxiv.org/pdf/1810.04805.pdf"
# (Optional) create a random run name
run_name="chat_with_pdf_"$(openssl rand -hex 12)
# run with multiline data, --name is optional
pf run create --file batch_run.yaml --name $run_name
# visualize run output details
pf run visualize --name $run_name
```
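The same local test can also be run from the Python SDK; here is a minimal sketch (the PDF URL and question are just sample values, mirroring the CLI example above):
```python
from promptflow import PFClient
pf = PFClient()
# Test the flow in the current folder with a single set of inputs.
output = pf.flows.test(
    ".",
    inputs={
        "chat_history": [],
        "pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
        "question": "What is the name of the new language representation model introduced in the document?",
    },
)
print(output)
```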
#### Submit run to cloud
Assume we already have a connection named `open_ai_connection` in workspace.
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
```
``` bash
# create run
pfazure run create --file batch_run.yaml --name $run_name
```
Note: Click portal_url of the run to view the final snapshot.
| promptflow/examples/flows/chat/chat-with-pdf/README.md/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-pdf/README.md",
"repo_id": "promptflow",
"token_count": 686
} | 7 |
<jupyter_start><jupyter_text>Chat with PDF - test, evaluation and experimentationWe will walk you through how to use prompt flow Python SDK to test, evaluate and experiment with the "Chat with PDF" flow. 0. Install dependencies<jupyter_code>%pip install -r requirements.txt<jupyter_output><empty_output><jupyter_text>1. Create connectionsConnection in prompt flow is for managing settings of your application behaviors incl. how to talk to different services (Azure OpenAI for example).<jupyter_code>import promptflow
pf = promptflow.PFClient()
# List all the available connections
for c in pf.connections.list():
print(c.name + " (" + c.type + ")")<jupyter_output><empty_output><jupyter_text>You will need to have a connection named "open_ai_connection" to run the chat_with_pdf flow.<jupyter_code># create needed connection
from promptflow.entities import AzureOpenAIConnection, OpenAIConnection
try:
conn_name = "open_ai_connection"
conn = pf.connections.get(name=conn_name)
print("using existing connection")
except:
# Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure Open AI resource.
connection = AzureOpenAIConnection(
name=conn_name,
api_key="<user-input>",
api_base="<test_base>",
api_type="azure",
api_version="<test_version>",
)
# use this if you have an existing OpenAI account
# connection = OpenAIConnection(
# name=conn_name,
# api_key="<user-input>",
# )
conn = pf.connections.create_or_update(connection)
print("successfully created connection")
print(conn)<jupyter_output><empty_output><jupyter_text>2. Test the flow**Note**: this sample uses [predownloaded PDFs](./chat_with_pdf/.pdfs/) and [prebuilt FAISS Index](./chat_with_pdf/.index/) to speed up execution time.You can remove the folders to start a fresh run.<jupyter_code>output = pf.flows.test(
".",
inputs={
"chat_history": [],
"pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
"question": "what is BERT?",
},
)
print(output)<jupyter_output><empty_output><jupyter_text>3. Run the flow with a data file<jupyter_code>flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"
config_2k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-4", # change this to the name of your deployment if you're using Azure OpenAI
"PROMPT_TOKEN_LIMIT": 2000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
column_mapping = {
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config_2k_context,
}
run_2k_context = pf.run(flow=flow_path, data=data_path, column_mapping=column_mapping)
pf.stream(run_2k_context)
print(run_2k_context)
pf.get_details(run_2k_context)<jupyter_output><empty_output><jupyter_text>4. Evaluate the "groundedness"The [eval-groundedness flow](../../evaluation/eval-groundedness/) uses a ChatGPT/GPT-4 model to grade the answers generated by the chat-with-pdf flow.<jupyter_code>eval_groundedness_flow_path = "../../evaluation/eval-groundedness/"
eval_groundedness_2k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_2k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_2k_context",
)
pf.stream(eval_groundedness_2k_context)
print(eval_groundedness_2k_context)
pf.get_details(eval_groundedness_2k_context)
pf.get_metrics(eval_groundedness_2k_context)
pf.visualize(eval_groundedness_2k_context)<jupyter_output><empty_output><jupyter_text>You will see a web page like this. It gives you details about how each row is graded and even how the evaluation run executes:![pf-visualize-screenshot](./assets/pf-visualize-screenshot.png) 5. Try a different configuration and evaluate again - experimentationNOTE: since we only use 3 lines of test data in this example, and because of the non-deterministic nature of LLMs, don't be surprised if you see the exact same metrics when you run this process.<jupyter_code>config_3k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-4", # change this to the name of your deployment if you're using Azure OpenAI
"PROMPT_TOKEN_LIMIT": 3000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
run_3k_context = pf.run(flow=flow_path, data=data_path, column_mapping=column_mapping)
pf.stream(run_3k_context)
print(run_3k_context)
eval_groundedness_3k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_3k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_3k_context",
)
pf.stream(eval_groundedness_3k_context)
print(eval_groundedness_3k_context)
pf.get_details(eval_groundedness_3k_context)
pf.visualize([eval_groundedness_2k_context, eval_groundedness_3k_context])<jupyter_output><empty_output> | promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf.ipynb/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf.ipynb",
"repo_id": "promptflow",
"token_count": 2049
} | 8 |
from jinja2 import Environment, FileSystemLoader
import os
from utils.logging import log
from utils.oai import OAIChat, render_with_token_limit
def rewrite_question(question: str, history: list):
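    """Rewrite the user question into a standalone query with the chat model.

    The prompt is rendered from a Jinja template with the question and chat history;
    if it exceeds the token limit, chat history entries are trimmed until it fits.
    """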
template = Environment(
loader=FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))
).get_template("rewrite_question_prompt.md")
token_limit = int(os.environ["PROMPT_TOKEN_LIMIT"])
max_completion_tokens = int(os.environ["MAX_COMPLETION_TOKENS"])
# Try to render the prompt with token limit and reduce the history count if it fails
while True:
try:
prompt = render_with_token_limit(
template, token_limit, question=question, history=history
)
break
except ValueError:
history = history[:-1]
log(f"Reducing chat history count to {len(history)} to fit token limit")
chat = OAIChat()
rewritten_question = chat.generate(
messages=[{"role": "user", "content": prompt}], max_tokens=max_completion_tokens
)
log(f"Rewritten question: {rewritten_question}")
return rewritten_question
| promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question.py/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question.py",
"repo_id": "promptflow",
"token_count": 444
} | 9 |
from promptflow import tool
from chat_with_pdf.find_context import find_context
@tool
def find_context_tool(question: str, index_path: str):
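    """Look up the index for content relevant to the question and build the chat prompt.

    Returns the rendered prompt and the raw text of the retrieved context chunks.
    """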
prompt, context = find_context(question, index_path)
return {"prompt": prompt, "context": [c.text for c in context]}
| promptflow/examples/flows/chat/chat-with-pdf/find_context_tool.py/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-pdf/find_context_tool.py",
"repo_id": "promptflow",
"token_count": 84
} | 10 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
question:
type: string
default: What is ChatGPT?
is_chat_input: true
outputs:
answer:
type: string
reference: ${augmented_chat.output}
is_chat_output: true
nodes:
- name: extract_query_from_question
type: llm
source:
type: code
path: extract_query_from_question.jinja2
inputs:
    # This makes it easy to switch between OpenAI and Azure OpenAI.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
temperature: '0.7'
top_p: '1.0'
stop: ''
max_tokens: '256'
presence_penalty: '0'
frequency_penalty: '0'
logit_bias: ''
question: ${inputs.question}
chat_history: ${inputs.chat_history}
connection: open_ai_connection
api: chat
- name: get_wiki_url
type: python
source:
type: code
path: get_wiki_url.py
inputs:
entity: ${extract_query_from_question.output}
count: '2'
- name: search_result_from_url
type: python
source:
type: code
path: search_result_from_url.py
inputs:
url_list: ${get_wiki_url.output}
count: '10'
- name: process_search_result
type: python
source:
type: code
path: process_search_result.py
inputs:
search_result: ${search_result_from_url.output}
- name: augmented_chat
type: llm
source:
type: code
path: augmented_chat.jinja2
inputs:
    # This makes it easy to switch between OpenAI and Azure OpenAI.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
temperature: '0.8'
question: ${inputs.question}
chat_history: ${inputs.chat_history}
contexts: ${process_search_result.output}
connection: open_ai_connection
api: chat
environment:
python_requirements_txt: requirements.txt
| promptflow/examples/flows/chat/chat-with-wikipedia/flow.dag.yaml/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-wikipedia/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 777
} | 11 |
from typing import List
from promptflow import tool
@tool
def aggregate(processed_results: List[str]):
"""
    This tool aggregates the processed results of all lines to the variant level and logs metrics for each variant.
    :param processed_results: List of outputs of the line_process node.
"""
# Add your aggregation logic here
# aggregated_results should be a dictionary with the metric name as the key and the metric value as the value.
results_num = len(processed_results)
print(results_num)
print(processed_results)
# Log metric for each variant
from promptflow import log_metric
log_metric(key="results_num", value=results_num)
return results_num
| promptflow/examples/flows/evaluation/eval-basic/aggregate.py/0 | {
"file_path": "promptflow/examples/flows/evaluation/eval-basic/aggregate.py",
"repo_id": "promptflow",
"token_count": 210
} | 12 |
system:
You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric.
user:
You will be presented with a CONTEXT and an ANSWER about that CONTEXT. You need to decide whether the ANSWER is entailed by the CONTEXT by choosing one of the following rating:
1. 5: The ANSWER follows logically from the information contained in the CONTEXT.
2. 1: The ANSWER is logically false from the information contained in the CONTEXT.
3. an integer score between 1 and 5 and if such integer score does not exist, use 1: It is not possible to determine whether the ANSWER is true or false without further information. Read the passage of information thoroughly and select the correct answer from the three answer labels. Read the CONTEXT thoroughly to ensure you know what the CONTEXT entails. Note the ANSWER is generated by a computer system, it can contain certain symbols, which should not be a negative factor in the evaluation.
Independent Examples:
## Example Task #1 Input:
{"CONTEXT": "Some are reported as not having been wanted at all.", "QUESTION": "", "ANSWER": "All are reported as being completely and fully wanted."}
## Example Task #1 Output:
1
## Example Task #2 Input:
{"CONTEXT": "Ten new television shows appeared during the month of September. Five of the shows were sitcoms, three were hourlong dramas, and two were news-magazine shows. By January, only seven of these new shows were still on the air. Five of the shows that remained were sitcoms.", "QUESTION": "", "ANSWER": "At least one of the shows that were cancelled was an hourlong drama."}
## Example Task #2 Output:
5
## Example Task #3 Input:
{"CONTEXT": "In Quebec, an allophone is a resident, usually an immigrant, whose mother tongue or home language is neither French nor English.", "QUESTION": "", "ANSWER": "In Quebec, an allophone is a resident, usually an immigrant, whose mother tongue or home language is not French."}
## Example Task #3 Output:
5
## Example Task #4 Input:
{"CONTEXT": "Some are reported as not having been wanted at all.", "QUESTION": "", "ANSWER": "All are reported as being completely and fully wanted."}
## Example Task #4 Output:
1
## Actual Task Input:
{"CONTEXT": {{context}}, "QUESTION": "", "ANSWER": {{answer}}}
Reminder: The return values for each task should be correctly formatted as an integer between 1 and 5. Do not repeat the context and question.
Actual Task Output: | promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_groundedness_prompt.jinja2/0 | {
"file_path": "promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_groundedness_prompt.jinja2",
"repo_id": "promptflow",
"token_count": 617
} | 13 |
system:
You are a helpful assistant.
user:
A chat history between user and bot is shown below
A list of documents is shown below in json format, and each document has one unique id.
These listed documents are used as context to answer the given question.
The task is to score the relevance between the documents and the potential answer to the given question in the range of 1 to 5.
1 means none of the documents is relevant to the question at all. 5 means either a single document or a combination of a few documents is ideal for answering the given question.
Think through step by step:
- Summarize each given document first
- Determine the underlying intent of the given question, when the question is ambiguous, refer to the given chat history
- Measure how suitable each document is to the given question, and list the document id and the corresponding relevance score.
- Summarize the overall relevance of the given list of documents to the given question after # Overall Reason, noting that the answer to the question can come solely from a single document or from a combination of multiple documents.
- Finally, output "# Result" followed by a score from 1 to 5.
# Question
{{question}}
# Chat History
# Documents
---BEGIN RETRIEVED DOCUMENTS---
{{FullBody}}
---END RETRIEVED DOCUMENTS--- | promptflow/examples/flows/evaluation/eval-qna-rag-metrics/rag_retrieval_prompt.jinja2/0 | {
"file_path": "promptflow/examples/flows/evaluation/eval-qna-rag-metrics/rag_retrieval_prompt.jinja2",
"repo_id": "promptflow",
"token_count": 289
} | 14 |
# Multi Intent Conversational Language Understanding
A flow that can be used to determine multiple intents in a user query leveraging an LLM with Conversational Language Understanding.
This sample flow utilizes Azure AI Language's Conversational Language Understanding to perform various analyses on text or documents. It performs:
- Breaking down compound multi-intent user queries into single user queries using an LLM.
- [Conversational Language Understanding](https://learn.microsoft.com/en-us/azure/ai-services/language-service/conversational-language-understanding/overview) on each of those single user queries.
See the [promptflow-azure-ai-language](https://github.com/microsoft/promptflow/blob/main/docs/integrations/tools/azure_ai_language_tool.md) tool package reference documentation for further information.
Tools used in this flow:
- `LLM` tool
- `conversational_language_understanding` tool from the `promptflow-azure-ai-language` package
Connections used in this flow:
- `Custom` connection
## Prerequisites
Install the promptflow SDK and other dependencies:
```
pip install -r requirements.txt
```
## Setup connection
Prepare your [Azure AI Language Resource](https://azure.microsoft.com/en-us/products/ai-services/ai-language) first, and [create a Language Resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) if necessary. Import the accompanying MediaPlayer.json into a CLU app, train the app, and deploy it. From your Language Resource, obtain its `api_key` and `endpoint`.
Create a connection to your Language Resource. The connection uses the `CustomConnection` schema:
```
# Override keys with --set to avoid yaml file changes
pf connection create -f ../connections/azure_ai_language.yml --set secrets.api_key=<your_api_key> configs.endpoint=<your_endpoint> name=azure_ai_language_connection
```
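The command above overrides values in a shared connection file. If you prefer to author the connection definition yourself, a `CustomConnection` looks roughly like the sketch below; the field values are placeholders, and the repo's `../connections/azure_ai_language.yml` remains the authoritative version.

```
# Sketch of a CustomConnection for the Language resource (placeholder values).
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: azure_ai_language_connection
type: custom
configs:
  endpoint: <your_endpoint>
secrets:
  api_key: <your_api_key>
```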
Ensure you have created the `azure_ai_language_connection`:
```
pf connection show -n azure_ai_language_connection
```
## Run flow
```
# Test with default input values in flow.dag.yaml:
pf flow test --flow .
```
### Flow description
The flow uses an `llm` node to break down compound user queries into simple user queries. For example, "Play some blues rock and turn up the volume" will be broken down into `["Play some blues rock", "Turn Up the volume"]`.
This is then passed into the CLU tool to recognize intents and entities in each of the utterances.
### Contact
Please reach out to Abhishek Sen (<[email protected]>) or <[email protected]> with any issues. | promptflow/examples/flows/integrations/azure-ai-language/multi_intent_conversational_language_understanding/README.md/0 | {
"file_path": "promptflow/examples/flows/integrations/azure-ai-language/multi_intent_conversational_language_understanding/README.md",
"repo_id": "promptflow",
"token_count": 679
} | 15 |
import time
from typing import List
import re
import tiktoken
import logging
import sys
import json
FORMATTER = logging.Formatter(
fmt="[%(asctime)s] %(name)-8s %(levelname)-8s %(message)s",
datefmt="%Y-%m-%d %H:%M:%S %z",
)
def get_logger(name: str, level=logging.INFO) -> logging.Logger:
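    """Create a logger that writes formatted records to stdout."""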
logger = logging.Logger(name)
# log to sys.stdout for backward compatibility.
# TODO: May need to be removed in the future, after local/blob file stream are fully supported.
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(FORMATTER)
logger.addHandler(stdout_handler)
logger.setLevel(level)
return logger
def parse_reply(text: str):
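    """Parse an LLM reply as JSON, retrying once after escaping stray backslashes.

    Returns a dict with an "Error" key instead of raising when parsing fails.
    """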
try:
parsed = json.loads(text, strict=False)
except json.JSONDecodeError:
preprocessed_text = preprocess_json_input(text)
try:
parsed = json.loads(preprocessed_text, strict=False)
except Exception:
return {"Error": f"Could not parse invalid json: {text}"}
except TypeError:
return {"Error": f"the JSON object must be str, bytes or bytearray not {type(text)}"}
return parsed
def count_message_tokens(
messages: List, model: str = "gpt-3.5-turbo-0301"
) -> int:
"""
Returns the number of tokens used by a list of messages.
Args:
messages (list): A list of messages, each of which is a dictionary
containing the role and content of the message.
model (str): The name of the model to use for tokenization.
Defaults to "gpt-3.5-turbo-0301".
Returns:
int: The number of tokens used by the list of messages.
"""
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-3.5-turbo":
# !Note: gpt-3.5-turbo may change over time.
# Returning num tokens assuming gpt-3.5-turbo-0301.")
return count_message_tokens(messages, model="gpt-3.5-turbo-0301")
elif model == "gpt-4":
# !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.")
return count_message_tokens(messages, model="gpt-4-0314")
elif model == "gpt-3.5-turbo-0301":
tokens_per_message = (
4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
)
tokens_per_name = -1 # if there's a name, the role is omitted
elif model == "gpt-4-0314":
tokens_per_message = 3
tokens_per_name = 1
else:
raise NotImplementedError(
f"num_tokens_from_messages() is not implemented for model {model}.\n"
" See https://github.com/openai/openai-python/blob/main/chatml.md for"
" information on how messages are converted to tokens."
)
num_tokens = 0
for message in messages:
num_tokens += tokens_per_message
for key, value in message.items():
num_tokens += len(encoding.encode(value))
if key == "name":
num_tokens += tokens_per_name
num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
return num_tokens
def count_string_tokens(string: str, model_name="gpt-3.5-turbo") -> int:
"""
Returns the number of tokens in a text string.
Args:
string (str): The text string.
model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo")
Returns:
int: The number of tokens in the text string.
"""
encoding = tiktoken.encoding_for_model(model_name)
return len(encoding.encode(string))
def create_chat_message(role, content, name=None):
"""
Create a chat message with the given role and content.
Args:
role (str): The role of the message sender, e.g., "system", "user", or "assistant".
content (str): The content of the message.
Returns:
dict: A dictionary containing the role and content of the message.
"""
if name is None:
return {"role": role, "content": content}
else:
return {"role": role, "name": name, "content": content}
def generate_context(prompt, full_message_history, user_prompt, model="gpt-3.5-turbo"):
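    """Build the initial chat context: system prompt, current time, and the user prompt.

    Returns the index of the next history message to consider, the token count of the
    context so far, the insertion index for history messages, and the context itself.
    """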
current_context = [
create_chat_message("system", prompt),
create_chat_message(
"system", f"The current time and date is {time.strftime('%c')}"
),
create_chat_message("user", user_prompt),
]
# Add messages from the full message history until we reach the token limit
next_message_to_add_index = len(full_message_history) - 1
insertion_index = len(current_context)
# Count the currently used tokens
current_tokens_used = count_message_tokens(current_context, model)
return (
next_message_to_add_index,
current_tokens_used,
insertion_index,
current_context,
)
def preprocess_json_input(input_str: str) -> str:
# Replace single backslashes with double backslashes, while leaving already escaped ones intact
corrected_str = re.sub(r'(?<!\\)\\(?!["\\/bfnrt]|u[0-9a-fA-F]{4})', r"\\\\", input_str)
return corrected_str
def construct_prompt(current_context):
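    """Flatten a list of chat message dicts into a single newline-separated prompt string."""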
update_current_context = []
for item in current_context:
role = item.get("role", None)
content = item.get("content", None)
name = item.get("name", None)
if name is not None:
update_current_context.append(":\n".join([role, "name", name]) + "\n" + ":\n".join(["content", content]))
else:
update_current_context.append(":\n".join([role, content]))
update_current_context = "\n".join(update_current_context)
return update_current_context
| promptflow/examples/flows/standard/autonomous-agent/util.py/0 | {
"file_path": "promptflow/examples/flows/standard/autonomous-agent/util.py",
"repo_id": "promptflow",
"token_count": 2326
} | 16 |