numbmelon committed on
Commit 67bccd1 · verified · 1 Parent(s): ec44dfb

Update README.md

Files changed (1):
  1. README.md +16 -3
README.md CHANGED
@@ -5,16 +5,27 @@ base_model: OpenGVLab/InternVL2-4B
 pipeline_tag: image-text-to-text
 ---
 
-This repository contains the model of the paper [OS-ATLAS: A Foundation Action Model for Generalist GUI Agents](https://huggingface.co/papers/2410.23218).
+# OS-Atlas: A Foundation Action Model For Generalist GUI Agents
 
 <div align="center">
 
-[\[🏠Homepage\]](https://osatlas.github.io) [\[💻Code\]](https://github.com/OS-Copilot/OS-Atlas) [\[🚀Quick Start\]](#quick-start) [\[📝Paper\]](https://arxiv.org/abs/2410.23218) [\[🤗Models\]](https://huggingface.co/collections/OS-Copilot/os-atlas-67246e44003a1dfcc5d0d045) [\[🤗ScreenSpot-v2\]](https://huggingface.co/datasets/OS-Copilot/ScreenSpot-v2)
+[\[🏠Homepage\]](https://osatlas.github.io) [\[💻Code\]](https://github.com/OS-Copilot/OS-Atlas) [\[🚀Quick Start\]](#quick-start) [\[📝Paper\]](https://arxiv.org/abs/2410.23218) [\[🤗Models\]](https://huggingface.co/collections/OS-Copilot/os-atlas-67246e44003a1dfcc5d0d045) [\[🤗Data\]](https://huggingface.co/datasets/OS-Copilot/OS-Atlas-data) [\[🤗ScreenSpot-v2\]](https://huggingface.co/datasets/OS-Copilot/ScreenSpot-v2)
 
 </div>
 
+## Overview
 ![os-atlas](https://github.com/user-attachments/assets/cf2ee020-5e15-4087-9a7e-75cc43662494)
 
+OS-Atlas provides a series of models specifically designed for GUI agents.
+
+For GUI grounding tasks, you can use:
+- [OS-Atlas-Base-7B](https://huggingface.co/OS-Copilot/OS-Atlas-Base-7B)
+- [OS-Atlas-Base-4B](https://huggingface.co/OS-Copilot/OS-Atlas-Base-4B)
+
+For generating single-step actions in GUI agent tasks, you can use:
+- [OS-Atlas-Pro-7B](https://huggingface.co/OS-Copilot/OS-Atlas-Pro-7B)
+- [OS-Atlas-Pro-4B](https://huggingface.co/OS-Copilot/OS-Atlas-Pro-4B)
+
 ## Quick Start
 OS-Atlas-Base-4B is a GUI grounding model finetuned from [InternVL2-4B](https://huggingface.co/OpenGVLab/InternVL2-4B).
 
@@ -27,6 +38,8 @@ pip install transformers
 ```
 For additional dependencies, please refer to the [InternVL2 documentation](https://internvl.readthedocs.io/en/latest/get_started/installation.html)
 
+Then download the [example image](https://github.com/OS-Copilot/OS-Atlas/blob/main/examples/images/web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png) and save it to the current directory.
+
 Inference code example:
 ```python
 import numpy as np
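
The download step this commit adds can also be scripted. A minimal sketch, assuming the usual raw.githubusercontent.com mirror of the blob link above:

```python
# Minimal sketch: fetch the example screenshot into the current directory.
# The raw URL is assumed from the GitHub blob link in the README.
import urllib.request

url = ("https://raw.githubusercontent.com/OS-Copilot/OS-Atlas/main/"
       "examples/images/web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png")
urllib.request.urlretrieve(url, "web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png")
```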
@@ -119,7 +132,7 @@ model = AutoModel.from_pretrained(
 tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
 
 # set the max number of tiles in `max_num`
-pixel_values = load_image('https://github.com/OS-Copilot/OS-Atlas/blob/main/exmaples/images/web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png', max_num=6).to(torch.bfloat16).cuda()
+pixel_values = load_image('./web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png', max_num=6).to(torch.bfloat16).cuda()
 generation_config = dict(max_new_tokens=1024, do_sample=True)
 
 question = "In the screenshot of this web page, please give me the coordinates of the element I want to click on according to my instructions(with point).\n\"'Champions League' link\""
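
The `max_num` argument caps how many tiles `load_image` produces. A simplified, self-contained illustration of the idea — an assumption based on InternVL2's dynamic tiling, since the README's actual `load_image` helper is elided by the diff:

```python
# Simplified sketch of dynamic tiling: pick the (cols, rows) grid of 448x448
# tiles, with at most max_num tiles, whose aspect ratio best matches the image.
# The real InternVL2 helper also resizes the image and appends a thumbnail tile.
def tile_grid(width, height, max_num=6):
    candidates = [(cols, rows)
                  for cols in range(1, max_num + 1)
                  for rows in range(1, max_num + 1)
                  if cols * rows <= max_num]
    return min(candidates, key=lambda g: abs(g[0] / g[1] - width / height))

print(tile_grid(1920, 1080, max_num=6))  # -> (2, 1) under this simplified rule
```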
 
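The rest of the inference example is unchanged by the commit and therefore elided by the diff. For orientation, a minimal sketch of how the prepared inputs are consumed — `model.chat` is InternVL2's chat interface, which this checkpoint is assumed to expose via trust_remote_code, and the coordinate format of the response is likewise an assumption:

```python
# Minimal sketch (not part of the commit): run the grounding query with the
# variables prepared above (model, tokenizer, pixel_values, question,
# generation_config all come from the README's full example).
import re

response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f"Assistant: {response}")

# Defensively extract any "(x, y)" numeric pairs from the response text;
# the exact output format of OS-Atlas-Base-4B is an assumption here.
points = [(float(x), float(y))
          for x, y in re.findall(r"\((\d+(?:\.\d+)?),\s*(\d+(?:\.\d+)?)\)", response)]
print(points)
```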