# -*- coding: utf-8 -*-
"""Deploy Barcelo demo.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1FxaL8DcYgvjPrWfWruSA5hvk3J81zLY9
![ ](https://www.vicentelopez.gov.ar/assets/images/logo-mvl.png)
# Model
YOLO is a family of compound-scaled object detection models trained on the COCO dataset, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite.
## Gradio Inference
![](https://i.ibb.co/982NS6m/header.png)
This notebook can optionally be accelerated with a GPU runtime.
----------------------------------------------------------------------
YOLOv5 Gradio demo
*Author: Ultralytics LLC and Gradio*
# Code
"""
#!pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt gradio # install dependencies
import gradio as gr
import torch
from PIL import Image
# Images
torch.hub.download_url_to_file('https://i.pinimg.com/originals/7f/5e/96/7f5e9657c08aae4bcd8bc8b0dcff720e.jpg', 'ejemplo1.jpg')
torch.hub.download_url_to_file('https://i.pinimg.com/originals/c2/ce/e0/c2cee05624d5477ffcf2d34ca77b47d1.jpg', 'ejemplo2.jpg')
# Model
#model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # force_reload=True to update
model = torch.hub.load('ultralytics/yolov5', 'custom', path='./best.pt') # local model or Google Colab
#model = torch.hub.load('path/to/yolov5', 'custom', path='/content/yolov56.pt', source='local') # local repo
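# Optional, hedged sketch (not in the original demo): the YOLOv5 Hub model exposes a few
# runtime knobs on its AutoShape wrapper, per the YOLOv5 PyTorch Hub docs. Attribute
# availability can differ across yolov5 versions, so these stay commented out:
# model.conf = 0.25    # NMS confidence threshold
# model.iou = 0.45     # NMS IoU threshold
# model.max_det = 100  # maximum detections per image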
def yolo(im, size=640):
    g = size / max(im.size)  # resize gain
    im = im.resize(tuple(int(x * g) for x in im.size), Image.LANCZOS)  # resize (Image.ANTIALIAS was removed in Pillow 10)
    results = model(im)  # inference
    results.render()  # updates results.imgs with boxes and labels
    return Image.fromarray(results.imgs[0])
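# Hedged sketch added for illustration (not part of the original app): besides the rendered
# image, the YOLOv5 Detections object can return the boxes as a pandas DataFrame via
# results.pandas().xyxy, which is handy for counting detections per class.
def yolo_counts(im, size=640):
    """Return a {class_name: count} dict for one PIL image (illustrative helper)."""
    g = size / max(im.size)  # resize gain
    im = im.resize(tuple(int(x * g) for x in im.size), Image.LANCZOS)
    results = model(im)  # inference
    df = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
    return df['name'].value_counts().to_dict()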
inputs = gr.inputs.Image(type='pil', label="Original Image")
outputs = gr.outputs.Image(type="pil", label="Result")
title = 'Trampas Barceló'
description = """
<p>
 <center>
 System developed by the Subsecretaría de Innovación of the Municipio de Vicente López. Warning: only use photos coming from the Barceló traps, not phone pictures or images from the internet.
 <img src="https://www.vicentelopez.gov.ar/assets/images/logo-mvl.png" alt="logo" width="250"/>
 </center>
</p>
"""
article = "<p style='text-align: center'>YOLOv5 is a family of compound-scaled object detection models trained on the COCO dataset, and includes " \
"simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, " \
"and export to ONNX, CoreML and TFLite. <a href='https://colab.research.google.com/drive/1fbeB71yD09WK2JG9P3Ladu9MEzQ2rQad?usp=sharing'>Source code</a> |" \
"<a href='https://colab.research.google.com/drive/1FxaL8DcYgvjPrWfWruSA5hvk3J81zLY9?usp=sharing'>Colab Deploy</a> | <a href='https://github.com/ultralytics/yolov5'>PyTorch Hub</a></p>"
examples = [['ejemplo1.jpg'], ['ejemplo2.jpg']]
gr.Interface(yolo, inputs, outputs, title=title, description=description, article=article, examples=examples, analytics_enabled=False).launch(
debug=True)
"""For YOLOv5 PyTorch Hub inference with **PIL**, **OpenCV**, **Numpy** or **PyTorch** inputs please see the full [YOLOv5 PyTorch Hub Tutorial](https://github.com/ultralytics/yolov5/issues/36).
## Citation
[![DOI](https://zenodo.org/badge/264818686.svg)](https://zenodo.org/badge/latestdoi/264818686)
"""