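"""Streamlit app that highlights a segmented "protection zone" and flags detected
objects that overlap it.

A minimal setup sketch (assuming this file is saved as app.py and the weight files
best-3.pt and yolo11s.pt are available in the working directory):

    pip install streamlit ultralytics opencv-python numpy
    streamlit run app.py
"""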
import streamlit as st
from ultralytics import YOLO
import numpy as np
import cv2

# Load models
seg_model = YOLO("best-3.pt")   # custom segmentation model for the protection zone
det_model = YOLO("yolo11s.pt")  # pretrained YOLO11 model for general object detection

# Streamlit app title
st.title("Protection Zone and Object Detection")

# Upload image
uploaded_file = st.file_uploader("Choose an image...", type=["jpg", "jpeg", "png"])

if uploaded_file is not None:
    # Read the image
    image = uploaded_file.read()
    image_np = np.frombuffer(image, np.uint8)
    image_cv = cv2.imdecode(image_np, cv2.IMREAD_COLOR)
    
    # Predict the protection zone with the segmentation model
    segment_results = seg_model(image_cv)
    protection_mask = np.zeros(image_cv.shape[:2], dtype=np.uint8)  # empty mask to accumulate segments
    
    for result in segment_results:
        if result.masks is not None:
            for segment in result.masks.data:
                # Convert the mask tensor to uint8 and resize it to the image size;
                # nearest-neighbor interpolation keeps the mask binary
                segment_array = segment.cpu().numpy().astype(np.uint8)
                segment_array = cv2.resize(segment_array, (image_cv.shape[1], image_cv.shape[0]),
                                           interpolation=cv2.INTER_NEAREST)
                protection_mask = cv2.bitwise_or(protection_mask, segment_array * 255)
    
    # Create a copy of the original image to draw on
    output_image = image_cv.copy()
    
    # Overlay the protection zone mask on the output image, tinting only the masked
    # pixels so the rest of the image keeps its original colors
    protection_overlay = cv2.applyColorMap(protection_mask, cv2.COLORMAP_COOL)
    blended = cv2.addWeighted(output_image, 0.7, protection_overlay, 0.3, 0)
    output_image[protection_mask > 0] = blended[protection_mask > 0]
    
    # Predict objects with the detection model
    object_results = det_model(image_cv)
    
    for result in object_results:
        boxes = result.boxes.xyxy.cpu().numpy().astype(int)
        for box in boxes:
            x1, y1, x2, y2 = box
            # Flag the object if its bounding box overlaps the protection zone
            in_zone = np.any(protection_mask[y1:y2, x1:x2] > 0)
            color = (0, 0, 255) if in_zone else (0, 255, 0)  # BGR: red inside the zone, green outside
            # Draw bounding box around the object
            cv2.rectangle(output_image, (x1, y1), (x2, y2), color, 2)
    
    # Display the final image
    st.image(output_image, caption="Protection Zone and Detected Objects", channels="BGR")
else:
    st.write("Please upload an image to process.")