torch

Attributes

__all__ module-attribute

__all__ = ['YOLOv5SegmentationPipeline', 'YOLOSegmentationMixin']

Classes

YOLOSegmentationMixin

Bases: YOLODetectionMixin

Shared YOLOv5 segmentation logic (extends detection).

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `image_size` | `tuple[int, int]` | Target image size. |
| `stride` | `int` | Model stride. |
| `conf_threshold` | `float` | Confidence threshold for detections. |
| `iou_threshold` | `float` | IoU threshold for NMS. |
| `class_names` | `dict[int, str]` | Mapping from class ID to class name. |

YOLOv5SegmentationPipeline

```python
YOLOv5SegmentationPipeline(
    runtime: Runtime[Tensor, tuple[Tensor, Tensor]],
    image_size: tuple[int, int] = (640, 640),
    stride: int = 32,
    conf_threshold: float = 0.25,
    iou_threshold: float = 0.45,
    class_names: dict[int, str] | None = None,
    batch_strategy: BatchStrategy[Tensor, tuple[Tensor, Tensor]] | None = None,
)
```

Bases: YOLOSegmentationMixin, Pipeline[Tensor, tuple[Tensor, Tensor], list[SegmentationOutput]]

YOLOv5 instance segmentation pipeline (sync version).

Performs:

- Image decoding and conversion
- Image resizing and normalization
- YOLOv5 inference
- Instance segmentation mask extraction with NMS
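
The resizing step above commonly follows the YOLOv5 letterbox convention: scale the image to fit the target size while preserving aspect ratio, then pad the remainder symmetrically. A minimal sketch of the geometry, assuming that convention (`letterbox_params` is an illustrative name, not part of this API):

```python
# Hedged sketch: the scale/padding geometry of YOLOv5-style letterboxing.
# `letterbox_params` is illustrative, not a function of this library.

def letterbox_params(orig_hw, target_hw=(640, 640)):
    """Return (scale, (top, left) padding, resized (h, w)) that fit an image
    of size orig_hw into target_hw while preserving aspect ratio."""
    oh, ow = orig_hw
    th, tw = target_hw
    scale = min(th / oh, tw / ow)           # uniform scale that fits both axes
    nh, nw = round(oh * scale), round(ow * scale)
    pad_h, pad_w = th - nh, tw - nw         # leftover space to fill with padding
    top, left = pad_h // 2, pad_w // 2      # split padding evenly between sides
    return scale, (top, left), (nh, nw)
```

For example, a 720x1280 frame is scaled by 0.5 to 360x640 and padded with 140 pixels on top and bottom to reach 640x640.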

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `runtime` | `Runtime[Tensor, tuple[Tensor, Tensor]]` | Inference runtime. | required |
| `image_size` | `tuple[int, int]` | Target image size. | `(640, 640)` |
| `stride` | `int` | Model stride. | `32` |
| `conf_threshold` | `float` | Confidence threshold for detections. | `0.25` |
| `iou_threshold` | `float` | IoU threshold for NMS. | `0.45` |
| `class_names` | `dict[int, str] \| None` | Optional mapping from class ID to class name. | `None` |
| `batch_strategy` | `BatchStrategy[Tensor, tuple[Tensor, Tensor]] \| None` | Optional batching strategy. | `None` |
Example
```python
runtime = TorchScriptRuntime(
    model_path="yolov5s.pt", device="cuda"
)
pipeline = YOLOv5SegmentationPipeline(
    runtime=runtime,
    class_names={0: "person", 1: "bicycle", 2: "car"},
)
with pipeline.serve():
    results = pipeline(image_bytes)
    for result in results:
        print(
            f"{result.class_name}: {result.confidence:.2%} at {result.box.to_xywh()}"
        )
```
Source code in inferflow/pipeline/segmentation/torch.py
```python
def __init__(
    self,
    runtime: Runtime[torch.Tensor, tuple[torch.Tensor, torch.Tensor]],
    image_size: tuple[int, int] = (640, 640),
    stride: int = 32,
    conf_threshold: float = 0.25,
    iou_threshold: float = 0.45,
    class_names: dict[int, str] | None = None,
    batch_strategy: BatchStrategy[torch.Tensor, tuple[torch.Tensor, torch.Tensor]] | None = None,
):
    super().__init__(runtime=runtime, batch_strategy=batch_strategy)

    self.image_size = image_size
    self.stride = stride
    self.conf_threshold = conf_threshold
    self.iou_threshold = iou_threshold
    self.class_names = class_names or {}

    self._original_size = None
    self._padding = None
```
Attributes
image_size instance-attribute
image_size = image_size
stride instance-attribute
stride = stride
conf_threshold instance-attribute
conf_threshold = conf_threshold
iou_threshold instance-attribute
iou_threshold = iou_threshold
class_names instance-attribute
class_names = class_names or {}
Functions
preprocess
preprocess(input: ImageInput) -> Tensor

Preprocess image input for YOLOv5-seg.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input` | `ImageInput` | Raw image input (bytes, numpy array, PIL Image, or tensor). | required |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Preprocessed tensor ready for model inference. |

Source code in inferflow/pipeline/segmentation/torch.py
```python
def preprocess(self, input: ImageInput) -> torch.Tensor:
    """Preprocess image input for YOLOv5-seg.

    Args:
        input: Raw image input (bytes, numpy array, PIL Image, or tensor).

    Returns:
        Preprocessed tensor ready for model inference.
    """
    image = self._convert_to_numpy(input)
    return self._preprocess_numpy(image)
```
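
As a rough companion to the source above, here is a hedged numpy sketch of the tensor conversion a YOLOv5 preprocess typically performs after resizing: an HWC uint8 image becomes an NCHW float32 batch scaled to [0, 1]. `to_model_input` is an illustrative helper, not part of the pipeline API:

```python
import numpy as np

# Hedged sketch of the usual YOLOv5 input conversion after letterboxing.
# `to_model_input` is illustrative, not a function of this library.

def to_model_input(image: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 image to an NCHW float32 array in [0, 1]."""
    chw = image.transpose(2, 0, 1)             # HWC -> CHW
    batched = chw[np.newaxis, ...]             # add batch dimension: NCHW
    return batched.astype(np.float32) / 255.0  # scale pixel values to [0, 1]
```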
postprocess
postprocess(raw: tuple[Tensor, Tensor]) -> list[SegmentationOutput]

Postprocess YOLOv5-Seg output to segmentation results.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `raw` | `tuple[Tensor, Tensor]` | Raw model output (detections and protos). | required |

Returns:

| Type | Description |
| --- | --- |
| `list[SegmentationOutput]` | SegmentationOutput list with masks and bounding boxes. |

Source code in inferflow/pipeline/segmentation/torch.py
```python
def postprocess(self, raw: tuple[torch.Tensor, torch.Tensor]) -> list[SegmentationOutput]:
    """Postprocess YOLOv5-Seg output to segmentation results.

    Args:
        raw: Raw model output (detections and protos).

    Returns:
        SegmentationOutput list with masks and bounding boxes.
    """
    detections, protos = raw
    return self._postprocess_segmentation(detections, protos)
```
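
The mask extraction behind this step typically combines each detection's mask coefficients with the shared prototype tensor (the `protos` output above): a linear combination of prototypes followed by a sigmoid and a 0.5 threshold, as in the standard YOLOv5-seg recipe (32 prototypes for the stock exports). A hedged numpy sketch, with `assemble_masks` as an illustrative name:

```python
import numpy as np

# Hedged sketch of the standard YOLOv5-seg mask assembly: per-detection
# coefficients times shared prototypes, sigmoid, then binarize at 0.5.
# `assemble_masks` is illustrative, not a function of this library.

def assemble_masks(coeffs: np.ndarray, protos: np.ndarray) -> np.ndarray:
    """coeffs: (n, c) per-detection coefficients; protos: (c, h, w).
    Returns (n, h, w) boolean masks."""
    c, h, w = protos.shape
    logits = coeffs @ protos.reshape(c, -1)   # (n, h*w) mask logits
    probs = 1.0 / (1.0 + np.exp(-logits))     # sigmoid
    return (probs > 0.5).reshape(-1, h, w)    # binarize to instance masks
```

In the full pipeline these low-resolution masks are then cropped to each detection's box and upsampled back to the original image size.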
