torch

Attributes

- `__all__` (module-attribute): `__all__ = ['YOLOv5DetectionPipeline']`

Classes

YOLOv5DetectionPipeline

YOLOv5DetectionPipeline(runtime: Runtime[Tensor, tuple[Tensor, ...]], image_size: tuple[int, int] = (640, 640), stride: int = 32, conf_threshold: float = 0.25, iou_threshold: float = 0.45, class_names: dict[int, str] | None = None, batch_strategy: BatchStrategy[Tensor, tuple[Tensor, ...]] | None = None)

Bases: YOLODetectionMixin, Pipeline[Tensor, tuple[Tensor, ...], list[DetectionOutput]]

YOLOv5 object detection pipeline (async version).

Performs:
  • Image decoding and conversion
  • Resizing and normalization
  • Model inference
  • Bounding box extraction with NMS

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `runtime` | `Runtime[Tensor, tuple[Tensor, ...]]` | Inference runtime. | *required* |
| `image_size` | `tuple[int, int]` | Target image size. | `(640, 640)` |
| `stride` | `int` | Model stride. | `32` |
| `conf_threshold` | `float` | Confidence threshold for detections. | `0.25` |
| `iou_threshold` | `float` | IoU threshold for NMS. | `0.45` |
| `class_names` | `dict[int, str] \| None` | Optional mapping from class ID to class name. | `None` |
| `batch_strategy` | `BatchStrategy[Tensor, tuple[Tensor, ...]] \| None` | Optional batching strategy. | `None` |
Example

```python
runtime = TorchScriptRuntime(
    model_path="yolov5s.pt", device="cuda"
)
pipeline = YOLOv5DetectionPipeline(
    runtime=runtime,
    class_names={0: "person", 1: "bicycle", 2: "car"},
)
async with pipeline.serve():
    results = await pipeline(image_bytes)
    for det in results:
        print(
            f"{det.class_name}: {det.confidence:.2%} at {det.box.to_xywh()}"
        )
```
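The `det.box.to_xywh()` call above converts corner coordinates into a position-plus-size format. A minimal sketch of that conversion, assuming `Box` stores `(x1, y1, x2, y2)` corners and that `to_xywh` yields the top-left corner plus width and height (the actual `Box` API may differ):

```python
def xyxy_to_xywh(box: tuple[float, float, float, float]) -> tuple[float, float, float, float]:
    """Convert (x1, y1, x2, y2) corner coordinates to (x, y, w, h)."""
    x1, y1, x2, y2 = box
    return (x1, y1, x2 - x1, y2 - y1)


print(xyxy_to_xywh((10.0, 20.0, 110.0, 70.0)))  # (10.0, 20.0, 100.0, 50.0)
```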
Source code in inferflow/asyncio/pipeline/detection/torch.py
```python
def __init__(
    self,
    runtime: Runtime[torch.Tensor, tuple[torch.Tensor, ...]],
    image_size: tuple[int, int] = (640, 640),
    stride: int = 32,
    conf_threshold: float = 0.25,
    iou_threshold: float = 0.45,
    class_names: dict[int, str] | None = None,
    batch_strategy: BatchStrategy[torch.Tensor, tuple[torch.Tensor, ...]] | None = None,
):
    super().__init__(runtime=runtime, batch_strategy=batch_strategy)

    self.image_size = image_size
    self.stride = stride
    self.conf_threshold = conf_threshold
    self.iou_threshold = iou_threshold
    self.class_names = class_names or {}

    self._original_size = None
    self._padding = None
```
Attributes

- `image_size` (instance-attribute): `image_size = image_size`
- `stride` (instance-attribute): `stride = stride`
- `conf_threshold` (instance-attribute): `conf_threshold = conf_threshold`
- `iou_threshold` (instance-attribute): `iou_threshold = iou_threshold`
- `class_names` (instance-attribute): `class_names = class_names or {}`
Functions
preprocess async
preprocess(input: ImageInput) -> Tensor

Preprocess image input for YOLOv5.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input` | `ImageInput` | Raw image input (bytes, numpy array, PIL Image, or tensor). | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Preprocessed tensor ready for model inference. |

Source code in inferflow/asyncio/pipeline/detection/torch.py
```python
async def preprocess(self, input: ImageInput) -> torch.Tensor:
    """Preprocess image input for YOLOv5.

    Args:
        input: Raw image input (bytes, numpy array, PIL Image, or tensor).

    Returns:
        Preprocessed tensor ready for model inference.
    """
    image = self._convert_to_numpy(input)
    return self._preprocess_numpy(image)
```
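YOLOv5-style preprocessing letterboxes the image to `image_size` while padding each side up to a multiple of `stride`, and records the scale and padding so detections can be mapped back to the original frame. A self-contained NumPy sketch of that resizing logic (the helper name and return shape here are illustrative, not the pipeline's private `_preprocess_numpy` API):

```python
import numpy as np


def letterbox(
    image: np.ndarray,
    size: tuple[int, int] = (640, 640),
    stride: int = 32,
    pad_value: int = 114,
) -> tuple[np.ndarray, float, tuple[int, int]]:
    """Resize an HWC uint8 image to fit `size`, padding both sides to stride multiples.

    Returns the padded image, the scale factor, and the (left, top) padding,
    which the postprocess step needs to undo the transform on predicted boxes.
    """
    h, w = image.shape[:2]
    scale = min(size[0] / h, size[1] / w)
    new_h, new_w = round(h * scale), round(w * scale)

    # Nearest-neighbour resize via index sampling (real code would use cv2/PIL).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols]

    # Pad each dimension up to the next multiple of `stride`, split evenly.
    pad_h = (-new_h) % stride
    pad_w = (-new_w) % stride
    top, left = pad_h // 2, pad_w // 2
    canvas = np.full((new_h + pad_h, new_w + pad_w, 3), pad_value, dtype=image.dtype)
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas, scale, (left, top)
```

The `(left, top)` offsets correspond to the `_padding` state the constructor initializes; the real pipeline additionally normalizes to float and reorders to CHW before inference.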
postprocess async
postprocess(raw: tuple[Tensor, ...]) -> list[DetectionOutput]

Postprocess YOLOv5 output to detection results.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `raw` | `tuple[Tensor, ...]` | Raw output tuple from model inference. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `list[DetectionOutput]` | DetectionOutput list with detected bounding boxes and class info. |

Source code in inferflow/asyncio/pipeline/detection/torch.py
```python
async def postprocess(self, raw: tuple[torch.Tensor, ...]) -> list[DetectionOutput]:
    """Postprocess YOLOv5 output to detection results.

    Args:
        raw: Raw output tuple from model inference.

    Returns:
        DetectionOutput list with detected bounding boxes and class info.
    """
    predictions = raw[0]
    return self._postprocess_detections(predictions)
```
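Under the hood, `_postprocess_detections` discards predictions below `conf_threshold` and then runs greedy non-maximum suppression at `iou_threshold`. A minimal pure-Python sketch of that filter-then-suppress logic (illustrative, not the `YOLODetectionMixin` implementation, which operates on batched tensors):

```python
def iou(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def nms(
    boxes: list[tuple[float, float, float, float]],
    scores: list[float],
    conf_threshold: float = 0.25,
    iou_threshold: float = 0.45,
) -> list[int]:
    """Return indices of kept boxes: confidence filter, then greedy suppression."""
    # Visit surviving boxes in descending score order.
    order = [i for i in sorted(range(len(boxes)), key=lambda i: -scores[i])
             if scores[i] >= conf_threshold]
    keep: list[int] = []
    for i in order:
        # Keep a box only if it does not overlap a higher-scoring kept box too much.
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

With the defaults above, two boxes overlapping at IoU ≥ 0.45 collapse to the higher-scoring one, which is why raising `iou_threshold` yields more (possibly duplicate) detections and raising `conf_threshold` yields fewer.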