torch¶
Attributes¶
Classes¶
YOLODetectionMixin ¶
Shared YOLOv5 detection logic.
Attributes:
| Name | Type | Description |
|---|---|---|
| image_size | tuple[int, int] | Target image size. |
| stride | int | Model stride. |
| conf_threshold | float | Confidence threshold for detections. |
| iou_threshold | float | IoU threshold for NMS. |
| class_names | dict[int, str] | Mapping from class ID to class name. |
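The iou_threshold governs non-maximum suppression: two detections whose boxes overlap by more than this value are treated as duplicates of the same object. A minimal, dependency-free sketch of the underlying IoU computation (illustrative only; not part of the library's API):

```python
def box_iou(a: tuple, b: tuple) -> float:
    """IoU of two boxes in (x1, y1, x2, y2) corner format."""
    # Intersection rectangle.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the overlap.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

With the default iou_threshold of 0.45, a pair of boxes with IoU above 0.45 would keep only the higher-confidence detection.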
YOLOv5DetectionPipeline ¶
YOLOv5DetectionPipeline(runtime: TorchScriptRuntime, image_size: tuple[int, int] = (640, 640), stride: int = 32, conf_threshold: float = 0.25, iou_threshold: float = 0.45, class_names: dict[int, str] | None = None, batch_strategy: BatchStrategy[Tensor, tuple[Tensor, ...]] | None = None)
Bases: YOLODetectionMixin, Pipeline[Tensor, tuple[Tensor, ...], list[DetectionOutput]]
YOLOv5 object detection pipeline (sync version).
Performs:
- Image decoding and conversion
- Resizing and normalization
- Model inference
- Bounding box extraction with NMS
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| runtime | TorchScriptRuntime | Inference runtime. | required |
| image_size | tuple[int, int] | Target image size. | (640, 640) |
| stride | int | Model stride. | 32 |
| conf_threshold | float | Confidence threshold for detections. | 0.25 |
| iou_threshold | float | IoU threshold for NMS. | 0.45 |
| class_names | dict[int, str] \| None | Optional mapping from class ID to class name. | None |
| batch_strategy | BatchStrategy[Tensor, tuple[Tensor, ...]] \| None | Optional batching strategy. | None |
Example

```python
runtime = TorchScriptRuntime(
    model_path="yolov5s.pt", device="cuda"
)
pipeline = YOLOv5DetectionPipeline(
    runtime=runtime,
    class_names={0: "person", 1: "bicycle", 2: "car"},
)
with pipeline.serve():
    results = pipeline(image_bytes)
    for det in results:
        print(
            f"{det.class_name}: {det.confidence:.2%} at {det.box.to_xywh()}"
        )
```
Source code in inferflow/pipeline/detection/torch.py
Attributes¶
Functions¶
preprocess ¶
preprocess(input: ImageInput) -> Tensor
Preprocess image input for YOLOv5.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | ImageInput | Raw image input (bytes, numpy array, PIL Image, or tensor). | required |
Returns:
| Type | Description |
|---|---|
| Tensor | Preprocessed tensor ready for model inference. |
Source code in inferflow/pipeline/detection/torch.py
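YOLOv5 preprocessing conventionally performs a "letterbox" resize: scale the image while preserving aspect ratio, then pad the result up to a multiple of stride before normalizing and converting to CHW layout. A dependency-light NumPy sketch of the resize-and-pad stage (the function name, padding placement, and pad value 114 are assumptions based on common YOLOv5 practice, not this library's actual implementation):

```python
import numpy as np

def letterbox(img: np.ndarray, new_size=(640, 640), stride=32, pad_value=114):
    """Resize an HWC image keeping aspect ratio, pad to a stride multiple."""
    h, w = img.shape[:2]
    # Scale factor that fits the image inside new_size.
    r = min(new_size[0] / h, new_size[1] / w)
    new_h, new_w = round(h * r), round(w * r)
    # Nearest-neighbour resize via index mapping (avoids external deps).
    ys = (np.arange(new_h) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / r).astype(int).clip(0, w - 1)
    resized = img[ys[:, None], xs]
    # Pad bottom/right so both dims are multiples of the model stride.
    pad_h = (stride - new_h % stride) % stride
    pad_w = (stride - new_w % stride) % stride
    out = np.full((new_h + pad_h, new_w + pad_w, img.shape[2]),
                  pad_value, dtype=img.dtype)
    out[:new_h, :new_w] = resized
    return out, r
```

The returned scale factor r is what a postprocess step would use to map predicted boxes back to the original image coordinates; normalization to [0, 1] and the HWC to CHW transpose would follow before inference.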
postprocess ¶
postprocess(raw: tuple[Tensor, ...]) -> list[DetectionOutput]
Postprocess YOLOv5 output to detection results.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| raw | tuple[Tensor, ...] | Raw output tuple from model inference. | required |
Returns:
| Type | Description |
|---|---|
| list[DetectionOutput] | DetectionOutput list with detected bounding boxes and class info. |
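Postprocessing for YOLOv5 typically filters candidates by conf_threshold and then runs greedy non-maximum suppression with iou_threshold. A NumPy sketch of the NMS stage (illustrative only; the pipeline's real implementation may instead rely on something like torchvision.ops.nms):

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.45):
    """Greedy NMS. boxes is (N, 4) in (x1, y1, x2, y2); returns kept indices."""
    order = scores.argsort()[::-1]  # highest-confidence first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # IoU of the kept box against all remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop candidates that overlap the kept box too much.
        order = rest[iou <= iou_threshold]
    return keep
```

Detections that survive NMS would then be mapped back to original image coordinates and paired with class_names to build the DetectionOutput list.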