Layer Role

The perception layer turns raw sensor data into structured observations that upper layers can consume directly.

In the current repository, it mainly covers:

  • object detection
  • target tracking
  • auto-aim visual front-end logic
  • general YOLO-style 2D detections
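As a sketch of what a "structured observation" from this layer might look like, the hypothetical Python dataclasses below mirror a YOLO-style 2D detection; the field names are illustrative, not the repository's actual message definitions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection2D:
    """One YOLO-style 2D detection: pure geometry plus class info,
    with no control semantics attached (hypothetical schema)."""
    class_id: int                          # detector class index
    score: float                           # confidence in [0, 1]
    bbox: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), pixels

@dataclass
class DetectionFrame:
    """All detections for one image, stamped with the source frame
    so upper layers can associate them with camera data."""
    stamp_ns: int                          # sensor timestamp, nanoseconds
    frame_id: str                          # e.g. "camera_optical_frame"
    detections: List[Detection2D] = field(default_factory=list)
```

In a real deployment these would typically be ROS2 messages (e.g. vision_msgs-style types) rather than plain dataclasses; the point is only the shape of the data handed upward.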

Current Directory Mapping

  • driver/ros2_hik_camera
  • perception/rm_auto_aim
  • perception/yolo_detector


Interface Guidance

New perception modules should, wherever possible, follow these rules:

  1. reuse standard image-facing inputs such as /image_raw and /camera_info
  2. keep detection, tracking, and control outputs separated
  3. do not mix control semantics into pure detector messages
  4. generic detectors should stay decoupled from competition-specific task logic
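To illustrate rules 2–4, the sketch below keeps a generic detector output free of control semantics and moves competition-specific target selection into a separate function outside the detector; all names here are hypothetical, not APIs from the repository.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    """Pure perception output: what was seen and how confident.
    No aim/control fields here (rule 3)."""
    class_name: str
    score: float
    cx: float  # bbox center x, pixels
    cy: float  # bbox center y, pixels

def select_target(dets: List[Detection],
                  wanted: str = "armor") -> Optional[Detection]:
    """Task-specific selection lives outside the generic detector
    (rule 4): pick the highest-confidence detection of the wanted
    class, or None if nothing matches."""
    candidates = [d for d in dets if d.class_name == wanted]
    return max(candidates, key=lambda d: d.score, default=None)
```

Keeping `select_target` out of the detector means the same detector node can serve auto-aim and any other consumer, each applying its own selection logic on a separate topic.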
© 2026 Venom Algorithm