
Notchbit AV 2.0 is a real-time, end-to-end, camera-based, full-stack AI driving system for autonomous vehicles.
- Supports RGB and grayscale fisheye cameras.
- Operates with 2 to 4 cameras.
- An optional 2D lidar point cloud enables higher safety functions.
- Targets the SIL 3 safety integrity level.
- Designed for autonomous mobile robots (AMRs) at low speeds, up to 30 km/h.
- Operates in indoor or controlled outdoor environments.
- Navigates using pre-installed maps or online SLAM.
- Supports multi-destination navigation.
- Supports OTA updates, remote diagnostics, and remote control.
- Supports collision detection with obstacle avoidance (see the control-step sketch below).
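
The sketch below shows, in a minimal and purely illustrative way, how a synchronized 2-4 camera end-to-end control step with an optional 2D lidar safety gate could be wired. The camera stub, the toy policy, and the 0.3 m threshold are assumptions for illustration, not the Notchbit AV 2.0 implementation.

```python
# Illustrative sketch only: a multi-camera end-to-end control step
# with an optional 2D lidar safety gate. All classes are stand-ins.
import torch

class FakeCamera:
    """Stand-in for an RGB or grayscale fisheye camera stream."""
    def read(self):
        return torch.rand(3, 64, 64)            # one synchronized frame

class EndToEndPolicy(torch.nn.Module):
    """Toy stand-in: maps stacked camera frames straight to (steer, speed)."""
    def __init__(self, n_cameras):
        super().__init__()
        self.net = torch.nn.Linear(n_cameras * 3 * 64 * 64, 2)
    def forward(self, frames):                  # (n_cameras, 3, 64, 64)
        return self.net(frames.flatten())

def control_step(policy, cameras, lidar_min_range=None):
    frames = torch.stack([cam.read() for cam in cameras])
    steer, speed = policy(frames)
    # Optional 2D lidar gate, standing in for the higher safety functions:
    if lidar_min_range is not None and lidar_min_range < 0.3:
        speed = torch.zeros(())                 # emergency stop
    return steer, speed

cameras = [FakeCamera() for _ in range(2)]      # 2 to 4 cameras supported
print(control_step(EndToEndPolicy(len(cameras)), cameras))
```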
Currently supported end-user functions
- Autonomous driving based on a single front fisheye camera.
- Single-target navigation with collision detection only.
- Indoor operation using a pre-installed map (see the hypothetical client sketch below).
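
The snippet below walks through the currently supported flow at client level: one front fisheye camera, one navigation target, a pre-installed indoor map. The NotchbitAV class and its methods are hypothetical stand-ins for illustration, not the shipped SDK.

```python
# Hypothetical client-level sketch of the currently supported feature set.
from dataclasses import dataclass

@dataclass
class NotchbitAV:                      # hypothetical stand-in, not the real API
    map_path: str
    camera: str = "front_fisheye"
    def set_destination(self, x: float, y: float) -> None:
        print(f"navigating {self.map_path} to ({x}, {y}) via {self.camera}")
    def collision_detected(self) -> bool:
        return False                   # real system: camera-based detection

av = NotchbitAV(map_path="warehouse_a.map")    # pre-installed indoor map
av.set_destination(12.5, 4.0)                  # one target; multi-destination
if av.collision_detected():                    # is listed but not yet enabled
    print("stop: obstacle ahead")              # detection only, no avoidance
```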

Product facts
System requirements
- NVIDIA RTX 3060 GPU or NVIDIA Jetson AGX Orin
- RAM Footprint: 4 GB
- ROM Footprint: 1.6 GB
- CPU load: 60% on a quad-core Armv8 CPU @ 1.2 GHz
Software requirements
- Linux / QNX
- PyTorch 2.5 C++ libraries (LibTorch)
- CUDA 12.0 (a quick environment check is sketched below)
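
One quick way to verify the software prerequisites on a target machine is a short PyTorch introspection script; the checks below use only standard PyTorch calls, and the expected values follow the requirements above.

```python
# Sanity check for the stated software requirements (PyTorch 2.5, CUDA 12.0).
import torch

print("PyTorch:", torch.__version__)          # expect 2.5.x
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)      # expect 12.x
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))   # e.g. RTX 3060 / Orin
```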
Notchbit AV 2.0 is available in two delivery options
- As software: a Docker image or a virtual machine image.
- As a standalone ECU.

The product's system architecture is based on an end-to-end model
- System components are trained end-to-end.
- The system borrows concepts from LLMs to generate the driving sequence (see the decoder sketch below).
- The system is monitored by a coupled 2D-lidar-based safety system.
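
As a rough illustration of the LLM-concept claim, the sketch below autoregressively decodes discrete "driving tokens" with a small causal transformer, the same way a language model extends a text sequence. The vocabulary, context length, and model sizes are invented for the example and are not the Notchbit AV 2.0 model.

```python
# Illustrative sketch: LLM-style autoregressive generation of driving tokens
# (e.g. quantized steering/velocity bins). Sizes are arbitrary assumptions.
import torch
import torch.nn as nn

VOCAB = 256          # assumed size of the quantized action vocabulary
CTX = 32             # assumed maximum context length

class DrivingDecoder(nn.Module):
    def __init__(self, d_model=128, n_head=4, n_layers=2):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(CTX, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        # tokens: (batch, seq) of past driving tokens
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.tok(tokens) + self.pos(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.blocks(x, mask=mask, is_causal=True)
        return self.head(x)                     # next-token logits per position

@torch.no_grad()
def generate(model, prompt, steps=8):
    # Autoregressive rollout: feed each predicted token back in,
    # exactly as an LLM extends a text sequence.
    seq = prompt
    for _ in range(steps):
        logits = model(seq[:, -CTX:])
        nxt = logits[:, -1].argmax(-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
    return seq

model = DrivingDecoder()
print(generate(model, torch.zeros(1, 1, dtype=torch.long)).shape)  # (1, 9)
```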
The synthetic training system is built on the latest autonomous mobile robot simulators
- The system is based on NVIDIA Isaac Sim and Cosmos for realistic data generation.
- Implements Notchbit's latest reinforcement-learning system (a generic training-step sketch follows below).
- Utilizes Notchbit's latest training-scenario management system, with CI/CD pipeline support.
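
For a flavor of what a simulator-driven reinforcement-learning step looks like, here is a generic REINFORCE loop against a toy stand-in environment; in the real pipeline, Isaac Sim / Cosmos would supply the observations and rewards. Everything in the snippet, including the reward shape, is an illustrative assumption, not Notchbit's RL system.

```python
# Generic REINFORCE training loop with a toy simulator stand-in.
import torch
import torch.nn as nn

class ToySim:
    """Stand-in for a simulator episode: random 'camera features', reward
    favoring small steering commands (purely illustrative)."""
    def reset(self):
        self.t = 0
        return torch.randn(16)
    def step(self, action):
        self.t += 1
        reward = 1.0 - abs(action.item())       # prefer gentle steering
        return torch.randn(16), reward, self.t >= 20

policy = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(5):                        # REINFORCE over short episodes
    sim, logps, rewards = ToySim(), [], []
    obs, done = sim.reset(), False
    while not done:
        mean, log_std = policy(obs)             # Gaussian policy head
        dist = torch.distributions.Normal(mean, log_std.exp())
        action = dist.sample()
        obs, r, done = sim.step(action)
        logps.append(dist.log_prob(action))
        rewards.append(r)
    ret = sum(rewards)                          # undiscounted return
    loss = -torch.stack(logps).sum() * ret      # REINFORCE gradient estimate
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"episode {episode}: return {ret:.2f}")
```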