Skip Visual Positioning uses the natural structure of the robot’s environment to estimate motion. We develop custom visual localization and positioning solutions built on our in-house visual odometry library, which runs in real time and supports both stereo and monocular camera configurations.
Visual SLAM Technology
Vision-based localization systems are bringing the next generation of automation to larger markets, with lower capital costs and higher accuracy. We provide core technology and integration support in a customized technology package that lets you focus on building the robots your customers need.
The Skip Visual Odometry system operating on a public monocular dataset. The estimated motion is computed in real time and does not rely on loop closure detection, ground plane assumptions, inertial (IMU) data, or wheel odometry.
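To make the monocular case concrete: two-view motion estimation of this kind is typically built on epipolar geometry. The sketch below is a generic, textbook illustration (the classic linear eight-point algorithm on synthetic, noise-free correspondences), not Skip's actual implementation; all names and the toy camera setup are assumptions for the example.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the essential matrix E from N >= 8 correspondences in
    normalized image coordinates, so that x2[i]^T @ E @ x1[i] = 0.
    Linear eight-point algorithm; illustration only."""
    # Each correspondence gives one linear constraint on the 9 entries of E.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0] * x1[:, 2],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1] * x1[:, 2],
        x2[:, 2] * x1[:, 0], x2[:, 2] * x1[:, 1], x2[:, 2] * x1[:, 2],
    ])
    # The smallest right singular vector of A is the flattened E (up to scale).
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)
    # Project onto the space of rank-2 matrices, as required of an essential matrix.
    u, s, vt = np.linalg.svd(E)
    return u @ np.diag([s[0], s[1], 0.0]) @ vt

# --- Synthetic two-view setup (toy data, not a real dataset) ---
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # 3D points in front of both cameras

# Camera 1 at the origin; camera 2 rotated about the y-axis and translated.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.0, 0.1])

x1 = X / X[:, 2:3]        # homogeneous normalized coordinates in camera 1
X2 = X @ R.T + t
x2 = X2 / X2[:, 2:3]      # homogeneous normalized coordinates in camera 2

E = eight_point(x1, x2)
# Epipolar residuals x2^T E x1: near zero for noise-free correspondences.
residual = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1)).max()
print(residual)
```

In a full visual odometry pipeline, the relative rotation and (scale-ambiguous) translation would then be recovered by decomposing `E`, with robust estimation such as RANSAC handling mismatched features in real imagery.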
Most mobile robotics platforms today use Lidar for localization and mapping because it is simple to integrate, but it has drawbacks in both range and positioning accuracy.
Modern imaging sensors deliver extremely high resolution and can enable sub-millimeter positioning within small workspaces. Because the sensor is passive, vision-based localization systems can also operate in large workspaces where Lidar range falls short, all at a much lower cost for the underlying hardware.