In general, lidar removes the ambiguity in a local ground scan, while cameras extrapolate overlapping texture gradients to infer distant surface structure (documented in the old book https://www.amazon.com/Learning-OpenCV-Computer-Vision-Libra... ).
There are some fairly good FOSS tools around like COLMAP, if you want to learn why automatic monocular pose recovery and SfM is hard.
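One core reason monocular SfM is hard is scale ambiguity: scaling the whole scene and the camera baseline by any factor produces pixel-identical images, so absolute scale is unrecoverable from images alone. A minimal numpy sketch of that, using a hypothetical pinhole camera with a made-up focal length and baseline:

```python
import numpy as np

# Toy pinhole projection: why monocular SfM cannot recover absolute scale.
# A 3D point (X, Y, Z) projects to (f*X/Z, f*Y/Z). Scaling the scene AND
# the camera baseline by any factor s leaves every projection unchanged.

f = 500.0  # hypothetical focal length in pixels (assumption for illustration)

def project(points, cam_t):
    """Project 3D points (N,3) seen from a camera translated by cam_t."""
    p = points - cam_t
    return np.stack([f * p[:, 0] / p[:, 2], f * p[:, 1] / p[:, 2]], axis=1)

rng = np.random.default_rng(0)
scene = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # points in front of camera
baseline = np.array([0.2, 0.0, 0.0])                        # second camera offset

s = 3.7  # arbitrary scale factor
img1a, img2a = project(scene, np.zeros(3)), project(scene, baseline)
img1b, img2b = project(s * scene, np.zeros(3)), project(s * scene, s * baseline)

# Two reconstructions differing by scale s yield identical pixels:
print(np.allclose(img1a, img1b), np.allclose(img2a, img2b))  # True True
```

This is why pipelines like COLMAP report reconstructions up to an unknown global scale, and why fusing in lidar (or another metric sensor) is the usual fix.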
Real autonomous robotics is hard, and people make the same predictable mistakes every 4 years. Retrofitting a consumer Yarbo would be cool though. =3