## Abstract

This paper presents a visual navigation method for determining the position and orientation of a ground robot from a diffusion map of robot images (captured by a camera in an elevated position, e.g., on a tower or a drone), and for investigating the robot's stability with respect to desired paths under control with time delay. The time delay arises from the image processing required for visual navigation. We consider the diffusion map as a possible alternative to the currently popular deep learning, comparing the capabilities of the two methods for visual navigation of ground robots. The diffusion map projects an image (described by a point in a multidimensional space) onto a low-dimensional manifold while preserving the mutual relationships between the data. We obtain the ground robot's position and orientation as a function of the coordinates of the robot image on the low-dimensional manifold produced by the diffusion map, and we compare these coordinates with those obtained from deep learning. The diffusion-map algorithm is more accurate and is insensitive to changes in lighting, the appearance of external moving objects, and similar phenomena; however, it requires more computation time than deep learning. We consider possible future steps for reducing this computation time.
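The core construction described above (projecting high-dimensional image vectors onto a low-dimensional manifold that preserves mutual relationships between the data) can be sketched with the standard diffusion-map recipe: build a Gaussian affinity kernel, row-normalize it into a Markov matrix, and embed with its leading nontrivial eigenvectors. This is a minimal illustration, not the paper's implementation; the bandwidth `eps` and component count are assumptions that would need tuning for real robot images.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2):
    """Project rows of X (e.g., flattened images) to diffusion coordinates.

    eps is the Gaussian kernel bandwidth (an assumed, data-dependent choice).
    """
    # Pairwise squared Euclidean distances between all samples.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Gaussian affinity kernel.
    K = np.exp(-d2 / eps)
    # Row-normalize to obtain a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecomposition; the leading eigenvector is trivial (constant).
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Diffusion coordinates: nontrivial eigenvectors scaled by eigenvalues.
    idx = order[1:n_components + 1]
    return vecs.real[:, idx] * vals.real[idx]

# Toy example: points on a noisy circle embedded in 3-D.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t), 0.01 * rng.standard_normal(100)])
Y = diffusion_map(X, eps=0.5)
print(Y.shape)  # (100, 2)
```

In the paper's setting, each row of `X` would be a vectorized camera image, and the robot's position and orientation would then be read off as functions of the resulting low-dimensional coordinates.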

| Original language | English |
|---|---|
| Article number | 2175 |
| Pages (from-to) | 1-16 |
| Number of pages | 16 |
| Journal | Mathematics |
| Volume | 8 |
| Issue number | 12 |
| DOIs | |
| State | Published - Dec 2020 |

## Keywords

- Airborne control
- Artificial neural network
- Autopilot
- Deep learning convolution network
- Diffusion map
- Ground robots
- Prototype
- Stability of differential equations
- Tethered platform
- Time delay
- Vision-based navigation
- Visual navigation