The recognition of places using visual information in underwater environments is important when performing autonomous robotic exploration of the same area at different periods of time. It helps the robot know its location and make decisions accordingly. However, vision-based recognition of underwater places can be a very challenging task due to the inherent properties of these environments...
In this paper, we present a method for vision-based place recognition in environments with a high content of similar features that are prone to variations in illumination. The high similarity of features makes it difficult to disambiguate between two different places. The novelty of our method lies in using the Bag of Words (BoW) approach to derive an image descriptor from a set of relevant...
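The Bag of Words idea mentioned above is standard in visual place recognition: local feature descriptors are clustered into a "vocabulary" of visual words, and each image is summarized as a histogram of word occurrences. The sketch below is a generic, minimal illustration of that pipeline (plain k-means over descriptor vectors), not the authors' implementation; the function names and parameters are illustrative assumptions.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Cluster local feature descriptors into k visual words via plain k-means.

    descriptors: (n, d) array of local feature vectors (e.g. from SIFT/ORB).
    Returns the (k, d) array of cluster centers (the visual vocabulary).
    """
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center (Euclidean distance).
        dists = np.linalg.norm(
            descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned descriptors.
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_descriptor(descriptors, vocabulary):
    """BoW image descriptor: normalized histogram of nearest visual words."""
    dists = np.linalg.norm(
        descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

Two images of the same place should then yield similar histograms, which can be compared with, e.g., cosine similarity, even when individual features are ambiguous on their own.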
Vision-based place recognition in underwater environments is a key component of autonomous robotic exploration. However, this task can be very challenging due to the inherent properties of these environments, such as color distortion, poor visibility, perceptual aliasing, and dynamic illumination. In this paper, we present a method for vision-based place recognition in coral reefs. Our method relies...
We present a behavioral approach for autonomous robotic exploration of marine habitats with collision avoidance given little or no prior information. In our previous work, a vision-based reactive navigation paradigm with a predefined forward direction allowed an underwater robot to avoid unexpected obstacles. In this work, we have incorporated visual perceptive invariants to guide the navigation...
Vision-based autonomous robotic exploration of unstructured and highly dynamic environments is a complex task. We present an approach for carrying out attention-driven exploration of underwater environments. This work aims to grant autonomy to an exploring agent in deciding where to move based on relevant visual information. This way we can obtain close video observations of regions...
We present a vision-based approach for reactive autonomous navigation of an underwater vehicle. In particular, we are interested in the exploration and continuous monitoring of coral reefs in order to diagnose disease or physical damage. An autonomous underwater vehicle needs to decide the best route in real time while avoiding collisions with fragile marine life and structures. We have opted to use...