In this paper, we introduce CamSLAM, a simultaneous localization and mapping (SLAM) framework composed of a powerful visual-inertial odometry backbone using an error-state Extended Kalman Filter (EKF) for sensor fusion, and a very efficient and lightweight parallel mapping engine utilizing a keyframe-based pose graph data structure and binary descriptors for feature matching and indexing. The framework...
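The error-state EKF fusion mentioned in this abstract can be illustrated with a minimal sketch. This is not the CamSLAM implementation; the state dimension, matrices, and class name are assumptions chosen for illustration. The key idea is that the filter estimates a small error state that is injected into the nominal state and reset after each update.

```python
import numpy as np

class ErrorStateEKF:
    """Minimal error-state EKF sketch (illustrative only, not CamSLAM).
    The error state dx is corrected at each measurement update, injected
    into the nominal state by the caller, and then reset to zero."""

    def __init__(self, dim=6):
        self.dx = np.zeros(dim)      # error state (reset after each update)
        self.P = np.eye(dim) * 0.1   # error-state covariance

    def predict(self, F, Q):
        # Propagate the error covariance with the linearized dynamics F
        # and process noise Q (e.g., from IMU integration).
        self.P = F @ self.P @ F.T + Q

    def update(self, residual, H, R):
        # Standard Kalman update on the error state, driven by a visual
        # measurement residual with Jacobian H and noise covariance R.
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.dx = K @ residual
        self.P = (np.eye(self.P.shape[0]) - K @ H) @ self.P
        correction, self.dx = self.dx, np.zeros_like(self.dx)
        return correction  # to be injected into the nominal state
```

In a real visual-inertial pipeline the nominal state is propagated nonlinearly from IMU samples, while this small linear filter tracks only the error, which keeps the linearization accurate.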
This paper presents a vehicle navigation system that is capable of achieving sub-meter GPS-denied navigation accuracy in large-scale urban environments, using pre-mapped visual landmarks. Our navigation system tightly couples IMU data with local feature track measurements, and fuses each observation of a pre-mapped visual landmark as a single global measurement. This approach propagates precise 3D...
In this paper, we propose a new multiple-sensing-agent-based scheme for an automated cameraman. It is capable of 1) constantly monitoring visual events in the global surroundings, and 2) dynamically determining the monitoring strategy based on the detected visual events. These heterogeneous agents are coupled in a unique way to work not only asynchronously but also collaboratively via a facilitator...
In this paper we present a dual, wide-area, collaborative augmented reality (AR) system that consists of standard live view augmentation, e.g., from a helmet, and zoomed-in view augmentation, e.g., from binoculars. The proposed advanced scouting capability allows long range high precision augmentation of live unaided and zoomed-in imagery with aerial and terrain based synthetic objects, vehicles, people...
In this paper we present an augmented reality binocular system to allow long range high precision augmentation of live telescopic imagery with aerial and terrain based synthetic objects, vehicles, people and effects. The inserted objects must appear stable in the display and must not jitter and drift as the user pans around and examines the scene with the binoculars. The design of the system is based...
This paper introduces a user-worn Augmented Reality (AR) based first-person weapon shooting system (AR-Weapon), suitable for both training and gaming. Different from existing AR-based first-person shooting systems, AR-Weapon does not use fiducial markers placed in the scene for tracking. Instead it uses natural scene features observed by the tracking camera from the live view of the world. The AR-Weapon...
AR-Mentor is a wearable real time Augmented Reality (AR) mentoring system that is configured to assist in maintenance and repair tasks of complex machinery, such as vehicles, appliances, and industrial machinery. The system combines a wearable Optical-See-Through (OST) display device with high precision 6-Degree-Of-Freedom (DOF) pose tracking and a virtual personal assistant (VPA) with natural language,...
In this paper, we expand our previous work on augmented reality (AR) binoculars to support a wider range of user motion, up to a thousand square meters compared to only a few square meters before. We present our latest improvements and additions to our pose estimation pipeline and demonstrate stable registration of objects on the real-world scenery while the binoculars are undergoing significant amount...
This paper proposes a novel vision-aided navigation approach that continuously estimates precise 3D absolute pose for aerial vehicles, using only inertial measurements and monocular camera observations. Our approach is able to provide accurate navigation solutions under long-term GPS outage, by tightly incorporating absolute geo-registered information into two kinds of visual measurements: 2D–3D tie-points,...
This paper proposes a real-time navigation approach that is able to integrate many sensor types while fulfilling performance needs and system constraints. Our approach uses a plug-and-play factor graph framework, which extends factor graph formulation to encode sensor measurements with different frequencies, latencies, and noise distributions. It provides a flexible foundation for plug-and-play sensing,...
This paper proposes a navigation algorithm that provides a low-latency solution while estimating the full nonlinear navigation state. Our approach uses Sliding-Window Factor Graphs, which extend existing incremental smoothing methods to operate on the subset of measurements and states that exist inside a sliding time window. We split the estimation into a fast short-term smoother, a slower but fully...
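The sliding-window idea in this abstract can be sketched with a toy 1-D smoother. This is not the paper's algorithm; the function name, the 1-D state, and the measurement types (relative odometry plus absolute fixes) are assumptions for illustration. The point is that only the states and measurements inside the window enter the least-squares problem, which bounds latency as new measurements arrive.

```python
import numpy as np

def solve_window(odom, fixes, n):
    """Toy sliding-window least-squares smoother over n scalar states
    x_0..x_{n-1} (illustrative sketch only). `odom` holds n-1 relative
    measurements u_i ~ x_{i+1} - x_i; `fixes` holds (index, value) pairs
    of absolute measurements. A full system would slide the window and
    marginalize states that fall out of it."""
    rows, b = [], []
    for i, u in enumerate(odom):              # odometry factor rows
        r = np.zeros(n)
        r[i], r[i + 1] = -1.0, 1.0
        rows.append(r)
        b.append(u)
    for j, z in fixes:                        # absolute-fix factor rows
        r = np.zeros(n)
        r[j] = 1.0
        rows.append(r)
        b.append(z)
    A = np.vstack(rows)
    x, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return x
```

Re-solving this small problem each step mimics the fast short-term smoother; the slower full smoother described in the abstract would instead optimize over all past states.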
A camera tracking system for augmented reality applications that can operate both indoors and outdoors is described. The system uses a monocular camera, a MEMS-type inertial measurement unit (IMU) with 3-axis gyroscopes and accelerometers, and a GPS unit to accurately and robustly track the camera motion in 6 degrees of freedom (with correct scale) in arbitrary indoor or outdoor scenes. IMU and camera...
In this paper, we present a system for detecting pedestrians at long ranges using a combination of stereo-based detection, classification using deep learning, and a cascade of specialized classifiers that can reduce false positives and computational load. Specifically, we use stereo to perform detection of vertical structures which are further filtered based on edge responses. A convolutional neural...
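The cascade of specialized classifiers mentioned in this abstract follows a common pattern: cheap filters run first on all candidates, so the expensive classifier only sees the few that survive. The sketch below is a generic illustration, not the paper's pipeline; the function name and the example predicates are assumptions.

```python
def run_cascade(candidates, stages):
    """Apply a cascade of predicate stages to a candidate list
    (illustrative sketch). Stages are ordered cheap -> expensive;
    each stage only sees candidates that passed all earlier stages,
    reducing both false positives and computational load."""
    for stage in stages:
        candidates = [c for c in candidates if stage(c)]
    return candidates
```

In the pedestrian-detection setting described above, the first stage would correspond to the stereo/edge-based vertical-structure filter and the last to the convolutional network, with each stage pruning the candidate set before the costlier one runs.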
We present a low-cost navigation system that integrates 3D data in real time, allowing exploration and mapping of complex terrain and GPS-denied regions with an inexpensive sensor package. Precise integration of 3D data from sensors on a moving platform requires accurate bearing and position estimates, delivered at high frequency. This bar has only been met using expensive, high-end IMUs and/or differential...
Stereo vision processing is a critical component of augmented reality systems that rely on a precise depth map of the scene to properly place computer-generated objects within real-life video. Important aspects of the stereo processing are the creation of a dense depth map, high boundary precision, low latency, and low power. We present an embedded system for stereo vision processing based on a custom...
We present an augmented reality system based on Kinect for on-line handbag shopping. The users can virtually try on different handbags on a TV screen at home. They can interact with the virtual handbags naturally, such as sliding a handbag to different positions on their arms and rotating a handbag to see it from different angles. The users can also see how the handbags fit them in different virtual...
The odometry component of a camera tracking system for augmented reality applications is described. The system uses a MEMS-type inertial measurement unit (IMU) with 3-axis gyroscopes and accelerometers and a monocular camera to accurately and robustly track the camera motion in 6 degrees of freedom (with correct scale) in arbitrary indoor or outdoor scenes. Tight coupling of IMU and camera is achieved...
We present a novel, computationally efficient approach to obstacle detection that is applicable to both structured (e.g., indoor, road) and unstructured (e.g., off-road, grassy terrain) environments. In contrast to previous works that attempt to explicitly identify obstacles, we explicitly detect scene regions that are traversable (safe for the robot to go to) from its current position. Traversability...
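The traversability idea in this abstract can be illustrated with a simple grid-based sketch. This is not the paper's method; the height-difference criterion, function name, and threshold are assumptions chosen for illustration. A cell is marked traversable when its terrain height differs little from each of its neighbours.

```python
import numpy as np

def traversable_cells(heights, max_step=0.1):
    """Mark grid cells whose local height variation is below max_step
    (hypothetical criterion for illustration; the paper's approach
    differs). `heights` is an HxW array of terrain heights per cell;
    a cell is traversable if it differs from each 4-neighbour by less
    than max_step. Edges wrap around for simplicity."""
    ok = np.ones(heights.shape, dtype=bool)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(heights, (dy, dx), axis=(0, 1))
        ok &= np.abs(heights - shifted) < max_step
    return ok
```

Labeling traversable regions rather than obstacles, as the abstract describes, means the robot only needs a safe-to-go map around its current position instead of an exhaustive obstacle inventory.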