anyloc.github.io - AnyLoc: Towards Universal Visual Place Recognition


Tags: sam, dino, vpr, foundation models, anyloc, foundvlad, visual place recognition, dinov2


VPR is vital for robot localization. To date, the most performant VPR approaches are environment- and task-specific: while they exhibit strong performance in structured environments (predominantly urban driving), their performance degrades severely in unstructured environments, rendering most approaches brittle to robust real-world deployment. In this work, we develop a universal solution to VPR – a technique that works across a broad range of structured and unstructured environments (urban, outdoors, indoor, aerial, underwater, and subterranean environments) without any re-training or fine-tuning.
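The released pipeline is not reproduced here, but the core idea of training-free VPR with foundation-model features can be sketched. Below is a minimal illustration in Python, assuming the public facebookresearch/dinov2 torch.hub entry point; it uses the model's global embedding as a place descriptor, which is a simplification of AnyLoc's aggregation of dense features (the page's tags reference dinov2 and VLAD-style aggregation), not the paper's exact method.

```python
# Minimal sketch: training-free place descriptors from a self-supervised
# backbone (here DINOv2 via torch.hub), as a simplified stand-in for
# AnyLoc's dense-feature aggregation pipeline.
import torch
import torchvision.transforms as T
from PIL import Image

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

# Standard ImageNet normalization; input side lengths must be multiples
# of 14 (the ViT patch size).
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def describe(path: str) -> torch.Tensor:
    """Return an L2-normalized global descriptor for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    feat = model(x)  # (1, D) global embedding
    return torch.nn.functional.normalize(feat, dim=-1).squeeze(0)
```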

Simply click on any point on the query trajectory (upper left) and observe its best retrieval and similarity heatmap on the database trajectory (upper right). Use the drop-down menu to switch to a different environment. The corresponding query and best retrieval images are visualized at the lower left and lower right, respectively. Points marked with a ⭐ on the database trajectory indicate the best retrieval; yellow indicates higher similarity on the database trajectory (upper right).
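Under the hood, a "best retrieval" plus "similarity heatmap" display boils down to nearest-neighbor search over descriptors. A minimal sketch, assuming L2-normalized descriptors (such as those produced by the snippet above) stacked row-wise for the database trajectory:

```python
import numpy as np

def retrieve(query_desc: np.ndarray, db_descs: np.ndarray):
    """Cosine similarity of one query descriptor against the database.

    query_desc: (D,) L2-normalized query descriptor.
    db_descs:   (N, D) L2-normalized database descriptors.
    Returns the index of the best retrieval and the per-frame
    similarity scores (what the heatmap colors encode).
    """
    sims = db_descs @ query_desc  # (N,) cosine similarities
    return int(np.argmax(sims)), sims

# Toy usage: 1000 database frames with 384-D descriptors.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 384))
db /= np.linalg.norm(db, axis=1, keepdims=True)
query = db[42] + 0.05 * rng.normal(size=384)  # noisy copy of frame 42
query /= np.linalg.norm(query)
best, heat = retrieve(query, db)
print(best)  # expected: 42, the closest database frame
```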

If you use the source code of this website, please also link back to the Nerfies source code in your footer.
