
Learned Trajectory Annotation

An unsupervised autoencoder learns spatial context from trajectory data for annotation.

This research addresses the challenge of enabling more intuitive human-robot interaction in shared spaces, with a particular focus on grounding verbal communication in spatial understanding. The work introduces a novel unsupervised learning methodology based on neural autoencoders.

Figure: Learning spatial context representations from a trajectory (visualization of the spatial perception field, e.g. an isovist, from a point on the path).

The core contribution is a system that learns continuous, low-dimensional representations of spatial context directly from trajectory data, without requiring explicit environmental maps or predefined regions. By processing sequences of spatial perceptions (analogous to visibility fields or isovists) along a path, the autoencoder captures salient environmental features relevant to movement.
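A minimal sketch of this idea, assuming PyTorch and treating each spatial perception as a fixed-length ray-cast distance vector (a rough isovist approximation); the layer sizes, `N_RAYS`, and `LATENT_DIM` are illustrative choices, not the architecture used by Feld et al. (2018):

```python
import torch
import torch.nn as nn

N_RAYS = 360        # one distance reading per degree around the agent (assumed)
LATENT_DIM = 8      # low-dimensional spatial-context code (illustrative)

class IsovistAutoencoder(nn.Module):
    """Compresses a ray-cast visibility field into a latent spatial code."""
    def __init__(self, n_rays=N_RAYS, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_rays, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_rays),
        )

    def forward(self, x):
        z = self.encoder(x)        # latent spatial-context representation
        return self.decoder(z), z  # reconstruction plus the code itself

model = IsovistAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for real isovists sampled along trajectories: 1024 points total.
isovists = torch.rand(1024, N_RAYS)
for epoch in range(10):
    recon, _ = model(isovists)
    loss = loss_fn(recon, isovists)  # reconstruction objective, no labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Training only on reconstruction is what makes the method unsupervised: the bottleneck forces the encoder to keep whatever environmental structure is needed to rebuild the visibility field, without maps or region labels.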

Figure: Clustering and annotation of trajectories based on learned spatial representations.

These learned latent representations enable clustering of trajectories by shared spatial experience. The outcome is a set of semantically meaningful encodings and prototypical representations of movement patterns within an environment. This lays the groundwork for robotic systems that can understand, interpret, and potentially describe movement through space in human-comprehensible terms, a promising direction for future human-robot collaboration. (Feld et al., 2018)
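Continuing the sketch above, the per-point latent codes could be pooled per trajectory and then clustered; k-means via scikit-learn, mean pooling, and the trajectory segmentation (16 trajectories of 64 points each) are illustrative assumptions here, not the paper's exact method:

```python
import numpy as np
from sklearn.cluster import KMeans

# Encode every isovist, then pool codes per trajectory (mean pooling assumed).
with torch.no_grad():
    _, codes = model(isovists)                     # (1024, LATENT_DIM)
trajectory_codes = codes.numpy().reshape(16, 64, LATENT_DIM).mean(axis=1)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(trajectory_codes)  # cluster id per trajectory
prototypes = kmeans.cluster_centers_           # prototypical spatial codes
```

In this reading, each cluster centroid plays the role of a prototypical movement pattern: trajectories assigned to the same cluster passed through similar spatial contexts, which is the hook for attaching human-readable annotations.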