Andrew Davison (@ajddavison)'s Twitter Profile
Andrew Davison

@ajddavison

From SLAM to Spatial AI; Professor of Robot Vision, Imperial College London; Director of the Dyson Robotics Lab; Co-Founder of Slamcore. FREng, FRS.

ID: 1446792746

Link: http://www.doc.ic.ac.uk/~ajd/ · Joined: 21-05-2013 16:40:29

3.3K Tweets

18.18K Followers

2.2K Following

Taylor Ogan (@taylorogan)'s Twitter Profile Photo

Another DeepSeek moment. This is the world’s first actual smartphone. It’s an engineering prototype of ZTE’s Nubia M153 running ByteDance’s Doubao AI agent fused into Android at the OS level. It has complete control over the phone. It can see the UI, choose/download apps,

Andrew Davison (@ajddavison)'s Twitter Profile Photo

Congratulations to Xin Kong, who passed his PhD viva today! Thanks to his examiners Christian Rupprecht and Tolga Birdal. EscherNet was his key paper in generative 3D modelling, introducing Camera Positional Encoding (CaPE) to allow any number of input and output frames.
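
As a rough illustration of the idea behind camera positional encoding (not EscherNet's exact formulation, which encodes relative poses inside the attention layers): if each view's tokens are transformed by that view's camera rotation, attention scores between views depend only on their relative rotation, so the model is indifferent to how many views it sees. A minimal rotation-only sketch in PyTorch; `encode_with_pose` and all shapes are illustrative assumptions:

```python
# Minimal, rotation-only sketch of the CaPE idea. NOT the paper's
# implementation: tokens are grouped into 3-vectors and rotated by each
# view's camera rotation, so dot-product attention between two views
# depends only on their RELATIVE rotation:
#   (R_i v) . (R_j w) = v . (R_i^T R_j w)
import torch

def encode_with_pose(tokens: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """tokens: (V, N, D) per-view patch tokens, D divisible by 3.
    R: (V, 3, 3) camera rotations (illustrative convention)."""
    V, N, D = tokens.shape
    x = tokens.reshape(V, N, D // 3, 3)       # group channels as 3-vectors
    x = torch.einsum('vij,vncj->vnci', R, x)  # rotate every 3-vector
    return x.reshape(V, N, D)
```

Because nothing here depends on a fixed sequence length, the same encoder serves one input view or twenty.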

Andrew Davison (@ajddavison)'s Twitter Profile Photo

"Whatever you say about AIs you also say to them." From The Dark Forest Theory of the Internet, inspired by the Three Body Problem sci-fi books.

Stephen James (@stepjamuk)'s Twitter Profile Photo

Thanks once again to Nima Gard for joining us for our inaugural episode of Robot Learning In Industry for Neuracore. Make sure to head over to our YouTube channel to watch the full video with Path Robotics: youtu.be/caqlk8LCK7Q?si…

Andrew Davison (@ajddavison)'s Twitter Profile Photo

A new type of neural SLAM: ACE-SLAM builds super-efficient Scene Coordinate Regression maps of scenes in real-time from an RGB-D stream, enabling always-on relocalisation. Inspired by ACE from Eric Brachmann et al. at Niantic Spatial. Dyson Robotics Lab at Imperial College.
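
For readers unfamiliar with scene coordinate regression: a network predicts, for each pixel, that pixel's 3D position in the scene's coordinate frame, and the camera pose then falls out of PnP + RANSAC. A minimal relocalisation sketch in the spirit of ACE; `scene_coord_net` is a hypothetical trained regressor, not the ACE-SLAM API:

```python
# Hedged sketch of relocalisation against a scene-coordinate-regression
# map. `scene_coord_net` is a hypothetical trained network mapping an RGB
# image to per-pixel 3D scene coordinates; pose comes from PnP + RANSAC.
import cv2
import numpy as np

def relocalise(image_bgr, scene_coord_net, K):
    """image_bgr: HxWx3 uint8; K: 3x3 intrinsics. Returns (R, t) or None."""
    coords = scene_coord_net(image_bgr)            # (H, W, 3) scene coords
    H, W, _ = coords.shape
    # Build 2D-3D correspondences on a sparse grid to keep RANSAC fast.
    ys, xs = np.mgrid[0:H:8, 0:W:8]
    pts2d = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float64)
    pts3d = coords[ys, xs].reshape(-1, 3).astype(np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, distCoeffs=None, reprojectionError=3.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                     # rotation vector -> matrix
    return R, tvec
```

Because the map is just the regressor's weights, relocalisation needs no feature database, which is what makes "always-on" plausible.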

Andrew Davison (@ajddavison)'s Twitter Profile Photo

In 4DPM, we segment first, then locally reconstruct and track across many frames in 3D, inferring which objects are moving together. We can create an X-ray replay view of the objects inside a drawer after it's closed. Dyson Robotics Lab at Imperial College London / Imperial Computing.
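
One way to read "inferring which objects are moving together": if two tracked objects are rigidly attached (cutlery riding inside the moving drawer), the relative pose between their trajectories stays constant over time. A toy test of that property; the names and threshold are illustrative, not 4DPM's actual criterion:

```python
# Toy version of the "moving together" test: two tracked objects move
# together if the relative pose between their per-frame trajectories
# stays (nearly) constant. Rotation drift is ignored for brevity.
import numpy as np

def relative_pose(T_i, T_j):
    return np.linalg.inv(T_i) @ T_j   # pose of object j in object i's frame

def move_together(traj_i, traj_j, tol=0.01):
    """traj_*: lists of 4x4 object poses over the same frames."""
    rels = [relative_pose(Ti, Tj) for Ti, Tj in zip(traj_i, traj_j)]
    ref = rels[0]
    # If rigidly attached, the relative translation should barely drift.
    drift = max(np.linalg.norm(R[:3, 3] - ref[:3, 3]) for R in rels)
    return drift < tol
```

Pairwise tests like this can then be merged into motion groups with a union-find pass.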

Andrew Davison (@ajddavison)'s Twitter Profile Photo

Using powerful multi-view 3D vision transformer models like π³ and Depth Anything 3 for 30 FPS real-time tracking of objects and scenes via KV caching. Dyson Robotics Lab at Imperial College London.
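
The trick is the same one language models use for fast decoding: keys and values for frames already processed are computed once and cached, so each new frame's tokens only attend into the cache. A single-head sketch; the class and shapes are illustrative, not the actual π³ or Depth Anything 3 interfaces:

```python
# Hedged sketch of KV caching for frame-rate multi-view inference:
# keys/values of past frames are computed once and appended to a cache,
# so each new frame attends INTO the cache instead of re-running
# attention over the whole sequence.
import torch
import torch.nn.functional as F

class CachedCrossAttention(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = torch.nn.Linear(dim, dim)
        self.k = torch.nn.Linear(dim, dim)
        self.v = torch.nn.Linear(dim, dim)
        self.k_cache, self.v_cache = None, None    # grows as frames arrive

    def forward(self, new_tokens: torch.Tensor) -> torch.Tensor:
        """new_tokens: (B, N_new, D) tokens of the latest frame only."""
        k, v = self.k(new_tokens), self.v(new_tokens)
        if self.k_cache is None:
            self.k_cache, self.v_cache = k, v
        else:                                      # append, never recompute
            self.k_cache = torch.cat([self.k_cache, k], dim=1)
            self.v_cache = torch.cat([self.v_cache, v], dim=1)
        q = self.q(new_tokens)
        # New frame attends over all cached frames.
        return F.scaled_dot_product_attention(q, self.k_cache, self.v_cache)
```

Per new frame this costs O(N_new × N_total) rather than O(N_total²) for re-running attention from scratch, which is the kind of saving that makes frame-rate operation plausible.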

Hide (@hidematsu82)'s Twitter Profile Photo

Our new work UNITE predicts 3D-consistent semantic, instance, and affordance features to enable diverse downstream tasks. No multi-view fusion, lifting, or per-scene optimization of 2D features. Just a single feed-forward prediction for native 3D consistency! Amazing work by
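
To make "single feed-forward prediction for native 3D consistency" concrete: because the features are predicted directly on 3D points rather than in each 2D image, any rendering of them agrees across views by construction. A toy interface sketch; every name, shape, and the trivial backbone are assumptions, not UNITE's architecture:

```python
# Toy sketch of the interface the tweet describes: ONE forward pass maps
# input views straight to semantic / instance / affordance features that
# live on 3D points, so no multi-view fusion, 2D->3D lifting, or
# per-scene optimisation is needed. The backbone is illustrative only.
import torch

class FeedForward3DFeatures(torch.nn.Module):
    def __init__(self, n_pts=2048, d_sem=64, d_inst=32, d_aff=16):
        super().__init__()
        self.enc = torch.nn.Sequential(            # toy image encoder
            torch.nn.Conv2d(3, 64, 7, stride=4), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())
        self.pts = torch.nn.Linear(64, n_pts * 3)  # 3D point positions
        self.heads = torch.nn.ModuleDict({
            'semantic':   torch.nn.Linear(64, n_pts * d_sem),
            'instance':   torch.nn.Linear(64, n_pts * d_inst),
            'affordance': torch.nn.Linear(64, n_pts * d_aff)})
        self.n = n_pts

    def forward(self, images):                     # images: (V, 3, H, W)
        g = self.enc(images).mean(0)               # pool views -> (64,)
        out = {'xyz': self.pts(g).view(self.n, 3)}
        for name, head in self.heads.items():
            out[name] = head(g).view(self.n, -1)   # per-point features
        return out
```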