Lingni Ma (@lingnima)'s Twitter Profile
Lingni Ma

@lingnima

Research scientist @ Meta Reality Labs

ID: 1537510084644786176

Joined: 16-06-2022 18:59:35

22 Tweets

150 Followers

92 Following

Lingni Ma (@lingnima)'s Twitter Profile Photo

Check out our #CVPR2022 paper "LISA: Learning Implicit Shape and Appearance of Hands". We propose a do-it-all neural hand model that captures accurate hand shape, pose, appearance and correspondences, and generalizes to arbitrary new subjects! Project page: iri.upc.edu/people/ecorona…

Lingni Ma (@lingnima)'s Twitter Profile Photo

"Desktop Activities" is public as part of #projectaria pilot dataset! Hope this synchronized egocentric vision and multi-view mocap dataset can help the community to work on human-objects understanding 💪 Ground truth annotation will soon be available! facebookresearch.github.io/Aria_data_tool…

Frank Dellaert (@fdellaert)'s Twitter Profile Photo

Andrew Marmon and I rounded up all #CVPR2022 papers on NeRF/Neural Radiance Fields we could find in a new blog post here: dellaert.github.io/NeRF22/

Ben Mildenhall (@benmildenhall)'s Twitter Profile Photo

If you're still at CVPR and have the stamina to make it through another poster session, check out RawNeRF tomorrow morning! We exploit the fact that NeRF is surprisingly robust to image noise to reconstruct scenes directly from raw HDR sensor data.

Michael Black (@michael_j_black)'s Twitter Profile Photo

For #CVPR2023, we have a nice little magic trick. MIME takes 3D human motion capture and generates plausible 3D scenes that are consistent with the motion. Why? Most mocap sessions capture the person but not the scene.

Adam W. Harley (@adamwharley)'s Twitter Profile Photo

Excellent new fine-grained tracking from DeepMind: TAPIR: Tracking Any Point with per-frame Initialization and temporal Refinement. arXiv: arxiv.org/abs/2306.08637 project: deepmind-tapir.github.io TL;DR: TapNet for localization, then PIPs-style refinement; outperforms everything!
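The two-stage design the tweet summarizes (per-frame localization, then temporal refinement) can be sketched in NumPy. This is a toy illustration under assumed inputs, not DeepMind's implementation: `init_per_frame` and `temporal_refine` are hypothetical names, the cost-volume argmax stands in for TapNet's matching network, and a simple neighbor-averaging loop stands in for the learned PIPs-style iterative updates.

```python
import numpy as np

def init_per_frame(query_feat, frame_feats):
    """Per-frame initialization: independently pick the best-matching pixel
    for the query feature in every frame (cost-volume argmax)."""
    T, H, W, C = frame_feats.shape
    sims = frame_feats.reshape(T, H * W, C) @ query_feat   # (T, H*W) dot products
    idx = sims.argmax(axis=1)                              # best pixel per frame
    return np.stack([idx // W, idx % W], axis=1).astype(float)  # (T, 2) row/col

def temporal_refine(traj, num_iters=5, alpha=0.5):
    """Temporal refinement: nudge each position toward the average of its
    temporal neighbors (a crude stand-in for learned iterative updates)."""
    for _ in range(num_iters):
        padded = np.pad(traj, ((1, 1), (0, 0)), mode="edge")
        traj = (1 - alpha) * traj + alpha * (padded[:-2] + padded[2:]) / 2
    return traj
```

The key property of this decomposition is that initialization never drifts (each frame is matched independently), while refinement restores temporal coherence afterwards.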

AK (@_akhaliq)'s Twitter Profile Photo

MotionGPT: Finetuned LLMs are General-Purpose Motion Generators paper page: huggingface.co/papers/2306.10… Generating realistic human motion from action descriptions has seen significant advances, driven by the emerging demand for digital humans. While recent

AK (@_akhaliq)'s Twitter Profile Photo

MotionGPT: Human Motion as a Foreign Language paper page: huggingface.co/papers/2306.14… Although pre-trained large language models continue to advance, building a unified model for language and other multimodal data, such as motion, remains challenging and
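Treating "motion as a foreign language" typically means discretizing poses into a token vocabulary an LLM can read and emit. A minimal sketch of that idea, assuming a learned VQ-style codebook (here just an array of pose centers); the function names and the `<motion_i>` token format are illustrative, not the paper's actual API:

```python
import numpy as np

def motion_to_tokens(motion, codebook):
    """Quantize each pose to its nearest codebook entry and emit token
    strings, so a motion sequence can be handled like ordinary text."""
    # motion: (T, D) pose vectors; codebook: (K, D) learned centers
    dists = ((motion[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    return [f"<motion_{i}>" for i in dists.argmin(axis=1)]

def tokens_to_motion(tokens, codebook):
    """Decode token strings back to poses by codebook lookup."""
    ids = [int(t[len("<motion_"):-1]) for t in tokens]
    return codebook[ids]
```

Once motion lives in a discrete vocabulary like this, generation reduces to next-token prediction, which is why a finetuned LLM can serve as the motion generator.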

Lingni Ma (@lingnima)'s Twitter Profile Photo

Happy new year, everyone! Super excited to announce that our workshop proposal #EgoMotion was accepted at #CVPR2024! This will be the first workshop on egocentric human motion tracking, synthesis and action recognition. Please stay tuned for more updates & see you all in Seattle!

AI at Meta (@aiatmeta)'s Twitter Profile Photo

Released by Reality Labs at Meta Research at #ECCV2024, Nymeria is a large-scale multimodal egocentric dataset for full-body motion understanding with potential applications in VR/MR headsets, smart glasses and more. More on this work + access to the dataset ➡️ go.fb.me/0znnvb

AI at Meta (@aiatmeta)'s Twitter Profile Photo

Introducing Aria Gen 2, next generation glasses that we hope will enable researchers from industry and academia to unlock new work in machine perception, contextual AI, robotics and more. Aria Gen 2 details + sign up for availability updates ➡️ go.fb.me/8rku3b