Xucong Zhang (@xucong_zhang)'s Twitter Profile
Xucong Zhang

@xucong_zhang

Assistant professor in the Computer Vision Lab, TU Delft.
Formerly postdoc at ETH Zurich, PhD at MPI-INF.
My opinions are my own.

ID: 887331303452016641

Link: https://www.ccmitss.com/zhang · Joined: 18-07-2017 15:20:43

100 Tweets

262 Followers

97 Following

Claudia Hauff 🇪🇺 🇺🇦 🇩🇪 🇳🇱 (@charlottehase)'s Twitter Profile Photo

Such a small but important step. It costs the university exactly nothing to make the Assistant/Associate Professors feel good about having done the lion's share of the PhD supervision.

Claudia Hauff 🇪🇺 🇺🇦 🇩🇪 🇳🇱 (@charlottehase)'s Twitter Profile Photo

There is another aspect to this: let assistant and associate professors sign the PhD diploma. At TU Delft only promotors sign; co-promotors do not. Another gesture that makes you feel as if your work doesn't count.

Intelligent Systems (@mpi_is)'s Twitter Profile Photo

We are hiring, looking for outstanding candidates in the field of intelligent systems to establish up to two new W2 independent #research groups at our institute's #Stuttgart site: #Robotics, Human-Machine Interaction, Robotics for #Healthcare, #AI etc. is.mpg.de/jobs/research-…

Xucong Zhang (@xucong_zhang)'s Twitter Profile Photo

We are hiring!!! Together with @jan_gemert, we are recruiting a PhD student at TU Delft to work on "Human Behavior Recognition and Generation": tudelft.nl/over-tu-delft/…

Otmar Hilliges (@ohilliges)'s Twitter Profile Photo

In 2022, I finished 2 half marathons, 2 MTB races, and my computer vision research group was firing on all cylinders. In 2023 our lives were turned upside down by severe #LongCovid and #MECFS. 1/

Jim Fan (@drjimfan)'s Twitter Profile Photo

If you think OpenAI Sora is a creative toy like DALLE, ... think again. Sora is a data-driven physics engine. It is a simulation of many worlds, real or fantastical. The simulator learns intricate rendering, "intuitive" physics, long-horizon reasoning, and semantic grounding, all

Nezihe Merve Gürel (nmervegurel.bsky.social) (@nmervegurel)'s Twitter Profile Photo

We are excited to announce an opening for an associate professor position in machine learning at @tudelf! This opportunity is within the Pattern Recognition Lab at the TU Delft EEMCS. More details below: tudelft.nl/over-tu-delft/…

Michael Black (@michael_j_black)'s Twitter Profile Photo

It is easy to think “LLMs can’t possibly learn X from just text”. But I think this is a very human view. We have limited ability to really understand scale. I can imagine reading a book a week. If I do that for my whole life, I’ll read fewer than 5000 books. What does it mean for

ELLIS (@ellisforeurope)'s Twitter Profile Photo

Many of our #ELLISUnits frequently organize lectures by top #AI researchers. Check out this overview and the upcoming talks at CambridgeEllisUnit & ELLIS Unit Stuttgart! ➡️ellis.eu/lecture-series

Zipeng Fu (@zipengfu)'s Twitter Profile Photo

To retarget from humans to humanoids, we copy the corresponding Euler angles from SMPL-X to our humanoid model.

We use open-sourced SOTA human pose and hand estimation methods (thanks!)
- WHAM for body: wham.is.tue.mpg.de
- HaMeR for hands: geopavlakos.github.io/hamer/
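
The tweet describes the retargeting step only at a high level. As a rough sketch of the idea, and not the authors' actual pipeline, the snippet below converts SMPL-X axis-angle joint rotations into per-joint Euler angles for a hypothetical humanoid; the joint mapping and the "xyz" Euler convention are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): map SMPL-X axis-angle joint
# rotations to Euler-angle targets for a hypothetical humanoid model.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Illustrative mapping from SMPL-X body-joint indices to humanoid joint names.
SMPLX_TO_HUMANOID = {
    16: "left_shoulder",
    17: "right_shoulder",
    18: "left_elbow",
    19: "right_elbow",
}

def retarget_pose(smplx_body_pose: np.ndarray) -> dict:
    """Convert an SMPL-X axis-angle body pose of shape (J, 3) to Euler angles."""
    targets = {}
    for smplx_idx, joint_name in SMPLX_TO_HUMANOID.items():
        rot = R.from_rotvec(smplx_body_pose[smplx_idx])  # axis-angle -> rotation
        # Intrinsic xyz Euler angles in radians; the real convention depends
        # on how the humanoid's joints are defined.
        targets[joint_name] = rot.as_euler("xyz")
    return targets

# A zero pose maps every joint to zero Euler angles.
print(retarget_pose(np.zeros((22, 3))))
```
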
Michael Black (@michael_j_black)'s Twitter Profile Photo

Nice use of #WHAM for 3D human motion estimation. It's critical for human-robot interaction to compute human movement in the world coordinate system. It also has to run in real time. Until now, most methods have focused on humans in camera coordinates.
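
As background for why the world coordinate system matters, here is a generic, self-contained illustration (not WHAM's implementation): a 3D joint estimated in the camera frame is mapped into the world frame using known camera extrinsics. The rotation R_wc, translation t_wc, and the example numbers are made up for the demo.

```python
# Generic camera-to-world transform (illustrative, not WHAM's code).
import numpy as np

def camera_to_world(joints_cam: np.ndarray,
                    R_wc: np.ndarray,
                    t_wc: np.ndarray) -> np.ndarray:
    """Map (N, 3) camera-frame joints into world coordinates."""
    return joints_cam @ R_wc.T + t_wc

# Example: a camera mounted 1.5 m above the ground, looking along world +x
# (camera z forward, y down; world z up).
R_wc = np.array([[ 0.0,  0.0, 1.0],
                 [-1.0,  0.0, 0.0],
                 [ 0.0, -1.0, 0.0]])
t_wc = np.array([0.0, 0.0, 1.5])
joints_cam = np.array([[0.0, 0.0, 2.0]])        # a joint 2 m in front of the camera
print(camera_to_world(joints_cam, R_wc, t_wc))  # -> [[2.0, 0.0, 1.5]]
```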

Zijian Dong (@dong_zijian)'s Twitter Profile Photo

I am excited to share our recent research at #ECCV2024: AvatarPose: Avatar-guided 3D Pose Estimation of Close Human Interaction from Sparse Multi-view Videos. We'll present it on Thursday afternoon (session 6, id: 277). Project page and code: eth-ait.github.io/AvatarPose/

Sammy Joe Christen (@sammy_j_c)'s Twitter Profile Photo

It only takes 3 pictures from your cell phone to reconstruct an expressive face model with CAFCA. Impressively, the model was trained purely on synthetic data!