Suny Shtedritski (@shtedritski) 's Twitter Profile
Suny Shtedritski

@shtedritski

Member of Technical Staff microsoft.ai | Prev: intern @GoogleDeepMind, PhD @Oxford_VGG, lead @OxfordAI, MEng Engineering @oxengsci | +359 | Pump x Hunter

ID: 1312679328216383488

Joined: 04-10-2020 09:02:05

95 Tweets

729 Followers

198 Following

AshutoshShrivastava (@ai_for_success) 's Twitter Profile Photo

This is freaking insane to be honest 🔥 SynCity: it generates complex and immersive 3D worlds from text prompts and does not require any training or optimization. It leverages the pretrained 2D image generator Flux and the 3D generator TRELLIS. More examples and details below 👇

cedric (@cedric_chee) 's Twitter Profile Photo

Damn cool. SynCity turns text prompts into immersive 3D worlds. It combines Flux (2D artistic diversity) with TRELLIS (3D accuracy) to build seamless, navigable environments tile by tile.

Kye Gomez (swarms) (@kyegomezb) 's Twitter Profile Photo

This is incredible. Infinite world generation is here. Now time to put the agents inside and simulate entire cities and economies.

Alex Cheema - e/acc (@alexocheema) 's Twitter Profile Photo

has there been any other startup residency with $6bn created in <2yrs? and it's in Oxford, not SF. so much latent talent hiding out in the UK.

Tengda Han (@tengdahan) 's Twitter Profile Photo

Humans learn from one continuous visual stream, but large video models have to be trained on billions of web videos. We found that learning from such sequential streams is challenging for video models, and we introduce a family of "orthogonal optimizers" to bridge the gap!

Piyush Bagad (@bagad_piyush) 's Twitter Profile Photo

Have you wondered why you can pour water accurately even in the dark or into opaque bottles? Turns out the sound of pouring secretly hides physical properties that we implicitly infer. Can we train machines to infer these? Learn more at our poster at IEEE ICASSP (April 10, 8:30-10 AM IST)!

Suny Shtedritski (@shtedritski) 's Twitter Profile Photo

Very excited to share that the code for SynCity 🌆 is out! Check out github.com/paulengstler/s… For more details about the method, please see the project page: research.paulengstler.com/syncity/

Sindhu Hegde (@sindhubhegde) 's Twitter Profile Photo

Introducing JEGAL 👋 JEGAL can match hand gestures with words & phrases in speech/text. By only looking at hand gestures, JEGAL can perform tasks like determining who is speaking, or whether a keyword (e.g. "beautiful") is gestured. More about our latest research on co-speech gestures 🧵👇

Nina Shvetsova (@ninashv__) 's Twitter Profile Photo

🚀 Excited to announce our #CVPR2025 paper: Unbiasing through Textual Descriptions! We release UTD-descriptions for 1.9M videos and object-debiased splits for 12 datasets! 🔗 Project: utd-project.github.io Arsha Nagrani Bernt Schiele Hilde Kuehne Christian Rupprecht 🧵👇

Nate Gillman @ICLR'25 (@gillmanlab) 's Twitter Profile Photo

Ever wish you could turn your video generator into a controllable physics simulator? We're thrilled to introduce Force Prompting! Animate any image with physical forces and get fine-grained control, without needing any physics simulator or 3D assets at inference. 🧵 (1/n)

Tomas Jakab (@jakabtomas) 's Twitter Profile Photo

We are presenting Dual Point Maps as a #CVPR highlight tomorrow! Learn about our novel, data-efficient representation for 3D/4D deformable objects, an alternative to classical template shape models. 📍🕑 ExHall D, Poster #100, afternoon session 🌐 dualpm.github.io

Visual Geometry Group (VGG) (@oxford_vgg) 's Twitter Profile Photo

Many congratulations to Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht and David Novotny for winning the Best Paper Award @CVPR for "VGGT: Visual Geometry Grounded Transformer" 🥇🎉 🙌🙌 #CVPR2025!!!!!!

Jianyuan Wang (@jianyuan_wang) 's Twitter Profile Photo

Thrilled and honored to receive the Best Paper Award at #CVPR2025! Huge thanks to my fantastic collaborators Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, and David Novotny. Could not have done it without you!

Andrei Bursuc (@abursuc) 's Twitter Profile Photo

When life gives you lemons, Andrea makes lemonade 🍋 Kudos to Andrea Vedaldi for doing excellent work presenting his paper in spite of an incident w/ the poster #cvpr2025

When life gives you lemons, Andrea makes lemonade ๐Ÿ‹ 
Kudos to Andrea Vedaldi doing an excellent work presenting his paper in spite of an incident w/ the poster #cvpr2025