Wilko Schwarting (@wilkoschwarting)'s Twitter Profile
Wilko Schwarting

@wilkoschwarting

Robotics & AI @ Symbotic | CS PhD @MIT

ID: 1324453830667501569

Joined: 05-11-2020 20:50:08

104 Tweets

117 Followers

296 Following

Modular (@modular)'s Twitter Profile Photo

👩‍💻 We're excited to announce that we've open sourced the Mojo 🔥standard library! 📚 Building Mojo🔥 in the open will lead to a better result and open sourcing the standard library is our next step in the journey. 🚀 We're also dropping MAX 24.2 today! modular.com/blog/the-next-…

Danijar Hafner (@danijarh)'s Twitter Profile Photo

🌎 Excited to share a major update of the DreamerV3 agent!

A couple of smaller changes, more benchmarks, and substantially improved performance.

👇 Main differences from our earlier preprint:

Yann LeCun (@ylecun)'s Twitter Profile Photo

It is of paramount importance that the management of a research lab be composed of reputable scientists. Their main jobs are to: 1. Identify, recruit, and retain brilliant and creative people. 2. Give them the environment, resources, and freedom to do their best work. 3.

Igor Gilitschenski (@igilitschenski)'s Twitter Profile Photo

Excited to share our #CVPR2024 work (oral & award candidate) on integrating map uncertainty into trajectory prediction led by Xunjiang Gu.

Our key insight is simple: Scene uncertainties matter for agent trajectories. 🧵1/n

Maurice Weiler (@maurice_weiler)'s Twitter Profile Photo

Convolutional neural nets going to spacetime 🚀 Our new ICML24 paper shows how to build Lorentz-equivariant CNNs/MPNNs for multivector fields on Minkowski spaces. This is useful for particle physics or Navier-Stokes / electrodynamics simulations. arxiv.org/abs/2402.14730 🧵1/N

Zhenjun Zhao (@zhenjun_zhao)'s Twitter Profile Photo

DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features

Letian Wang, Seung Wook Kim, Jiawei Yang, Cunjun Yu, Boris Ivanovic, Steven L. Waslander, Yue Wang, Sanja Fidler, Marco Pavone, Peter Karkus

Silvia Sapora (@silviasapora)'s Twitter Profile Photo

1 / 🧵 Excited to introduce our #ICML2024 paper: 😈 EvIL (Evolution Strategies for Generalisable Imitation Learning) a new inverse RL (IRL) method for sample efficient transfer of expert behaviour across environments – it's so good, it's downright EvIL!

Owain Evans (@owainevans_uk)'s Twitter Profile Photo

New paper, surprising result:
We finetune an LLM on just (x,y) pairs from an unknown function f. Remarkably, the LLM can:
a) Define f in code
b) Invert f
c) Compose f
—without in-context examples or chain-of-thought.
So reasoning occurs non-transparently in weights/activations!
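The finetuning setup described above can be illustrated with a small sketch of the training data it implies. The specific function f(x) = 3x + 2, the prompt format, and the value range here are hypothetical stand-ins for illustration, not details taken from the paper:

```python
# Hedged sketch (not the paper's actual pipeline): constructing a finetuning
# set of bare (x, y) pairs for a hidden function f. The function, prompt
# format, and range below are illustrative assumptions.

def f(x: int) -> int:
    # Hypothetical hidden function; the model only ever sees its
    # input/output pairs, never this definition.
    return 3 * x + 2

def make_dataset(xs):
    # One record per pair: no symbolic definition of f, no in-context
    # examples, no chain-of-thought -- just an input and its output.
    return [{"prompt": f"x = {x}\ny =", "completion": f" {f(x)}"} for x in xs]

dataset = make_dataset(range(-50, 51))  # 101 training records
```

After finetuning on records like these, the claim above is that the model can nonetheless articulate f in code, invert it, and compose it, despite never seeing f written out.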

Jon Barron (@jon_barron)'s Twitter Profile Photo

The legendary Ross Girshick just posted his CVPR workshop slides about the 1.5 decades he spent ~solving object detection as it relates to the ongoing LLM singularity. Excellent read, highly recommended. drive.google.com/file/d/1VodGlj…

Minyang Tian (@minyangtian1)'s Twitter Profile Photo

SciCode is our new benchmark that challenges LMs to code solutions for scientific problems from advanced papers. The challenges were crafted by PhDs; ~10% of our benchmark is based on Nobel-winning research.

GPT-4 and Sonnet 3.5 get <5% accuracy.

scicode-bench.github.io 🧵 1/6
Markus Wulfmeier (@m_wulfmeier)'s Twitter Profile Photo

The L4DC Conference is happening at the University of Oxford this week! Two fantastic papers coming up from our group at Google DeepMind in collaboration with the Massachusetts Institute of Technology (MIT)! First, Tim Seyde continues the path of 'Q-learning is all you need', even for continuous control, with 'Growing Q-Networks'

Melissa Chen (@msmelchen)'s Twitter Profile Photo

The only thing that can save bad architecture and soul-draining urban landscapes made of asphalt and concrete is greenery. 

Sustainability issues aside, trees and plants instantly improve the aesthetics of any city. 

Singapore pays property owners up to 50% of the cost of

Nous Research (@nousresearch)'s Twitter Profile Photo

What if you could use all the computing power in the world to train a shared, open source AI model?

Preliminary report: github.com/NousResearch/D…

Nous Research is proud to release a preliminary report on DisTrO (Distributed Training Over-the-Internet) a family of

Igor Gilitschenski (@igilitschenski)'s Twitter Profile Photo

I'm recruiting graduate students for Fall 2025 to work at the intersection of Computer Vision, Deep Learning, and Robotics.

If you are interested in building a controllable organic simulation engine and enabling safe robot learning, consider applying to UofT's CS PhD program 1/n

Riku Murai (@rmurai0610)'s Twitter Profile Photo

Introducing MASt3R-SLAM, the first real-time monocular dense SLAM with MASt3R as a foundation. Easy to use like DUSt3R/MASt3R: from an uncalibrated RGB video, it recovers accurate, globally consistent poses & a dense map. With Eric Dexheimer*, Andrew Davison (*Equal Contribution)

Jianyuan Wang (@jianyuan_wang)'s Twitter Profile Photo

Introducing VGGT (CVPR'25), a feedforward Transformer that directly infers all key 3D attributes from one, a few, or hundreds of images, in seconds! No expensive optimization needed, yet delivers SOTA results for:
✅ Camera Pose Estimation
✅ Multi-view Depth Estimation
✅ Dense

Natasha Jaques (@natashajaques)'s Twitter Profile Photo

In our latest paper, we discovered a surprising result: training LLMs with self-play reinforcement learning on zero-sum games (like poker) significantly improves performance on math and reasoning benchmarks, zero-shot. Whaaat? How does this work? We analyze the results and find

Tim Rocktäschel (@_rockt)'s Twitter Profile Photo

Harder, Better, Faster, Stronger, Real-time! We are excited to reveal Genie 3, our most capable real-time foundational world model. Fantastic cross-team effort led by Jack Parker-Holder and Shlomi Fruchter. Below are some interactive worlds and capabilities that were highlights for me