Katrin Renz (@katrinrenz)'s Twitter Profile
Katrin Renz

@katrinrenz

LLMs + Autonomous Driving.
PhD Student with @andreasgeiger0 | Previously at @wayve_ai @Oxford_VGG

ID: 1406661402798985217

Link: katrinrenz.de | Joined: 20-06-2021 17:14:24

60 Tweets

496 Followers

196 Following

Intelligent Systems (@mpi_is)'s Twitter Profile Photo

Just 10 days to go! Join our elite IMPRS-IS doctoral program - a partnership with MPI-IS, @uni_stuttgart & Universität Tübingen! Apply here: imprs.is.mpg.de/applicationApp… Deadline to apply: November 15, 2024
Katrin Renz (@katrinrenz)'s Twitter Profile Photo

We have just released a new tool to create custom routes and insert scenarios for the CARLA Leaderboard 2.0. The tool was written by our great research assistant Jens Beißwenger 🥳 Github: github.com/autonomousvisi… #CARLA #AutonomousDriving

Katrin Renz (@katrinrenz)'s Twitter Profile Photo

In my first research project I was super excited about getting any stars on GitHub. Now having a project with 1k stars feels unreal🤯 wouldn’t have been possible without the tremendous effort of Chonghao Sima during the main project and afterwards with the challenge 🙏🏼

Wayve (@wayve_ai)'s Twitter Profile Photo

Introducing GAIA-2 🌎Generative world modeling just stepped up a gear. GAIA-2 is the latest development of Wayve’s video-generative world model tailored for driving. GAIA-2 offers richer, more realistic, and highly controllable synthetic driving scenarios, accelerating Wayve’s

Jamie Shotton (@jamie_shotton)'s Twitter Profile Photo

1 year since we launched LINGO-2 at Wayve 🧠 With LINGO-2, our AI is trained to both make decisions *and* communicate them. The first closed-loop vision-language-action driving model (VLAM) tested on public roads, LINGO-2 has been game-changing for exploring the connection

Physical Intelligence (@physical_int)'s Twitter Profile Photo

We got a robot to clean up homes that were never seen in its training data! Our new model, π-0.5, aims to tackle open-world generalization. We took our robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️

Bernhard Jaeger (@bern_jaeger)'s Twitter Profile Photo

Introducing CaRL: Learning Scalable Planning Policies with Simple Rewards We show how simple rewards enable scaling up PPO for planning. CaRL outperforms prior learning-based approaches on nuPlan Val14 and CARLA longest6 v2, using less inference compute. arxiv.org/abs/2504.17838

Christian Richardt (@c_richardt)'s Twitter Profile Photo

📢 New paper #CVPR2025! Can meshes capture fuzzy geometry? Volumetric Surfaces uses adaptive textured shells to model hair + fur. It’s fast, looks great, and runs in real time even on budget phones. 🔗 autonomousvision.github.io/volsurfs/ 📄 arxiv.org/pdf/2409.02482

Chonghao Sima (@smch_1127)'s Twitter Profile Photo

Just happened to learn that DriveLM ranked #9 on the Most Influential ECCV Papers (2024-09 Version). Thorough benchmarking on driving with VLMs has earned it popularity! paperdigest.org/2024/09/most-i…

Wayve (@wayve_ai)'s Twitter Profile Photo

Only a few days to go until #CVPR2025 kicks off 🤩 This year, we’re excited to share our research paper #SIMLINGO — a foundation model that brings together vision, language, and action to power more generalizable, interpretable embodied agents. 🚗🗣️👀 Come find us at booth 1429

Katrin Renz (@katrinrenz)'s Twitter Profile Photo

📢 Excited to present our poster "SimLingo" tomorrow at #CVPR2025. Drop by to talk about vision-language-action models, language-action grounding, or anything else :) 📍 Saturday, 10:30 - 12:30, Poster #130. Joint work with Long Chen, Elahe Arani, Oleg Sinavski, and Wayve
Sergio Paniego (@sergiopaniego)'s Twitter Profile Photo

#CVPR2025 Paper Picks #3 🚗 SimLingo: Vision-Language-Action for autonomous driving, by Katrin Renz et al. (Wayve). Autonomous driving meets language grounding. SimLingo drives and understands — using only cameras. No LiDAR. No diffusion. Just vision, language, and action.