Mete Tuluhan Akbulut (@metetuluhan) 's Twitter Profile
Mete Tuluhan Akbulut

@metetuluhan

CS PhD Student at Brown. Member of the Intelligent Robot Lab @BrownBigAI. Previously: member of @ColorsLab_BOUN at Bogazici University.

ID: 1477748251

Joined: 02-06-2013 16:56:10

103 Tweets

141 Followers

451 Following

Yong Zheng-Xin (Yong) (@yong_zhengxin) 's Twitter Profile Photo

LLMs such as ChatGPT and BLOOMZ claim that they are multilingual, but does this mean they can generate code-mixed data? Follow this 🧵 to find out. (1/N)

Paper: arxiv.org/abs/2303.13592
Akhil Bagaria (@akhil_bagaria) 's Twitter Profile Photo

📢 Our paper on exploration for deep RL has been accepted as an Oral at #ICML2023! 🎉 Count-based exploration enjoys strong theoretical foundations, yet it has been overshadowed by prediction-error methods like RND. Our embarrassingly simple idea scales pseudocounts ⬇️
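The thread doesn't include the method itself. As background only, classic count-based exploration (in the MBIE-EB style the tweet alludes to) augments the reward with a bonus of β/√N(s) that shrinks as a state is revisited. A minimal sketch, where the `CountBonus` class and the β value are illustrative and not from the paper:

```python
from collections import defaultdict
import math

class CountBonus:
    """Track visit counts and return an exploration bonus of
    beta / sqrt(N(s)), which decays as state s is visited more often."""

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, state):
        # Record the visit, then compute the bonus for this state.
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBonus(beta=1.0)
first = bonus.bonus("s0")                           # N(s0) = 1 -> 1.0
fourth = [bonus.bonus("s0") for _ in range(3)][-1]  # N(s0) = 4 -> 0.5
```

In tabular settings N(s) is an exact count; the scaling challenge the tweet mentions is that large or continuous state spaces need a pseudocount rather than a literal table.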
Ebrar Karakurt (@karakurtebrar18) 's Twitter Profile Photo

WE ARE TÜRKİYE, WE ARE CHAMPIONS! 🇹🇷

You in the stands and in front of your screens, us on the field! We fought together and we won!! This championship is for our beautiful country...
TÜRKİYE, the greatest in Europe! 🐺❤️🏆
Ahmet Ercan Tekden (@ercantekden) 's Twitter Profile Photo

Happy to share that our paper “Neural Field Movement Primitives for Joint Modelling of Scenes and Motions” has been accepted to #IROS2023. Joint work with Yasemin Bekiroglu and Marc P. Deisenroth. Project Page: fzaero.github.io/NFMP/ Arxiv: arxiv.org/abs/2308.05040 (1/6)

Mete Tuluhan Akbulut (@metetuluhan) 's Twitter Profile Photo

If only people cared about how successful professors of ours like Emre Ugur have been doing world-class research for years despite the limited resources at our universities, instead of caring about where they eat!

Cam Allen (@camall3n) 's Twitter Profile Photo

RL in POMDPs is hard because you need memory. Remembering *everything* is expensive, and naively applied RNNs only get you so far. New paper: 🎉 we introduce a theory-backed loss function that greatly improves RNN performance! 🧵 1/n
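As background for why memory is needed at all, here is a minimal sketch (an illustrative toy, not the paper's construction) of an aliased-observation task: two hidden goals emit the same later observation, so a memoryless policy can only guess, while an agent that remembers the first observation succeeds every time.

```python
import random

def run_episode(policy, remember):
    """T-maze-style episode: the goal side is shown once at the start,
    then all later observations are aliased. `policy` maps an
    observation to an action; `remember` decides whether the agent
    carries the first observation forward to decision time."""
    goal = random.choice(["left", "right"])
    first_obs = goal          # goal side revealed only at the start
    later_obs = "corridor"    # aliased: identical for both goals
    obs = first_obs if remember else later_obs
    action = policy(obs)
    return 1.0 if action == goal else 0.0

random.seed(0)
# With memory: act on the remembered first observation.
memory_policy = lambda obs: obs if obs in ("left", "right") else "left"
# Without memory: the corridor observation is uninformative, so guess.
memoryless = lambda obs: "left"

with_memory = sum(run_episode(memory_policy, True) for _ in range(1000)) / 1000
without = sum(run_episode(memoryless, False) for _ in range(1000)) / 1000
# with_memory is 1.0; without hovers around 0.5
```

The gap between the two success rates is exactly what recurrent state (an RNN's hidden vector) is meant to close; the tweet's point is that training that recurrence well needs more than the naive objective.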

Jason Liu @CoRL (@jasonxyliu) 's Twitter Profile Photo

How do robots understand natural language?

#IJCAI2024 survey paper on robotic language grounding

We situated papers along a spectrum with two poles: grounding language to symbols and grounding it to high-dimensional embeddings. We discussed tradeoffs, open problems & exciting future directions!
Benedict Quartey (@benedict_q) 's Twitter Profile Photo

🚨 What is the best way to use foundation models in robotics? Our new work shows that combining LLMs & VLMs with ideas from formal methods leads to robots that can verifiably follow complex, open-ended instructions in the real world. 🌍 We evaluate on over 150 tasks🚀 🧵 (1/4)

Yunus Şeker (@myunusseker) 's Twitter Profile Photo

🤖🌟New GTA just dropped! Check out our latest paper on zero-shot skill transfer for robots. We show how modular task-axis controllers + visual foundation models enable real-world manipulation — without training or demos. 🌐 iamlab-cmu.github.io/GTA/ 📄 arxiv.org/abs/2505.11680

Sergio Orozco (@sorozco0612) 's Twitter Profile Photo

Can robots learn data-efficient world models for object manipulation? Our new paper, which won "Best Paper" at the RINO Workshop at the Conference on Robot Learning, shows how robots can learn object-centric world models from just a few minutes of interaction data. 🧵1/7

Ahmed Jaafar (@ahmed__jaafar) 's Twitter Profile Photo

Wouldn't it be great if robots were more data efficient? Manipulation has gotten more data efficient, but mobile manipulation (MoMa) is lagging. Introducing LAMBDA (λ): long horizon benchmark of realistically sized datasets to push the limits of MoMa models. 🤖 #IROS2025 🧵1/N

bigAI (@brownbigai) 's Twitter Profile Photo

Why does most hierarchical RL stop at the first layer of hierarchy, i.e. skills? Because until now, it wasn't clear how to learn state abstractions in a principled way! Our recent NeurIPS paper shows how to start with a set of skills and learn the corresponding state abstractions. 🧵1/8