philip (@philiptorr)'s Twitter Profile
philip

@philiptorr

Professor Oxford

ID: 27057939

Joined: 27-03-2009 17:59:14

71 Tweets

367 Followers

89 Following

philip (@philiptorr):

Super happy to be one of the organizers of this, eurips.cc: now there is an option to officially present your papers in Europe and cut greenhouse gas emissions!!! Please repost and spread the word!!!

Georgia Channing (@cgeorgiaw):

very proud that my work on multi-agent debate for misinformation detection won best paper award at the ICML Conference CFAgentic workshop!

check it out on arxiv: arxiv.org/abs/2410.20140

very grateful to all my co-authors and the support from BBC Research & Development 🥳
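For readers unfamiliar with the technique named above, here is a minimal, generic sketch of a multi-agent debate loop for claim verification. This is not the paper's implementation: the `ask_llm` stub, the prompts, and the verdict labels are all illustrative assumptions.

```python
# Generic sketch of multi-agent debate for claim verification.
# NOT the paper's code: ask_llm, the prompts, and the labels are assumed.

def ask_llm(prompt: str) -> str:
    """Toy stand-in so the sketch runs; a real system would call a chat model."""
    return f"[model output for: {prompt[:40]}...]"

def debate(claim: str, rounds: int = 2) -> str:
    """Two adversarial agents argue the claim; a judge reads the transcript."""
    transcript = []
    for r in range(rounds):
        pro = ask_llm(f"Round {r}: argue the claim is TRUE, citing evidence.\n"
                      f"Claim: {claim}\nTranscript: {transcript}")
        con = ask_llm(f"Round {r}: argue the claim is FALSE, citing evidence.\n"
                      f"Claim: {claim}\nTranscript: {transcript}")
        transcript.append({"round": r, "pro": pro, "con": con})
    # A judge agent reads the full transcript and issues a verdict.
    return ask_llm(f"Judge: given this debate, label the claim TRUE, FALSE, "
                   f"or UNVERIFIABLE.\nClaim: {claim}\nTranscript: {transcript}")

print(debate("The Eiffel Tower was moved to Berlin in 2024."))
```

The usual rationale for this design is that forcing agents to attack each other's evidence exposes weak support for a claim better than a single-pass classifier; see the paper for how the actual system is built and evaluated.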
Guohao Li (Hiring!) 🐫 (@guohao_li):

🚨 [Call for Papers] SEA Workshop @ NeurIPS 2025 🚨
📅 December 6, 2025 | 📍 San Diego, USA
🌐: sea-workshop.github.io

Environments are the "data" for training agents, and they are largely missing from the open-source ecosystem.

We are hosting Scaling Environments for Agents (SEA)
Kevin Patrick Murphy (@sirbayes):

Finally, a good modern book on causality for ML: causalai-book.net by Elias Bareinboim. This looks like a worthy successor to the groundbreaking book by Judea Pearl, which I read in grad school. (h/t Joshua Safyan for the ref).

Zhenfei Yin @ ICLR 2025 (@9ldrohjze56jsh9):

We are excited to announce the MARS Multi-Agent Embodied Intelligence Challenge 🎉, which will be held at the NeurIPS 2025 SpaVLE Workshop!
Co-organized by Shanghai Jiao Tong University, University of Oxford, The University of Hong Kong, UC San Diego, and other international
Serge Belongie (@sergebelongie):

The speed of Everlyn video generation at this level of photorealism/hyperrealism is astonishing. Congratulations to Ser Nam, CY, Harry Yang, and the team on this combined research + engineering marvel 👏

naveen manwani (@naveenmanwani17):

🚨 Paper Alert 🚨
➡️ Paper Title: Articulate3D: Zero-Shot Text-Driven 3D Object Posing
🌟 A few pointers from the paper:
🎯 The authors propose a training-free method, “Articulate3D”, to pose a 3D asset through language control.
🎯 Despite advances in vision and language

Guohao Li (Hiring!) 🐫 (@guohao_li):

Sir, we built this. An RL environment for learning reasoning at scale.

GitHub: github.com/camel-ai/loong
HF dataset: huggingface.co/datasets/camel…

We extracted seed datasets from sources like textbooks and from code libraries such as sympy, networkX, Gurobi (a mathematical-programming library), and rdkit
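A rough sketch of the seed-data idea described above: use a library such as sympy to generate problems whose ground-truth answers can be checked programmatically, so the environment can reward an agent without human labels. This is an illustrative reconstruction, not code from the Loong repository; the problem format and function names are assumptions.

```python
# Hypothetical sketch of "seed datasets from code libraries": generate a
# problem with sympy, keep the exact answer, and verify candidates
# symbolically. Illustrative only; not taken from github.com/camel-ai/loong.
import random
import sympy as sp

x = sp.symbols("x")

def make_seed_problem(seed: int):
    """Generate a random polynomial and its definite integral as ground truth."""
    rng = random.Random(seed)
    coeffs = [rng.randint(-5, 5) for _ in range(3)]
    expr = sum(c * x**i for i, c in enumerate(coeffs))
    question = f"Compute the integral of {sp.sstr(expr)} dx from 0 to 1."
    answer = sp.integrate(expr, (x, 0, 1))  # exact rational ground truth
    return question, answer

def verify(candidate: str, answer) -> bool:
    """Reward check: parse the model's answer and compare symbolically."""
    try:
        return sp.simplify(sp.sympify(candidate) - answer) == 0
    except (sp.SympifyError, TypeError):
        return False

question, answer = make_seed_problem(0)
print(question)
print(verify(str(answer), answer))  # True: an exact match earns the reward
```

In an RL setting, a verifier like this can stand in for the reward function, which is presumably what lets such environments scale without human labeling.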
Junlin (Hans) Han (@han_junlin):

Excited to share our new work: “Learning to See Before Seeing”! 🧠➡️👀 We investigate an interesting phenomenon: how do LLMs, trained only on text, learn about the visual world?
Project page: junlinhan.github.io/projects/lsbs/
James Oldfield (@jamesaoldfield):

Please find many more results on 4 LLMs (across base models, instruction-tuned models, and reasoning models), and ablations in the paper!
📰 Project: james-oldfield.github.io/tpc
💻 Code: github.com/james-oldfield…
📄 Paper: arxiv.org/abs/2509.26238

Filippos Kokkinos (@filippos_kok):

Big congrats to my PhD student Junlin (Hans) Han, the Meta MSL team, and philip! “Learning to See Before Seeing” shines a light on visual priors from language pretraining and offers a practical recipe for vision-aware LLMs.

AISecHub (@aisechub):

Adversarial Manipulation of Tool Selection

As LLMs increasingly power agents that interact with external tools, tool use has become an essential mechanism for extending their capabilities. These agents typically select tools from growing databases or marketplaces to solve user
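To make the attack surface concrete, here is a toy sketch of the selection step described above: the agent ranks tools from a registry by similarity between the user query and each tool's description. The bag-of-words scorer and the registry entries are stand-in assumptions; production systems typically use learned embeddings, but the failure mode is analogous.

```python
# Toy model of similarity-based tool selection. The scorer and the registry
# entries are hypothetical; real agents usually rank with embedding models.
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (stand-in for embeddings)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

registry = {
    "weather_api": "get the current weather forecast for a city",
    # A malicious listing can mirror likely query phrasing to win selection,
    # regardless of what the tool actually does (hypothetical example):
    "shady_tool": "what is the weather forecast in a city",
}

query = "what is the weather forecast in Oxford"
ranked = sorted(registry, key=lambda t: similarity(query, registry[t]),
                reverse=True)
print(ranked)  # ['shady_tool', 'weather_api']: the mimic outranks the real tool
```

Because ranking depends only on the description string, a listing crafted to echo user queries can outrank the legitimate tool; this is the kind of manipulation the paper's title refers to.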
Sumeet Motwani (@sumeetrm):

This project would not have been possible without amazing collaborators! Very grateful to my co-leads on the project, Alesia Ivanova (who is an MSc student at Oxford!) and Charlie London, along with Jack Cai, and my advisors at Microsoft: Shital Shah and

Shital Shah (@sytelus):

This is joint work by amazing folks from Oxford, including Alesia Ivanova, Sumeet Motwani, Charlie London, Jack Cai, Christian Schroeder de Witt, philip, and Riashat Islam from Microsoft Research.
arXiv: arxiv.org/abs/2510.07312
Code: github.com/AlesyaIvanova/…