Oxford Torr Vision Group (@oxfordtvg)'s Twitter Profile
Oxford Torr Vision Group

@oxfordtvg

TVG @UniofOxford; Computer Vision, Machine Learning, and the latest research in Artificial Intelligence.

ID: 836565725628284928

Link: http://www.robots.ox.ac.uk/~tvg/ · Joined: 28-02-2017 13:16:26

183 Tweets

1.1K Followers

85 Following

Tim Franzmeyer (@frtimlive)'s Twitter Profile Photo

📢 Introducing Illusory Attacks: Information-theoretically undetectable adversarial attacks on RL agents and humans! Spotlight (top 5%) @ ICLR’24 – see you in Vienna! 🔎Can you spot which observation is under attack?🔎 🧵

Adel Bibi (@adel_bibi)'s Twitter Profile Photo

We are presenting 3 papers in the main conference at #ICLR (2 AI safety papers today and 1 on efficient CL tomorrow). We also have a workshop paper on the universal approximation of prompting on Saturday. Congrats to all student authors: Aleks Petrov, Tim Franzmeyer and Wenxuan.

Tim Franzmeyer (@frtimlive)'s Twitter Profile Photo

📢 Introducing SelectToPerfect: A new method for imitating large mixed-behavior datasets! Today 11AM at ICLR 2024! Poster #182 in Hall B! #ICLR2024 ❌Traditional methods blindly imitate all behaviors. ✅We identify agents with desired behaviors and selectively imitate them! 🧵

Oxford Torr Vision Group (@oxfordtvg)'s Twitter Profile Photo

TVG's Constantin Venhoff, Christian Schroeder de Witt, Phil Torr and Prof. Ani Calinescu (Oxford Comp Sci) are excited to have been awarded an OpenAI superalignment fast grant ($327k) that will support TVG's AI safety agenda and, specifically, fund our incoming DPhil student Constantin's research agenda.

Francisco Girbal Eiras (@fgirbal)'s Twitter Profile Photo

Gen AI is poised to transform many fields, sparking major debates over its risks & calls for tighter regulation. ❗Over-regulation could be catastrophic to open-source Gen AI. 🚀 Our paper (arxiv.org/pdf/2405.08597) argues the benefits of open-source Gen AI outweigh its risks. 🧵

Oxford Torr Vision Group (@oxfordtvg)'s Twitter Profile Photo

🤩We have an exciting opportunity to join TVG as a PDRA in Hydrogen Production Chemical Reaction Mechanisms. This project aims to utilise AI to autonomously identify chemical reaction mechanisms. Closing 21 June. Apply here: tinyurl.com/rn8e34mv (Oxford Chemistry)

Oxford Torr Vision Group (@oxfordtvg)'s Twitter Profile Photo

🔥 #ECCV2024 Showcase your research on the Analysis and Evaluation of emerging VISUAL abilities and limits of foundation models 🔎🤖👁️ at the EVAL-FoMo workshop 🧠🚀✨ 🔗 sites.google.com/view/eval-fomo… Phillip Isola Saining Xie Christian Rupprecht Oxford Torr Vision Group Berkeley AI Research MIT CSAIL

Aleks Petrov (@aleksppetrov)'s Twitter Profile Photo

Such exciting times with many new recurrent architectures like Mamba and Griffin and the resurgence of the classic RNN and (x)LSTM! But without attention, can they be good in-context learners? In a new paper, we prove that they can! 🧵

Tim Franzmeyer (@frtimlive)'s Twitter Profile Photo

📢 Introducing HelloFresh: A Dynamic LLM Benchmark of Real-World Human Editorial Actions on X Community Notes and Wikipedia Edits. Can you beat GPT4 and GeminiPro at classifying X Community Notes and Wikipedia edits? Try our demo – shown in the video below – and see what

Francisco Girbal Eiras (@fgirbal)'s Twitter Profile Photo

📈 Task-specific fine-tuning allows LLMs to solve tasks more efficiently. ❌ Recent work shows fine-tuning on benign or adversarial benign-looking instruction-following data increases harmfulness. 🤔 Does this happen in the task-specific setting? If so, how can we mitigate it? 🧵

Puneet Dokania (@puneetdokania)'s Twitter Profile Photo

🚀Ever wondered what safety fine-tuning REALLY does to language models? How do jailbreaking attacks bypass the safety of these models? ⭐️We believe we have some answers! Excited to share 🐣What Makes and Breaks Safety Fine-tuning? A Mechanistic Study 👉arxiv.org/abs/2407.10264

Francisco Girbal Eiras (@fgirbal)'s Twitter Profile Photo

The impact GenAI will have is highly dependent on the ability to open-source these models. Do the benefits provided outweigh the marginal risks incurred? 👉 Our #icml2024 Oral position paper argues they overwhelmingly do! 🗓️ Drop by Oral 2B (Hall A1) on 23rd of July @ 5pm.

Francesco Pinto @ICML (@frapintoml)'s Twitter Profile Photo

(1/3) 🧪🤖 What's the best way to improve model robustness to distribution shift using synthetic data? 💪 Come to Hall C 4-9 #912 #ICML2024 to find out! 💥Classifiers fail to recognise objects observed in previously unseen settings. 🧪 Can #StableDiffusion be used to fix this?

Francesco Pinto @ICML (@frapintoml)'s Twitter Profile Photo

(1/3) 🔥Multi-Modal LLMs (MLLMs) can respond to questions about document scans. How safe are they? Come to Hall C #2300 at 1.30pm to find out! 🧠Attackers may successfully query MLLMs to extract Personally Identifying Information! 🚨 arxiv.org/abs/2407.08707

Adel Bibi (@adel_bibi)'s Twitter Profile Photo

I’m very excited about this new work on LLM safety alignment. 🙂 RLHF can lead to overly conservative models: essentially, models that refuse to respond even to safe requests. While such a model might be "safe," it’s not particularly useful. #AISafety #Alignment