Timothy Hospedales (@tmh31)'s Twitter Profile
Timothy Hospedales

@tmh31

Professor @ University of Edinburgh.
Head of Samsung AI Research Centre, Cambridge.

ID: 62544367

Link: http://homepages.inf.ed.ac.uk/thospeda/ · Joined: 03-08-2009 15:49:00

144 Tweets

823 Followers

112 Following

Dmytro Mishkin 🇺🇦 (@ducha_aiki)'s Twitter Profile Photo

Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks. Linus Ericsson, Henry Gouk, Timothy Hospedales. tl;dr: color augmentation helps self-supervised MoCo v2 pose estimation. arxiv.org/abs/2111.11398 1/
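
The color augmentation referred to here is part of the standard MoCo v2 / SimCLR-style training recipe. Below is a minimal sketch of that recipe with torchvision transforms, with the color-related steps highlighted; it illustrates the augmentation family rather than the paper's exact configuration.

```python
# Standard MoCo v2 / SimCLR-style augmentation stack (illustrative only;
# not necessarily the exact settings used in the paper above).
import random
from PIL import ImageFilter
from torchvision import transforms

class GaussianBlur:
    """Gaussian blur with a randomly sampled sigma, as used in MoCo v2."""
    def __init__(self, sigma=(0.1, 2.0)):
        self.sigma = sigma

    def __call__(self, img):
        return img.filter(ImageFilter.GaussianBlur(radius=random.uniform(*self.sigma)))

moco_v2_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),  # color jitter
    transforms.RandomGrayscale(p=0.2),                                            # color dropping
    transforms.RandomApply([GaussianBlur()], p=0.5),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```

Dropping the two color-related lines yields an encoder that is less color-invariant, which is the kind of comparison the tl;dr above is about.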

Rui Li (@ruiruiliii)'s Twitter Profile Photo

This Friday I will present our paper ‘A Channel Coding Benchmark for Meta-Learning’ at the #NeurIPS2021 Datasets and Benchmarks Track. Check out the preprint at: arxiv.org/abs/2107.07579 Credits to: Ondrej Bohdal, Hyeji Kim, Rajesh Mishra, Da Li, Nicholas Lane, and Timothy Hospedales

Ondrej Bohdal (@obohdal)'s Twitter Profile Photo

Would you like to learn how to make meta-learning more scalable? We’ll be presenting EvoGrad at Poster Session 1 at #NeurIPS today - starting from 16:30 UTC. Joint work with Yongxin Yang and Timothy Hospedales.

Lucas Deecke (@ldeecke)'s Twitter Profile Photo

Presenting our paper on a new transfer learning setting called “latent domain learning” at #ICLR2022’s poster session 5 tomorrow (10:30am BST). openreview.net/pdf?id=kG0AtPi… Joint work with Timothy Hospedales and Hakan Bilen. Hope to see you there!

Michael Burke (@mgb_infers)'s Twitter Profile Photo

I'll be presenting some work on vision-based keypoint discovery and system identification at #l4dc2022 this Friday. proceedings.mlr.press/v168/jaques22a… Work led by Miguel Jaques with Martin Asenov and Timothy Hospedales

Timothy Hospedales (@tmh31)'s Twitter Profile Photo

What happens when few-shot meta-learning meets foundation models? Check out our paper with Shell Xu Hu at CVPR'22 in New Orleans today. hushell.github.io/pmf/

Timothy Hospedales (@tmh31)'s Twitter Profile Photo

My lab at Samsung AI Research Cambridge is #hiring for research scientist and ML research engineer positions. Skilled in meta-learning, neuro-symbolic methods, foundation models, vision and language, robot learning, or on-device learning? Apply online: sec.wd3.myworkdayjobs.com/Samsung_Careers #MachineLearning

Henry Gouk (@henrygouk)'s Twitter Profile Photo

Hello, everyone! We will be organizing an online workshop at ICLR 2023 aimed at one question: what do we need for successful domain generalization? The workshop will include invited talks from David Lopez-Paz, Amos Storkey, Tatiana Tommasi, and Lequan Yu 1/2

Raman Dutt (@ramandutt4)'s Twitter Profile Photo

🚨 Parameter-efficient fine-tuning (PEFT) has been well researched for NLP, vision, and cross-modal tasks. So why should MedAI be left behind? Presenting the first evaluation of PEFT for medical AI - arxiv.org/abs/2305.08252 16 PEFT methods, 5 datasets, including a text-to-image task 🔥
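
As a concrete illustration of what one PEFT method looks like in code, here is a minimal LoRA sketch using the Hugging Face peft library; the backbone name and target modules are example choices for this sketch, not the configuration evaluated in the paper.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face `peft` (illustrative choices,
# not the exact setup from the paper above).
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k"      # example vision backbone
)
lora_config = LoraConfig(
    r=8,                                     # rank of the low-rank update
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],       # attention projections in ViT
)
model = get_peft_model(model, lora_config)   # freezes the backbone, adds LoRA adapters
model.print_trainable_parameters()           # typically well under 1% of all weights
# ... then fine-tune `model` on the downstream (e.g. medical imaging) dataset as usual.
```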

Edinburgh Vision (@edinburghvision)'s Twitter Profile Photo

Meta Omnium is a multi-task few-shot learning benchmark to evaluate generalization across CV tasks. Work by Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, and Timothy Hospedales; it will be presented on the afternoon of 20/6. Project page: edi-meta-learning.github.io/meta-omnium/

Timothy Hospedales (@tmh31)'s Twitter Profile Photo

Excited to give our #CVPR2023 tutorial on few-shot learning today, together with Fu Yanwei. Room East 5, starting 9AM PDT! fsl-fudan.github.io

Timothy Hospedales (@tmh31)'s Twitter Profile Photo

Interested in practical uncertainty quantification? Our new Bayesian NN library from Samsung AI Cambridge scales to large ViTs! One line of code wraps any architecture without modifying your model definition! arxiv.org/abs/2309.12928 github.com/SamsungLabs/Ba… Minyoung Kim
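
The library's own API isn't spelled out in the tweet, so the snippet below is a hypothetical illustration of the general "wrap an existing architecture for uncertainty" idea, using Monte Carlo dropout as a common approximate-Bayesian baseline; it is not the linked library's interface.

```python
# Hypothetical illustration only: NOT the API of the library linked above.
# Monte Carlo dropout as a simple way to get predictive uncertainty from an
# unmodified architecture definition.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)   # any architecture; load pretrained weights in practice

def enable_mc_dropout(module: nn.Module, p: float = 0.1) -> nn.Module:
    """Keep dropout stochastic at inference so repeated forward passes
    act as samples from an approximate posterior predictive."""
    for layer in module.modules():
        if isinstance(layer, nn.Dropout):
            layer.p = p
            layer.train()
    return module

model.eval()
enable_mc_dropout(model)

x = torch.randn(2, 3, 224, 224)    # dummy image batch
with torch.no_grad():
    samples = torch.stack([model(x).softmax(-1) for _ in range(20)])
mean_probs = samples.mean(dim=0)   # predictive mean over MC samples
uncertainty = samples.var(dim=0)   # per-class predictive variance
```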

Timothy Hospedales (@tmh31)'s Twitter Profile Photo

Excited to have been part of DemoFusion, bringing UHD generation to SDXL on your desktop with no training! With Ruoyi Du, Yi-Zhe Song, and Dongliang Chang. Project: ruoyidu.github.io/demofusion/dem…, paper: arxiv.org/abs/2311.16973 #GenerativeAI

Ruoyi Du (@ruoyidu)'s Twitter Profile Photo

💰DemoFusion: High-resolution generation using only SDXL and an RTX 3090 GPU! ... is now available in 🧨diffusers as a community pipeline! Check it out: github.com/huggingface/di… Project Page: ruoyidu.github.io/demofusion/dem… #generativeAI #ImageGeneration #diffusionmodels
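
For anyone wanting to try it from Python: community pipelines in 🧨 diffusers are loaded through the custom_pipeline argument of DiffusionPipeline.from_pretrained. The pipeline identifier and call arguments below are assumptions for this sketch; check the linked GitHub page for the exact name and options.

```python
# Rough sketch of loading a diffusers community pipeline for DemoFusion.
# The `custom_pipeline` name and call kwargs are assumed, not verified.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",   # plain SDXL base weights
    custom_pipeline="pipeline_demofusion_sdxl",   # assumed community pipeline id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                            # intended to fit on a single RTX 3090

images = pipe(
    prompt="a highly detailed photo of a red fox in the snow",
    height=2048, width=2048,                      # beyond SDXL's native 1024x1024
    num_inference_steps=50,
)
```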

Radamés Ajna (@radamar)'s Twitter Profile Photo

Here's the demo "Enhance This"! It's a surreal image magnifier that creates a high-res version by imagining new details, using the SDXL base model. Thanks to Ruoyi Du's DemoFusion research. It takes about a minute to generate a 2024x2024 image. huggingface.co/spaces/radames…