Ting Liu (@_tingliu)'s Twitter Profile
Ting Liu

@_tingliu

Researcher @GoogleDeepMind

ID: 105134945

Link: http://tliu.org · Joined: 15-01-2010 13:15:38

43 Tweets

225 Followers

414 Following

Jennifer J. Sun (@jenjsun)'s Twitter Profile Photo

We are excited to release the dataset from the 2022 MABe Challenge! 🐭🪰

Our dataset consists of mouse (9 mil frames) and fly (4 mil frames) social interactions for studying behavioral representation learning!

Paper: arxiv.org/pdf/2207.10553…
Challenge: aicrowd.com/challenges/mul…

Belongie Lab (@belongielab)'s Twitter Profile Photo

1/3) What is the best way to adapt large pre-trained vision models to downstream tasks in terms of effectiveness and efficiency? Drawing inspiration from recent advances in prompting in NLP, we propose a new, simple, and efficient method: Visual Prompt Tuning (VPT) 👇
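
The core idea behind VPT is to keep the pre-trained backbone frozen and learn only a small set of prompt tokens prepended to the input sequence. A minimal numpy sketch of that input-side operation; the function name, shapes, and example dimensions are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def prepend_prompts(patch_tokens, prompt_tokens):
    """Prepend learnable prompt tokens to a frozen transformer's input sequence.

    patch_tokens:  (num_patches, dim) image patch embeddings from the frozen backbone
    prompt_tokens: (num_prompts, dim) the only new input-side parameters being trained
    """
    return np.concatenate([prompt_tokens, patch_tokens], axis=0)

# Example: 196 ViT patch tokens of dim 768, plus 10 learnable prompts.
patches = np.zeros((196, 768))
prompts = np.random.default_rng(0).normal(size=(10, 768))
tokens = prepend_prompts(patches, prompts)  # (206, 768) fed to the frozen encoder
```

During fine-tuning, gradients flow only into `prompt_tokens` (and a task head), which is what makes the method parameter-efficient.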

Jennifer J. Sun (@jenjsun)'s Twitter Profile Photo

I’m on the job market! I develop AI for scientists, to accelerate discovery from data & domain knowledge. My work tackles challenges from real-world workflows in domains such as neuroscience & healthcare, including annotation efficiency, interpretability & structure discovery.

Jason Wei (@_jasonwei)'s Twitter Profile Photo

Best AI skillset in 2018: PhD + long publication record in a specific area
Best AI skillset in 2023: strong engineering abilities + adapting quickly to new directions without sunk-cost fallacy
Correct me if this is over-generalized, but this is what it seems like to me lately

Jennifer J. Sun (@jenjsun)'s Twitter Profile Photo

~1 month left to submit a paper to our workshop on Multi-Agent Behavior #CVPR2023! Come discuss multi-agent behavior, including biological and artificial agents, across a wide range of spatial and temporal scales 🔬🐭🚶🪰🚗🏀🌍 Hope to see you in June!

Jeff Dean (@jeffdean)'s Twitter Profile Photo

Bard is now available in the US and UK, w/more countries to come. It’s great to see early Google AI work reflected in it—advances in sequence learning, large neural nets, Transformers, responsible AI techniques, dialog systems & more. You can try it at bard.google.com

Jon Barron (@jon_barron)'s Twitter Profile Photo

A bunch of people have requested the slides for my "Scholars & Big Models" CVPR workshop talk. I didn't have a script, but I wrote a rough version of what I probably said at the bottom of each slide. Feedback is welcome! jonbarron.info/data/cvpr2023_…

AK (@_akhaliq)'s Twitter Profile Photo

VideoGLUE: Video General Understanding Evaluation of Foundation Models

paper page: huggingface.co/papers/2307.03…

We evaluate existing foundation models' video understanding capabilities using a carefully designed experimental protocol consisting of three hallmark tasks (action

Honglu Zhou (@zhou_honglu)'s Twitter Profile Photo

📢 Our #SMART101 challenge is now open! 🎉 Join the brightest minds in multimodal reasoning and cognitive models of intelligence to drive AI progress. 🚀 Don't miss out! Challenge closes on Sept. 1. Winning teams will receive prizes! 🏆 eval.ai/web/challenges…
#VLAR #ICCV2023 #AI

AK (@_akhaliq)'s Twitter Profile Photo

Google presents Video Instruction Tuning

Distilling Vision-Language Models on Millions of Videos

paper page: huggingface.co/papers/2401.06…

Experiments show that a video-language dual-encoder model contrastively trained on these auto-generated captions is 3.8% better than the
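
The tweet above describes a video-language dual encoder trained contrastively on auto-generated captions. A minimal numpy sketch of the symmetric InfoNCE objective commonly used for such dual-encoder training; the function name, temperature value, and shapes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def info_nce_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired video/text embeddings."""
    # L2-normalize so dot products become cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature     # (batch, batch); row i = video i vs all captions
    diag = np.arange(len(logits))      # matching pairs sit on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[diag, diag].mean()

    # Average the video->text and text->video retrieval directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each video embedding toward its own caption and pushes it away from the other captions in the batch.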

Google AI (@googleai)'s Twitter Profile Photo

Introducing VideoPrism, a single model for general-purpose video understanding that can handle a wide range of tasks, including classification, localization, retrieval, captioning and question answering. Learn how it works at goo.gle/49ltEXW

Google AI (@googleai)'s Twitter Profile Photo

Introducing Long Zhao, a Senior Research Scientist at Google, who worked to build VideoPrism: A Foundational Visual Encoder for Video Understanding. Read the blog to explore innovations in video understanding tasks and more →goo.gle/44vfn9D

Long Zhao (@garyzhao9012)'s Twitter Profile Photo

Happy to share our recent work "Epsilon-VAE", an effective autoencoder that turns single-step decoding into a multi-step probabilistic process. Please check our paper for more detailed results! arXiv page: arxiv.org/abs/2410.04081
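
The tweet frames Epsilon-VAE as replacing one-shot decoding with a multi-step probabilistic process. A generic sketch of that single-step-to-multi-step idea, purely for intuition: `denoise_step` is a hypothetical per-step refinement function, and none of this reflects the paper's actual architecture:

```python
import numpy as np

def multi_step_decode(latent, denoise_step, out_shape, num_steps=4, seed=0):
    """Decode a latent by iterative stochastic refinement instead of one pass.

    denoise_step(x, latent, t) refines the current sample x toward the output
    conditioned on the latent; here it is an assumed, user-supplied callable.
    """
    x = np.random.default_rng(seed).normal(size=out_shape)  # start from noise
    for t in reversed(range(num_steps)):
        x = denoise_step(x, latent, t)  # progressively refine, step by step
    return x
```

A single-step decoder corresponds to `num_steps=1` with a deterministic start; the loop above is the multi-step generalization the tweet alludes to.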

Ting Liu (@_tingliu)'s Twitter Profile Photo

Introducing our latest work Video Creation by Demonstration, a novel video creation experience. Paper: arxiv.org/abs/2412.09551 Project: delta-diffusion.github.io Huggingface: huggingface.co/papers/2412.09…

Ting Liu (@_tingliu)'s Twitter Profile Photo

After over 15 months, we are excited to finally release VideoPrism! The model comes in two sizes, Base and Large, and the video encoders are available today at github.com/google-deepmin…. We are also working towards adding more support to the repository; please stay tuned.

Boqing Gong (@boqinggo)'s Twitter Profile Photo

Excited! VideoPrism-Base/Large are publicly available now: github.com/google-deepmin… Check it out if you need a versatile video encoder for video-language or video-native tasks. Feedback appreciated!

Google Research (@googleresearch)'s Twitter Profile Photo

At 4:00 today, stop by the #CVPR2025 Google booth, where Ting Liu will demo a model for video creation by demonstration that generates physically plausible video continuing naturally from a context scene. Find sample videos at delta-diffusion.github.io

Omar Sanseviero (@osanseviero)'s Twitter Profile Photo

Excited to share the release of VideoPrism! 🎥 📏Generate video embeddings 👀Useful for classifiers, video retrieval, and localization 🔧Adaptable for your tasks Model: hf.co/google/videopr… Paper: arxiv.org/abs/2402.13217 GitHub: github.com/google-deepmin…
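
The tweet lists video retrieval as one use of VideoPrism embeddings. A minimal sketch of embedding-based retrieval, assuming you have already run the encoder and hold clip embeddings as numpy arrays; `retrieve` is a hypothetical helper, not part of the released code:

```python
import numpy as np

def retrieve(query_emb, gallery_embs, top_k=3):
    """Rank gallery clips by cosine similarity to a query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity per gallery clip
    order = np.argsort(-sims)[:top_k]  # indices of the top_k most similar clips
    return order, sims[order]
```

The same ranking works for text-to-video retrieval when a paired text encoder maps captions into the same embedding space.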