Varun Chandrasekaran(@VarunChandrase3) 's Twitter Profile
Varun Chandrasekaran

@VarunChandrase3

Opinions are my own. Professing @ECEILLINOIS. Alumnus: @MSFTResearch @WisconsinCS, @nyuniversity. Interested in *most* things S&P. he/him

ID:1037032402403708929

Link: http://pages.cs.wisc.edu/~chandrasekaran/ · Joined: 04-09-2018 17:39:26

2.0K Tweets

828 Followers

350 Following

joyojeet pal(@joyopal) 's Twitter Profile Photo

I am seeking paid research collaborators for two months to study AI-generated misinformation in India.

If you have worked on creating such misinformation, or have experience studying deepfakes and/or folks who generate such content, please contact me.

Sebastian Bordt(@s_bordt) 's Twitter Profile Photo

Should we trust LLM evaluations on publicly available benchmarks? 🤔

Our latest work studies the overfitting of few-shot learning with GPT-4.

with Harsha Nori, Vanessa Rodrigues, Besmira Nushi 💙💛, and Rich Caruana

Paper: arxiv.org/abs/2404.06209

More details 👇 [1/N]
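The concern behind this thread is that benchmark examples leaking into training data can inflate few-shot results. One crude signal of such contamination is word-level n-gram overlap between a benchmark and a training corpus. A minimal sketch (the function names, the n-gram length, and the overlap criterion are illustrative assumptions, not the paper's method):

```python
def ngrams(text, n=8):
    """Return the set of word-level n-grams in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_rate(benchmark_examples, training_corpus, n=8):
    """Fraction of benchmark examples sharing at least one n-gram
    with any training document -- a crude contamination signal."""
    corpus_grams = set()
    for doc in training_corpus:
        corpus_grams |= ngrams(doc, n)
    flagged = sum(1 for ex in benchmark_examples
                  if ngrams(ex, n) & corpus_grams)
    return flagged / max(len(benchmark_examples), 1)
```

Exact n-gram matching misses paraphrased leakage, which is part of why studies like this one probe behavioral overfitting rather than relying on string overlap alone.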

Jesse Dodge(@JesseDodge) 's Twitter Profile Photo

Today Meta released Llama 3! Congrats to the team.

In their blog post they wrote that 'the curation of a large, high-quality training dataset is paramount', while providing almost no information about how it was made, how it was filtered, or its contents.

Angus Nicolson(@angusjnic) 's Twitter Profile Photo

📢 New Paper Alert! 📝 'Explaining Explainability: Understanding Concept Activation Vectors' 📄 arxiv.org/abs/2404.03713

What does it mean to represent a concept as a vector?

We explore three key properties of concept vectors and how they can affect model interpretations. 🧵 (1/5)
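For readers unfamiliar with the idea: in the concept-activation-vector line of work, a concept is summarised as a direction in a model's activation space, often the normal of a linear probe separating concept examples from random ones. A toy sketch using the simpler mean-difference variant on synthetic "activations" (the data, dimensionality, and scoring function here are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activations": concept examples are shifted along a hidden direction.
hidden_direction = np.array([1.0, 0.0, 0.0, 0.0])
concept_acts = rng.normal(size=(50, 4)) + 3.0 * hidden_direction
random_acts = rng.normal(size=(50, 4))

# Mean-difference concept vector: one simple way to represent a concept
# as a direction in activation space (a linear-probe normal is another).
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

def concept_score(activation, cav):
    """Sensitivity of an activation to the concept: projection onto the CAV."""
    return float(activation @ cav)
```

The recovered `cav` aligns with `hidden_direction`; the paper's three properties concern when such a direction is a faithful summary of the concept and when it is not.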

Juho Kim(@imjuhokim) 's Twitter Profile Photo

<Summer Research Internship at KIXLAB>
I'm looking for undergrad research interns to join my research group (kixlab.org) this summer. Most projects this round are about human-AI interaction. Please share broadly!

juhokim.com/2024-summer-in…

Besmira Nushi 💙💛(@besanushi) 's Twitter Profile Photo

github.com/microsoft/mech… very excited about releasing our new repo on mechanistically understanding factual errors of LLMs. This is also the codebase for the new #ICLR2024 paper 'Attention satisfies: A constraint-satisfaction lens on factual errors of language models' by

Owain Evans(@OwainEvans_UK) 's Twitter Profile Photo

Full lecture slides and reading list for Roger Grosse's class on AI Alignment are up:
alignment-w2024.notion.site

Giancarlo Pellegrino(@tgianko) 's Twitter Profile Photo

Lujo (Lujo Bauer) and I are seeking nominations for service on the program committee for USENIX Security '25. You may nominate yourself or someone else by Friday, May 24, 2024: forms.gle/VAWFhYzuBQo6kP….

PETS(@PET_Symposium) 's Twitter Profile Photo

Want to give a talk at HotPETs this year? Here's how to apply! petsymposium.org/2024/hotpets.p…

The deadline is May 14, 2024.

Hot Topics in Privacy Enhancing Technologies (HotPETs) is the discussion-oriented workshop at PETS.

Please retweet!

1/n

Tom Gur(@TomGur) 's Twitter Profile Photo

Excited to share this work (to be presented in STOC 2024), which provides new and improved ways to delegate machine-learning tasks via PAC verification, a beautiful notion recently introduced by Goldwasser, Rothblum, Shafer, and Yehudayoff. 1/2

arxiv.org/abs/2404.08158…

Nicolas Papernot(@NicolasPapernot) 's Twitter Profile Photo

The new SaTML pc chairs Konrad Rieck 🌈 and Somesh Jha are looking for a general chair & venue for the 2025 conference.

If you're interested in hosting the conference in April 2025 (the exact date/month is flexible), submit a bid here:

tinyurl.com/hostsatml

Soft deadline: May 15 2024

Google AI(@GoogleAI) 's Twitter Profile Photo

Being able to interpret an #ML model’s hidden representations is key to understanding its behavior. Today we introduce Patchscopes, an approach that trains #LLMs to provide natural language explanations of their own hidden representations. Learn more → goo.gle/4aS5epd

Aaditya Singh(@Aaditya6284) 's Twitter Profile Photo

In-context learning (ICL) circuits emerge in a phase change...

Excited for our new work 'What needs to go right for an induction head (IH)?' We present 'clamping', a method to causally intervene on dynamics, and use it to shed light on IH diversity + formation.

Read on 🔎⬇

Aidan Gomez(@aidangomez) 's Twitter Profile Photo

Introducing Rerank 3! Our latest model focused on powering much more complex and accurate search.

It's the fastest, cheapest, and highest-performing reranker available. We're really excited to see how this model influences RAG applications and search stacks.
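The tweet doesn't show Rerank 3's API, but the role of any reranker in a RAG or search stack is the same: a first-stage retriever returns candidates, and the reranker re-orders them by relevance before generation. A generic sketch, with a toy lexical `score()` standing in for the actual model (all names here are illustrative):

```python
def score(query, document):
    """Toy relevance score: word overlap. A real pipeline would call a
    reranking model (e.g. a cross-encoder) here instead."""
    q, d = set(query.lower().split()), set(document.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query, candidates, top_k=3):
    """Re-order first-stage candidates by relevance; keep the best top_k
    to pass on to the generator."""
    ranked = sorted(candidates, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]
```

The design point is that the expensive, accurate scorer only sees the handful of candidates the cheap retriever surfaced, which is what keeps reranking fast enough for production search.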

Aran Komatsuzaki(@arankomatsuzaki) 's Twitter Profile Photo

Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?

repo: github.com/Luckfort/CD
abs: arxiv.org/abs/2404.07066

elvis(@omarsar0) 's Twitter Profile Photo

Aligning LLMs to Quote from Pre-Training Data

Interesting approach to enable verifiability in LLMs.

Proposes techniques to align LLMs so that they quote memorized information directly from pre-training data.

Looks like the alignment approach is not only able to generate
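The appeal of verbatim quoting is that it makes claims checkable: a quoted span either occurs in the pre-training corpus or it doesn't. The paper's alignment method is more involved, but the verification step it enables can be illustrated with an exact substring lookup (a minimal sketch; function and variable names are assumptions):

```python
def find_quote(quote, corpus_docs):
    """Return the index of the first corpus document containing `quote`
    verbatim, or None if the span is not an exact quotation."""
    for i, doc in enumerate(corpus_docs):
        if quote in doc:
            return i
    return None
```

A linear scan like this only works on toy corpora; at pre-training scale the lookup would need an index such as a suffix array or an n-gram Bloom filter.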
