Gregory Clark (@_whatcode) 's Twitter Profile
Gregory Clark

@_whatcode

Pretty good at tic-tac-toe.

ID: 766795616819097600

Joined: 20-08-2016 00:34:56

133 Tweets

87 Followers

310 Following

Karen X. Cheng (@karenxcheng) 's Twitter Profile Photo

AI Fashion Tutorial - A more detailed breakdown of yesterday's video. (Btw turn sound on for more context in the voiceover) #dalle2 #dalle #ArtificialIntelligence #digitalfashion #virtualfashion

Mohamed Sayed (@mohammedamr1) 's Twitter Profile Photo

📢 Our #ECCV2022 paper (and code) on fast, accurate depth estimation and reconstruction is out now! SimpleRecon: 3D Reconstruction without 3D Convolutions nianticlabs.github.io/simplerecon (1/4)

PBS Digital Studios (@pbsds) 's Twitter Profile Photo

What would happen if climate-related tipping points were pushed beyond their limits? Host Maiya May looks at major tipping points and what would happen if they were triggered: massive droughts, heat waves, food shortages, and much more. to.pbs.org/3D4WJcL

Srinath Sridhar (@drsrinathsridha) 's Twitter Profile Photo

Neural fields are emerging as useful signal representations in computer vision & beyond. Our full-day introductory #CVPR2026 tutorial on the topic is now public. Video: youtu.be/PeRRp1cFuH4 Slides: drive.google.com/drive/folders/โ€ฆ Web: neuralfields.cs.brown.edu/cvpr22

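To make the tutorial's subject concrete: a neural field is just a network that maps coordinates to signal values. The sketch below is a hypothetical toy (random, untrained weights), not material from the tutorial itself.

```python
import numpy as np

# Minimal "neural field": an MLP mapping 2D coordinates to a scalar value.
# Weights are random and untrained; this only illustrates the interface.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def field(coords):
    """Evaluate the field at an (N, 2) array of (x, y) coordinates."""
    h = np.tanh(coords @ W1 + b1)  # hidden features per coordinate
    return h @ W2 + b2             # one scalar value per coordinate

# Query the field on a 4x4 grid of points in [0, 1]^2.
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
values = field(grid)
print(values.shape)  # (16, 1): one value per queried coordinate
```

The key property is that the signal lives in the weights and can be queried at any continuous coordinate, not just on a fixed grid.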
Andrea Tagliasacchi 🇨🇦 (@taiyasaki) 's Twitter Profile Photo

Sometime in the future, we'll have more 3D data than 2D. Until then, distilling 2D self-supervised understanding to 3D seems quite effective. Great work Vincent Sitzmann + team: pfnet-research.github.io/distilled-feat… Also, parallel work (~identical) by Vedaldi + team: robots.ox.ac.uk/~vadim/n3f/

Andrea Tagliasacchi 🇨🇦 (@taiyasaki) 's Twitter Profile Photo

📢📢📢 Attention Beats Concatenation for Conditioning Neural Fields. We ran A LOT of experiments to find the best way to make neural fields generalize... so you don't have to! arxiv.org/abs/2209.10684

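The two conditioning styles in the title can be sketched as follows. This is a hypothetical toy with made-up shapes, not the authors' implementation: a field conditioned on a latent code z can either concatenate z to the query coordinate, or let the query attend over z's tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tokens = 8, 4
coord_feat = rng.normal(size=(d,))         # encoded query coordinate
z_tokens = rng.normal(size=(n_tokens, d))  # latent code as a set of tokens

# (a) Concatenation: flatten the latent and append it to the coordinate.
concat_input = np.concatenate([coord_feat, z_tokens.ravel()])

# (b) Cross-attention: the coordinate attends over the latent tokens,
# so the field can pick out the relevant parts of z per query.
scores = z_tokens @ coord_feat / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                   # softmax over tokens
attended = weights @ z_tokens              # shape (d,), fixed per query

print(concat_input.shape, attended.shape)
```

Note the attention route keeps the conditioning input a fixed size regardless of how many latent tokens there are, while concatenation grows linearly with them.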
Yann LeCun (@ylecun) 's Twitter Profile Photo

People will soon realize that automated prompt engineering is akin to latent variable inference. Then they will realize that latent variable inference is akin to reasoning/planning. Then they will realize that transformers were missing that piece all along.

Hattie Zhou (@oh_that_hat) 's Twitter Profile Photo

Hear about our unpublished speculations and wild kanjectures with Kanjun 🐙 and Josh Albrecht in this conversation 👇👇 It was so much fun to chat, thanks for having me! 😊

Rosanne Liu (@savvyrl) 's Twitter Profile Photo

It's totally okay to not have any research ideas that excite you from time to time. Instead of having a structure that rewards continuous publication of mediocre ideas, researchers should be allowed to have "downtime," where they teach, mentor, write, and freely think.

Been Kim (@_beenkim) 's Twitter Profile Photo

Our paper on understanding AlphaZero ♟️ is now published at PNAS! pnas.org/doi/10.1073/pn… The paper "studies" AZ's internals and its behaviors in collaboration with @DeepMind and world chess champion VBKramnik. What did we learn? 🧵

Thomas Miconi (@thomasmiconi) 's Twitter Profile Photo

Meta-Learning Task Generator: randomly generate an infinity of simple meta-reinforcement learning tasks. github.com/ThomasMiconi/M… The space covered includes bandit tasks, the Harlow task, the 2-step task, mazes, and others. Also, it's easy to represent graphically:

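In the spirit of the generator's goal, here is a hedged sketch of randomly generating one of the task families mentioned, bandit tasks; the `make_bandit_task` name and interface are hypothetical, and the actual repository covers many more task types.

```python
import random

def make_bandit_task(n_arms=2, seed=None):
    """Return a bandit task: per-arm reward probabilities plus a
    pull() function sampling a 0/1 reward for a chosen arm."""
    rng = random.Random(seed)
    probs = [rng.random() for _ in range(n_arms)]

    def pull(arm):
        # Bernoulli reward with the arm's hidden probability.
        return 1 if rng.random() < probs[arm] else 0

    return probs, pull

# An "infinity" of tasks: each seed yields a different reward structure,
# so a meta-learner must adapt within each episode rather than memorize.
probs, pull = make_bandit_task(n_arms=3, seed=42)
rewards = [pull(0) for _ in range(5)]
print(probs, rewards)
```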
Amanda Bertsch (@abertsch72) 's Twitter Profile Photo

What if we could run Transformer models without worrying about context length? With our new work Unlimiformer, you can jailbreak your current models to use unlimited length inputs! Preprint: arxiv.org/abs/2305.01625 Thread 🧵 (1/6)
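The core idea, as described in the abstract, is that each attention query need only attend over its top-k most similar keys retrieved from an arbitrarily long input. Below is a rough sketch of that retrieval-then-attend step with hypothetical shapes; it is not the authors' code, and a real system would use an approximate nearest-neighbor index rather than a full scan.

```python
import numpy as np

rng = np.random.default_rng(0)
d, seq_len, k = 16, 1000, 8

keys = rng.normal(size=(seq_len, d))    # keys from a very long input
values = rng.normal(size=(seq_len, d))
query = rng.normal(size=(d,))

# Retrieve the k keys with the highest dot-product score (k-NN search).
scores = keys @ query
topk = np.argsort(scores)[-k:]

# Standard softmax attention, restricted to the retrieved subset, so the
# per-query cost depends on k, not on the full input length.
sub = scores[topk] / np.sqrt(d)
w = np.exp(sub - sub.max())
w /= w.sum()
output = w @ values[topk]
print(output.shape)  # (16,)
```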

Simona Cristea (@simocristea) 's Twitter Profile Photo

New preprint on scGPT: the first foundation large language model for single-cell biology, pretrained on 10 million cells. Just as text is made of words, cells are characterized by genes. Some thoughts on how cool this is & why it challenges the status quo of single-cell analysis 🧵🧵

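The "cells are made of genes" analogy can be made concrete by representing a cell as a sequence of gene tokens, the way a sentence is a sequence of word tokens. This is a toy illustration with a made-up vocabulary and a simple rank-by-expression scheme, not the scGPT tokenization pipeline.

```python
# Tiny hypothetical gene vocabulary (real vocabularies cover ~20k genes).
gene_vocab = {"<pad>": 0, "CD3D": 1, "MS4A1": 2, "LYZ": 3, "GNLY": 4}

def cell_to_tokens(expression, top_n=3):
    """Turn a {gene: count} expression profile into token ids of the
    top-n most expressed genes, ranked by count."""
    ranked = sorted(expression, key=expression.get, reverse=True)[:top_n]
    return [gene_vocab[g] for g in ranked]

# A toy T-cell-like profile: CD3D dominates, so it leads the "sentence".
t_cell = {"CD3D": 90, "LYZ": 2, "GNLY": 30, "MS4A1": 1}
print(cell_to_tokens(t_cell))  # [1, 4, 3]: CD3D, GNLY, LYZ
```

Once cells are token sequences, standard language-model machinery (embeddings, attention, pretraining objectives) can be applied to them.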
Eric Hedlin (@iamerichedlin) 's Twitter Profile Photo

Did you know that you can estimate correspondences with Stable Diffusion in an unsupervised manner? Our new method surpasses weakly supervised methods and closes the gap to strongly supervised methods. Our new paper shows you how! Paper: arxiv.org/abs/2305.15581 See thread ⬇️
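A generic sketch of the matching step behind such methods: given a dense feature per pixel in two images, a correspondence for each pixel in image A is its most similar feature in image B. The paper's contribution is getting good features out of Stable Diffusion; here the features are random stand-ins, so only the nearest-neighbor matching is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, d = 4, 4, 8
feats_a = rng.normal(size=(h * w, d))  # one d-dim feature per pixel of A
feats_b = rng.normal(size=(h * w, d))  # one d-dim feature per pixel of B

# Cosine similarity between every pixel in A and every pixel in B.
a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
sim = a @ b.T

# Correspondence for each pixel in A: index of its best match in B.
matches = sim.argmax(axis=1)
print(matches.shape)  # (16,)
```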

Noam Brown (@polynoamial) 's Twitter Profile Photo

Not easy, but if we can figure out how to extend this to LLMs the impact would be huge. Imagine having access to models that take 5 minutes to ponder each response but whose output is as good as a model that's 1,000x larger and trained for 1,000x longer than GPT-4.

Justine Moore (@venturetwins) 's Twitter Profile Photo

I think I've uncovered the next Turing test for AI video: writing. I've tried this prompt dozens of times on every model: "man writes 'hi' in chalk on blackboard" None can do it. Veo 2 (below) gets the closest. It's actually frustrating to watch!

Niko McCarty 🧫 (@nikomccarty) 's Twitter Profile Photo

A non-hyped explainer of the "cell simulation" paper. The recent study about the "4D" simulation of a minimal cell has been getting a lot of attention on social media. Unfortunately, most posts about it have serious errors. I've seen people claim that the model simulates every

Athul Paul Jacob (@apjacob03) 's Twitter Profile Photo

1/ We built a compiler that turns arbitrary programs into transformer weights. You give it C code, it produces a standard transformer that executes that program step by step.

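A toy illustration of the general "programs as weights" idea: a tiny program whose state transition is baked into a weight matrix, so running the program is just repeated matrix multiplication. This is not the authors' compiler (which targets full transformer architectures from C code), only a sketch of how fixed weights can encode deterministic control flow.

```python
import numpy as np

n_states = 4
# Program: state i -> state (i + 1) mod 4, i.e. a counter.
# Column i of T routes all mass to row (i + 1) mod 4.
T = np.zeros((n_states, n_states))
for i in range(n_states):
    T[(i + 1) % n_states, i] = 1.0

state = np.zeros(n_states)
state[0] = 1.0            # start in state 0, encoded one-hot

for _ in range(3):        # "execute" three program steps
    state = T @ state

print(int(state.argmax()))  # 3: state after three increments
```

Each matrix multiply performs one step of the program; in a transformer-based construction, attention and MLP layers would play the role of T.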