Pierre Manceron (@phylliade) 's Twitter Profile
Pierre Manceron

@phylliade

@raidium_med

ID: 324584605

Joined: 26-06-2011 21:57:39

9 Tweets

68 Followers

976 Following

Eric W. Tramel (@fujikanaeda) 's Twitter Profile Photo

Another great moment for Owkin Lab! 👏Our Radiology Team wins again at JFR, this time applying their machine learning prowess to Sarcopenia screening. Great Work!🎉

Eric W. Tramel (@fujikanaeda) 's Twitter Profile Photo

We love seeing all the progress on differential privacy & ML, but we were frustrated that there aren’t great efficient implementations of the application of DP to DCNNs; Ouch! We proposed today some techniques that can speed things up a bit in PyTorch: arxiv.org/abs/1912.06015
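The tweet above is about making differentially private training of deep CNNs efficient in PyTorch. As a rough illustration of the core DP-SGD idea the paper builds on (not the paper's own optimizations), here is a minimal numpy sketch: clip each per-sample gradient, average, then add calibrated Gaussian noise. The function name and parameters are illustrative, not from the referenced work.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update direction: clip each per-sample gradient to
    clip_norm, average the clipped gradients, then add Gaussian noise
    scaled by noise_multiplier * clip_norm / batch_size."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # scale down gradients whose L2 norm exceeds the clipping bound
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_sample_grads),
        size=mean_grad.shape,
    )
    return mean_grad + noise
```

The expensive part in practice, and the focus of efficiency work like the paper linked above, is computing per-sample gradients for convolutional layers without materializing one full gradient per example.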

Owkin (@owkinscience) 's Twitter Profile Photo

Owkin listed for the second year in a row in CB Insights #AI100 list of #AI startups redefining industries! 🎉 It's awesome to be included among so many global key players. #medicalresearch #AIforhealthcare #GoOwkin cbinsights.com/research/artif…

Raidium (@raidium_med) 's Twitter Profile Photo

Natural language has GPT3, Programming has OpenAI Codex, DALL-E has CLIP: Radiology needs a foundation model, to empower the radiologist with a new generation of AI assistance. A short read about foundation models medium.com/raidium/is-ai-… #foundationmodel #radiology #gpt3

Simon Jegou (@simon_jegou) 's Twitter Profile Photo

🤔 I noticed that the head of a GPT-like model can be applied to its intermediate layers to predict the next token. We can even decide where to stop inference by thresholding the maximum logits. Is this a widely-known phenomenon? Could this approach speed up inference? (1/4)

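The thread above describes an early-exit scheme: apply the final LM head to each intermediate layer's hidden state and stop as soon as the maximum logit clears a threshold. A minimal numpy sketch of that loop, with toy shapes and an illustrative threshold (none of these names or values come from the thread):

```python
import numpy as np

def early_exit_decode(hidden_states, head_weights, threshold=8.0):
    """Apply a shared LM head (vocab x hidden unembedding matrix) to each
    layer's hidden state in order; return (exit_layer, predicted_token) at
    the first layer whose max logit reaches the threshold, falling back to
    the last layer otherwise."""
    for depth, h in enumerate(hidden_states):
        logits = head_weights @ h  # project hidden state to vocabulary logits
        if logits.max() >= threshold:
            return depth, int(logits.argmax())
    # no layer was confident enough: use the final layer's prediction
    return len(hidden_states) - 1, int(logits.argmax())
```

Skipping the remaining layers whenever an early layer is already confident is where the potential inference speed-up would come from; the open question in the thread is whether the intermediate predictions are reliable enough for this to work in practice.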