Reinhard Heckel (@heckelreinhard)'s Twitter Profile
Reinhard Heckel

@heckelreinhard

Associate Professor at Technical University of Munich and Adjunct Faculty at Rice University

ID: 1105251371220004864

Joined: 11-03-2019 23:37:16

58 Tweets

469 Followers

303 Following

Frankfurter Allgemeine gesamt (@faz_net)'s Twitter Profile Photo

Yes, impressive artificial intelligence requires vast amounts of data and enormous computing power. But that alone is not enough. What everyone now needs to know about this technology. A guest article by Reinhard Heckel faz.net/aktuell/wirtsc…

Efrat Shimron (@efrat_shimron)'s Twitter Profile Photo

Exciting news: our k-band paper is out! arxiv.org/abs/2308.02958 k-band is a framework for self-supervised MRI reconstruction w/o fully sampled high-res data. Very proud of our team's work! Co-first authors: Frederic Wang & Han Qi. Colleagues: Alfredo De Goyeneche, Reinhard Heckel, Miki Lustig

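The tweet above describes k-band only at a high level. As a rough illustration of the self-supervised setting it addresses, namely training a reconstruction network when no fully sampled reference images exist, here is a minimal PyTorch sketch in which the training loss is evaluated only at acquired k-space locations. This is a generic sketch of the idea, not the authors' method or code; `net`, the variable names, and the toy data are all assumptions.

```python
import torch

def self_supervised_kspace_loss(net, kspace, mask):
    """kspace: complex measurements, zero at unobserved entries.
    mask: 1.0 where a sample was acquired, 0.0 elsewhere."""
    # Crude network input: the zero-filled inverse FFT of the measurements.
    zero_filled = torch.fft.ifft2(kspace)
    # One common convention: stack real/imaginary parts as channels.
    x = torch.view_as_real(zero_filled).permute(0, 3, 1, 2)
    out = net(x)  # (batch, 2, H, W)
    recon = torch.view_as_complex(out.permute(0, 2, 3, 1).contiguous())
    # Compare the reconstruction to the data in k-space, but only at
    # locations that were actually measured: no fully sampled
    # reference is ever needed.
    pred = torch.fft.fft2(recon)
    return (torch.abs(pred - kspace) * mask).mean()

# Hypothetical usage with a toy network and random "measurements".
net = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)
kspace = torch.randn(1, 64, 64, dtype=torch.complex64)
mask = (torch.rand(1, 64, 64) < 0.3).float()
loss = self_supervised_kspace_loss(net, kspace * mask, mask)
loss.backward()
```

k-band's specific contribution goes further than this sketch: per the abstract, it enables training when even the acquired training data covers only limited bands of k-space, a detail omitted here.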
Mikel Hernaez (@mikelhernaez)'s Twitter Profile Photo

Join us at the Graph ML Meetup in Madrid of the Learning on Graphs Conference 2023, from November 27th to 29th, 2023! Keynote talks by Reinhard Heckel, Ivan Dokmanić, and Xiaowen Dong, along with talks and poster sessions. The call is open until November 8th! logmeetupmadrid.github.io

samir gadre (@sy_gadre)'s Twitter Profile Photo

sharing some highlights from our recent paper: language models scale reliably with over-training and on downstream tasks! arxiv: arxiv.org/abs/2403.08540 104 models, 11M to 7B parameters, varying numbers of tokens, 3 datasets, eval’d on 46 tasks: github.com/mlfoundations/… 1/11

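For context on the thread above: a scaling law predicts a model's validation loss from quantities such as training compute, and the paper's claim is that such fits remain reliable in the over-trained regime and for downstream task accuracy. A minimal sketch of the basic curve-fitting mechanic, with hypothetical numbers rather than the paper's data or code, might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law-plus-constant form commonly used in scaling-law fits:
# loss(C) ~= a * C**(-b) + irreducible, with C the training compute.
def power_law(c, a, b, irreducible):
    return a * c ** (-b) + irreducible

# Hypothetical (compute, validation-loss) measurements; compute is in
# units of 1e17 FLOPs to keep the fit numerically well-behaved.
compute = np.array([1.0, 10.0, 100.0, 1000.0])
loss = np.array([3.9, 3.3, 2.9, 2.6])

params, _ = curve_fit(power_law, compute, loss, p0=[1.5, 0.3, 2.4])
a, b, irreducible = params
print(f"fit: loss ≈ {a:.2f} * C^-{b:.2f} + {irreducible:.2f}")
# Extrapolate to a 10x larger budget than the largest run.
print("predicted loss at C=10000:", power_law(10000.0, a, b, irreducible))
```

This shows only the fitting step; the paper's contribution is demonstrating that fits made on smaller runs extrapolate, including to models trained far past the compute-optimal token budget and to downstream evals.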
BMBF (@bmbf_bund)'s Twitter Profile Photo

This fountain pen has something special in it, namely the #Grundgesetz (Germany's Basic Law): the ink contains #DNA in which our constitution (#Verfassung) is encoded. Federal Research Minister Bettina Stark-Watzinger enthusiastically accepted the gift from the art project "DNA unserer Verfassung" ("The DNA of our Constitution"). #75JahreGrundgesetz

Mahdi Soltanolkotabi (@mahdisoltanol)'s Twitter Profile Photo

🚨 Introducing MediConfusion: A new challenging VQA benchmark for Medical MLLMs! 🚨 All available models score below random guessing on MediConfusion, raising serious concerns about their reliability for healthcare deployment. with Shahab, Zalan Fabian, Maryam Soltanolkotabi 🧵 1/6

Ryan Marten (@ryanmart3n)'s Twitter Profile Photo

Announcing OpenThinker3-7B, the new SOTA open-data 7B reasoning model: improving over DeepSeek-R1-Distill-Qwen-7B by 33% on average over code, science, and math evals. We also release our dataset, OpenThoughts3-1.2M, which is the best open reasoning dataset across all data

Negin Raoof (@neginraoof_)'s Twitter Profile Photo

How can we make a better TerminalBench agent? Today, we are announcing the OpenThoughts-Agent project. OpenThoughts-Agent v1 is the first TerminalBench agent trained on fully open curated SFT and RL environments. OpenThinker-Agent-v1 is the strongest model of its size on
