Markus Hofmarcher (@mrkhof) 's Twitter Profile
Markus Hofmarcher

@mrkhof

PhD student @ JKU Linz, Institute for Machine Learning

ID: 1070444434083459073

Joined: 05-12-2018 22:26:55

24 Tweets

96 Followers

18 Following

Forest (@forestapp_cc) 's Twitter Profile Photo

【Final Sprint: #1MTreeChallenge】
Forest has planted 980 thousand trees in total and is about to hit 1 million!
Let’s cross this milestone together: we will donate 1 tree for every 10 Likes or 1 Retweet of this tweet.🌲

Save the Earth at your fingertips🌏!
Johannes Brandstetter (@jo_brandstetter) 's Twitter Profile Photo

Our paper "Hopfield Networks is All You Need" is accepted at #ICLR2021. Time to give some talks :) I am very honored to present our research today at the great platform of <a href="/ml_collective/">ML Collective</a> <a href="/savvyRL/">Rosanne Liu</a> (mlcollective.org/dlct/).
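The retrieval step of the modern Hopfield networks in that paper can be sketched in a few lines of NumPy. The update rule ξ_new = X · softmax(β · Xᵀ ξ) is the one from the paper; the dimensions, β, and the toy patterns below are purely illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hopfield_retrieve(X, xi, beta=8.0, steps=1):
    """One (or more) retrieval steps: xi_new = X @ softmax(beta * X.T @ xi)."""
    for _ in range(steps):
        xi = X @ softmax(beta * (X.T @ xi))
    return xi

# Toy example: retrieve a stored pattern from a noisy query.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 5))                  # 5 stored 16-dim patterns (columns)
query = X[:, 2] + 0.1 * rng.normal(size=16)   # noisy version of pattern 2
out = hopfield_retrieve(X, query)
```

With a large β the softmax is nearly one-hot, so a single update already snaps the noisy query back onto the stored pattern it came from.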
Johannes Brandstetter (@jo_brandstetter) 's Twitter Profile Photo

Wow, wanna see how to beat CLIP with the new CLOOB? Fantastic work led by my colleagues <a href="/fuerst_andreas/">Andreas</a> and <a href="/LizRumetshofer/">Elisabeth Rumetshofer</a> (Sepp Hochreiter's group) applying modern Hopfield networks to image-text data.

Paper: arxiv.org/abs/2110.11316
Blogpost: ml-jku.github.io/cloob
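CLOOB pairs Hopfield retrieval with an InfoLOOB objective, a contrastive loss whose denominator leaves the positive pair out. A minimal NumPy sketch of just the loss (the Hopfield retrieval step that CLOOB applies to the embeddings beforehand is omitted here, and `tau` is an illustrative temperature):

```python
import numpy as np

def infoloob_loss(U, V, tau=0.07):
    # U, V: L2-normalized embeddings, shape (n, d); row i of U pairs with row i of V.
    n = U.shape[0]
    S = (U @ V.T) / tau                 # temperature-scaled pairwise similarities
    pos = np.diag(S)
    mask = ~np.eye(n, dtype=bool)       # leave the positive pair out
    neg = np.log((np.exp(S) * mask).sum(axis=1))
    return float(np.mean(neg - pos))
```

Unlike InfoNCE, the positive similarity appears only in the numerator, which changes the saturation behavior of the objective.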
Fabian Paischer (@paischerfabian) 's Twitter Profile Photo

Excited to share our work on history compression via language models in RL, presented at #ICML2022🤩🤩. Our novel framework HELM⎈ augments an agent with a history compression module which leverages a pretrained language Transformer without any training or finetuning 🤯🤯 1/5

Bernhard Schäfl (@bschaefl) 's Twitter Profile Photo

Deep Learning on exceptionally small tabular datasets⁉️ With Hopular it is now possible❗ Hopular surpasses Gradient Boosting (e.g. XGBoost), Random Forests, and SVMs on tabular data. 🤯 🧵 (1/3)

Marius-Constantin Dinu (@dinumariusc) 's Twitter Profile Photo

We are excited to present our work, combining the power of a symbolic approach and Large Language Models (LLMs). Our Symbolic API bridges the gap between classical programming (Software 1.0) and differentiable programming (Software 2.0). GitHub: github.com/Xpitfire/symbo… [1/n]
Marius-Constantin Dinu (@dinumariusc) 's Twitter Profile Photo

This includes fact-based generation of text, flow control of a generative process towards a desired outcome, and interpretability within generative processes. GitHub: github.com/Xpitfire/symbo… [5/n]

Johannes Schimunek (@jschimunek) 's Twitter Profile Photo

🚀 Excited to share our #ICLR2023 work on
🚨 context-enriched molecule representations🚦 improve few-shot drug discovery 💊 🚨

Paper: openreview.net/pdf?id=XrMWUuE…
App: HuggingFace 🤗 under prep!

#ICLR2023 🧑‍💼 poster 🗨: 
iclr.cc/virtual/2023/p…
⏰ Wed 3 May 4:30 pm - 6:30 pm CAT
Fabian Paischer (@paischerfabian) 's Twitter Profile Photo

Excited to share our latest work on a semantic and interpretable memory module for RL! Complementary to recent developments in the realm of explainable AI, we focus on interpretability w.r.t. the memory of an agent.
1/n
Thomas Schmied (@thsschmied) 's Twitter Profile Photo

Excited to share our recent work on parameter-efficient fine-tuning in RL.  We pre-train a Decision Transformer (DT) on 50 tasks from two domains, and subsequently fine-tune on various down-stream tasks.  Joint work with <a href="/mrkhof/">Markus Hofmarcher</a>, <a href="/PaischerFabian/">Fabian Paischer</a>, Razvan, and <a href="/HochreiterSepp/">Sepp Hochreiter</a>.
1/n
Kajetan Schweighofer (@kschweig_) 's Twitter Profile Photo

🚀 Excited to share our latest research on quantifying the predictive uncertainty of machine learning models. QUAM searches for adversarial models (not adversarial examples!) to better estimate the epistemic uncertainty, the uncertainty about chosen model parameters.
1/5
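As a rough intuition (not QUAM's actual search procedure): epistemic uncertainty shows up as disagreement between models that all fit the training data. QUAM actively searches for maximally disagreeing ("adversarial") models; the stand-in below just measures disagreement across a given set of models, and all names are illustrative:

```python
import numpy as np

def epistemic_disagreement(models, x):
    # Spread of predictions across plausible models at input x:
    # zero where all models agree, large where they can be made to disagree.
    preds = np.array([m(x) for m in models])
    return preds.var(axis=0)
```

Passively sampled ensembles tend to underestimate this quantity, which is exactly the gap the adversarial-model search is meant to close.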
AK (@_akhaliq) 's Twitter Profile Photo

SITTA: A Semantic Image-Text Alignment for Image Captioning

paper page: huggingface.co/papers/2307.05…

Textual and semantic comprehension of images is essential for generating proper captions. The comprehension requires detection of objects, modeling of relations between them, an
Fabian Paischer (@paischerfabian) 's Twitter Profile Photo

Thanks <a href="/_akhaliq/">AK</a> for sharing!

SITTA unlocks zero-shot image captioning via a generative language model by aligning its embedding space with that of a pretrained vision encoder without any access to gradient information. 

1/6
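A gradient-free alignment of this kind can be sketched as an ordinary least-squares fit between paired embeddings, which has a closed-form solution and so never touches gradients. This is a simplification of SITTA's procedure; `fit_linear_map` and the toy data below are illustrative:

```python
import numpy as np

def fit_linear_map(V, T):
    # V: (n, d_v) vision embeddings, T: (n, d_t) target token embeddings.
    # Ordinary least squares has a closed-form solution -> no gradients needed.
    W, *_ = np.linalg.lstsq(V, T, rcond=None)
    return W

# Toy check: recover a known mapping from paired embeddings.
rng = np.random.default_rng(0)
W_true = rng.normal(size=(8, 4))
V = rng.normal(size=(64, 8))
T = V @ W_true
W = fit_linear_map(V, T)
```

Once fitted, mapped vision embeddings land in the language model's input space, so the frozen language model can decode captions directly.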
Johannes Brandstetter (@jo_brandstetter) 's Twitter Profile Photo

Personal update: last month, I re-joined the group of my mentor <a href="/HochreiterSepp/">Sepp Hochreiter</a> and my amazing colleague <a href="/gklambauer/">Günter Klambauer</a> in Linz, opening my own group "AI for data-driven simulations". We all share the vision to create a large-scale AI ecosystem in Linz. Big news to come soon 🚀
Elisabeth Rumetshofer (@lizrumetshofer) 's Twitter Profile Photo

🎉 Exciting news! Our latest work has been published in Nature Communications. 🎉 CLOOME utilizes contrastive learning to connect microscopy images and chemical structures, paving the way for major advancements in drug discovery and beyond.🌟🔬💊 📜nature.com/articles/s4146…

Fabian Paischer (@paischerfabian) 's Twitter Profile Photo

Interested in a semantic memory for reinforcement learning? I was recently invited to a podcast talking about our #NeurIPS2023 paper: Semantic HELM (arxiv.org/abs/2306.09312). In case you are interested, you can stream the episode here: open.spotify.com/episode/4n2lmC…

Marius-Constantin Dinu (@dinumariusc) 's Twitter Profile Photo

🚀 SymbolicAI – a framework for logic-based approaches combining generative models and solvers. Alongside, we introduce a benchmark and empirical measure to evaluate SOTA LLMs in AI-centric workflows. Read more in our paper arxiv.org/abs/2402.00854 #MachineLearning 🧠💡[1/n]

Sepp Hochreiter (@hochreitersepp) 's Twitter Profile Photo

I am so excited that xLSTM is out. LSTM is close to my heart - for more than 30 years now. With xLSTM we close the gap to existing state-of-the-art LLMs. With NXAI we have started to build our own European LLMs. I am very proud of my team. arxiv.org/abs/2405.04517

Marius-Constantin Dinu (@dinumariusc) 's Twitter Profile Photo

Excited to present our work “Large Language Models Can Self-Improve At Web Agent Tasks”. We show that synthetic data self-improvement boosts task completion by 31% on WebArena and introduce quality metrics for measuring autonomous agent workflows. #AI #MachineLearning #LLMs [1/n]