LIT AI Lab & ELLIS Unit Linz (@litailab)'s Twitter Profile
LIT AI Lab & ELLIS Unit Linz

@litailab

The LIT Lab is committed to scientific excellence. Our focus is on theoretical and experimental research in machine learning and artificial intelligence.

ID: 1198944544877924352

Link: https://bit.ly/2DfsiRc · Joined: 25-11-2019 12:41:26

208 Tweets

542 Followers

594 Following

Adra | The AI, Data and Robotics Association (@adra_eu_)'s Twitter Profile Photo

📢 Join us on June 26, 11.00 – 12.00 CEST for Vision 2030: Strategic orientation towards #AI, #Data, #Robotics 2025-2027! 🌐 Learn about the #roadmap, explore key priorities, and get involved in shaping the future of new solutions to global challenges: t.ly/H1RIB

Philipp Seidl (@phseidl)'s Twitter Profile Photo

🚀Exciting update on the🗜️CLAMP repo! Now you can train models and do 🧪linear probing. Plus, a new pretrained model on an updated PubChem dataset is coming soon! Boost your drug discovery prediction capabilities with CLAMP. Check it out github.com/ml-jku/clamp/t… #AI #DrugDiscovery
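
For a concrete feel of what linear probing on top of frozen molecule embeddings can look like, here is a minimal sketch. It is not taken from the CLAMP repository: embed_molecules and the toy activity labels are hypothetical stand-ins, and only the general recipe (freeze the encoder, fit a linear classifier on its embeddings) reflects what the tweet describes.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical placeholder: in practice this would call a frozen pretrained
    # molecule encoder and return one embedding vector per molecule.
    def embed_molecules(smiles_list):
        rng = np.random.default_rng(0)
        return rng.normal(size=(len(smiles_list), 768))

    smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"] * 50   # toy molecules
    labels = np.array([0, 1, 0, 1] * 50)                  # toy activity labels

    X = embed_molecules(smiles)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, random_state=0)

    # Linear probe: the encoder stays frozen, only this classifier is fitted.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", probe.score(X_test, y_test))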

Fabian Paischer (@paischerfabian)'s Twitter Profile Photo

Excited to share our latest work on a semantic and interpretable memory module for RL! Complementary to recent developments in the realm of explainable AI, we focus on interpretability w.r.t. the memory of an agent. 1/n

Sepp Hochreiter (@hochreitersepp)'s Twitter Profile Photo

Super excited about our adversarial models (not examples!) for uncertainty quantification. Model prediction is uncertain if other models with large posterior predict differently. Improved integral approximation by mixture importance sampling based on constraint optimization. Cool
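
As a rough illustration of the underlying intuition (a prediction is uncertain when other plausible models disagree with it), here is a toy disagreement measure over an ensemble. It deliberately uses random stand-in predictions and plain ensemble disagreement, not the adversarial model search or mixture importance sampling the tweet refers to.

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend K models all explain the training data well (high posterior) and
    # each outputs class probabilities for the same test inputs.
    K, n_inputs, n_classes = 5, 3, 4
    probs = rng.dirichlet(np.ones(n_classes), size=(K, n_inputs))  # shape (K, N, C)

    mean_pred = probs.mean(axis=0)                                 # ensemble prediction
    # Disagreement: average KL divergence of each model from the ensemble mean.
    kl = np.sum(probs * (np.log(probs) - np.log(mean_pred)), axis=-1)
    uncertainty = kl.mean(axis=0)                                  # one score per input

    for i, u in enumerate(uncertainty):
        print(f"input {i}: class {mean_pred[i].argmax()}, disagreement {u:.3f}")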

Fabian Paischer (@paischerfabian)'s Twitter Profile Photo

Thanks AK (@_akhaliq) for sharing! SITTA unlocks zero-shot image captioning via a generative language model by aligning its embedding space with that of a pretrained vision encoder, without any access to gradient information. 1/6
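
A minimal sketch of the kind of gradient-free alignment alluded to here: fitting a linear map from a vision encoder's embedding space into a language model's embedding space by ordinary least squares, using only frozen embeddings of paired anchors. The random arrays are stand-ins; this illustrates the general idea, not the SITTA implementation itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins: paired embeddings of the same anchor concepts in both spaces.
    n_pairs, d_vision, d_lm = 1000, 512, 768
    V = rng.normal(size=(n_pairs, d_vision))   # frozen vision-encoder embeddings
    T = rng.normal(size=(n_pairs, d_lm))       # frozen language-model embeddings

    # Closed-form least-squares fit of a linear map W with V @ W ≈ T.
    # No backpropagation through either model is required.
    W, *_ = np.linalg.lstsq(V, T, rcond=None)

    new_image_embedding = rng.normal(size=(1, d_vision))
    projected = new_image_embedding @ W        # now lives in the LM embedding space
    print(projected.shape)                     # (1, 768)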

ELLIS (@ellisforeurope)'s Twitter Profile Photo

Curious about the latest news & events in the ELLIS network? Then subscribe to our #newsletter and stay up to date on upcoming workshops, our #PhD Program, announcements of our units across Europe and job opportunities in modern #AI research! ➡️ ellis.eu/newsletter

Sepp Hochreiter (@hochreitersepp)'s Twitter Profile Photo

ArXiv arxiv.org/abs/2307.15818: Vision-language models trained on web data used for end-to-end robotic control. Generalization to novel objects, interpretation of novel commands, rudimentary reasoning, multi-stage semantic reasoning. E.g. figuring out which object can be used as a hammer.

Sebastian (@sebsanokowski)'s Twitter Profile Photo

🧩 We overcome the hurdles of sequential sampling with Subgraph Tokenization: our GNNs learn to predict good configurations of an entire subgraph instead of just a single node. This not only boosts time efficiency but is also key to obtaining high-quality solutions. 🚀
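
To make the contrast concrete, here is a toy sketch of sampling a binary node configuration subgraph-by-subgraph instead of node-by-node. The fixed-size grouping and uniform token probabilities are placeholders; the GNN that actually parameterizes these distributions in the paper is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, k = 12, 4            # 12 binary node variables, subgraphs of size k

    # Node-by-node sampling: one step per node -> n_nodes sequential steps.
    node_by_node = [int(rng.integers(0, 2)) for _ in range(n_nodes)]

    # Subgraph tokenization: each step samples one of 2**k joint configurations
    # ("tokens") for k nodes at once -> only n_nodes // k sequential steps.
    solution = []
    for _ in range(n_nodes // k):
        token = int(rng.integers(0, 2 ** k))   # placeholder for a learned distribution
        solution.extend((token >> i) & 1 for i in range(k))

    print("sequential steps:", n_nodes, "vs", n_nodes // k)
    print("sampled configuration:", solution)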

Fabian Paischer (@paischerfabian)'s Twitter Profile Photo

Interested in a semantic memory for reinforcement learning? I was recently invited to a podcast talking about our #NeurIPS2023 paper: Semantic HELM (arxiv.org/abs/2306.09312). In case you are interested, you can stream the episode here: open.spotify.com/episode/4n2lmC…

https://bsky.app/profile/aidd.bsky.social (@aiddone)'s Twitter Profile Photo

Slides of Igor Tetko's lecture at PharmaCampus, University of Münster @PharmaCampus_MS are available at ai-dd.eu/news (see lecture overview at uni-muenster.de/Chemie.pz/phar…). Many thanks to Oliver Koch and his Koch Group (@kochgroup_ms) for hospitality!

Johannes Brandstetter (@jo_brandstetter)'s Twitter Profile Photo

We introduce Geometry-Informed Neural Networks to train shape generative models without any data (!!), combining learning under constraints, neural fields as a suitable representation, and generating diverse solutions to under-determined problems: 🖥️: arturs-berzins.github.io/GINN/
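
As a loose, data-free toy analogue of "learning under constraints with a neural field": a tiny MLP maps 2D coordinates to a scalar field and is trained purely from penalty terms (an interface constraint on a prescribed circle plus an eikonal-style gradient penalty), with no example shapes. The architecture and losses below are illustrative choices and not the GINN formulation.

    import torch

    torch.manual_seed(0)

    # Tiny neural field: (x, y) -> scalar value, intended to behave like a signed distance.
    field = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)

    for step in range(200):
        # Interface constraint: the field should vanish on a prescribed circle of radius 0.5.
        theta = torch.rand(256, 1) * 2 * torch.pi
        circle = 0.5 * torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)
        interface = field(circle).pow(2).mean()

        # Eikonal-style constraint: gradients of the field should have unit norm.
        pts = (torch.rand(256, 2) * 2 - 1).requires_grad_(True)
        grad = torch.autograd.grad(field(pts).sum(), pts, create_graph=True)[0]
        eikonal = (grad.norm(dim=1) - 1).pow(2).mean()

        loss = interface + 0.1 * eikonal
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("final constraint loss:", float(loss))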