Jonathan Funk (@jonathanfunk12)'s Twitter Profile
Jonathan Funk

@jonathanfunk12

PhD student building AI for Protein Design | Chemist by training 👨🏻‍🔬 | Techno-Optimist | Let's build a better future! 🇩🇪/🇵🇭

ID: 976178249322651651

Joined: 20-03-2018 19:26:45

259 Tweets

210 Followers

734 Following

Biology+AI Daily (@biologyaidaily)

Life as a Function: Why Transformer Architectures Struggle to Gain Genome-Level Foundational Capabilities

1. This study examines the limits of transformer architectures in capturing the foundational dynamics of genomic data, focusing on DNA sequences as functional outputs of
William Gilpin (@wgilpin0)

In biology we measure downstream variables like genes, neurons, or species. But we can’t always measure their underlying causes. In Physical Review X I present a physics-based algorithm for learning causal drivers from time series (1/n) go.aps.org/3PwXhxe

Nature Biotechnology (@naturebiotech)

Deep learning methods aid in de novo design of proteins to neutralize lethal snake venom toxins in vitro and protect mice from a lethal neurotoxin challenge. nature.com/articles/s4158… #NBThighlight

Science Magazine (@sciencemagazine)

An #AI model created to design proteins simulates 500 million years of protein evolution in developing a previously unknown bright fluorescent protein.

Learn more in a new Science study: scim.ag/4jhJ9Wa
Biology+AI Daily (@biologyaidaily)

Large Language Model is Secretly a Protein Sequence Optimizer

1/ This paper demonstrates that large language models (LLMs), originally trained on massive text datasets, can be effectively used as protein sequence optimizers. By integrating them into a directed evolutionary
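The loop the tweet describes, an LLM acting as the mutation proposer inside directed evolution, can be illustrated with a toy propose-score-select sketch. Everything here is a stand-in, not the paper's actual method: `fitness` replaces a wet-lab assay or learned fitness model, and `llm_propose` replaces prompting an LLM with the best sequences found so far (modeled as random point mutations).

```python
import random

random.seed(0)

WILD_TYPE = "MKTAYIAKQR"  # toy 10-residue sequence; real work uses full proteins
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fitness(seq):
    # Stand-in for an assay or learned predictor: similarity to an
    # arbitrary "optimal" sequence (purely illustrative).
    target = "MKTAYIAKQV"
    return sum(a == b for a, b in zip(seq, target))

def llm_propose(parents, n_children=8):
    # Stand-in for asking an LLM for improved variants of the parents;
    # here: copy a random parent and apply one random point mutation.
    children = []
    for _ in range(n_children):
        child = list(random.choice(parents))
        pos = random.randrange(len(child))
        child[pos] = random.choice(AMINO_ACIDS)
        children.append("".join(child))
    return children

def directed_evolution(start, rounds=20, top_k=4):
    pool = [start]
    for _ in range(rounds):
        pool = pool + llm_propose(pool)
        # Select: keep only the fittest unique sequences for the next round.
        pool = sorted(set(pool), key=fitness, reverse=True)[:top_k]
    return pool[0]

best = directed_evolution(WILD_TYPE)
```

Because selection always keeps the fittest sequences seen so far, the returned sequence is never worse than the starting point; the LLM's role in the paper is to make the proposal step far smarter than random mutation.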
Jorge Bravo (@bravo_abad)

Advancing enzyme engineering with an active learning method

Many modern applications in catalysis, green chemistry, and biotechnology hinge on enzymes that are precisely tailored for specific tasks. Directed evolution has been a major driver of enzyme optimization, but
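The active-learning idea mentioned above (measure a few variants, fit a surrogate model, then choose the next variant by balancing predicted activity against model uncertainty) can be sketched with toy stand-ins. The `assay` function, the 1-nearest-neighbour surrogate, and the distance-based uncertainty bonus are all illustrative assumptions, not the method from the paper.

```python
# Toy search space: each "enzyme variant" is a point x with unknown
# activity assay(x). In practice x would encode mutations and assay()
# would be a real laboratory measurement.
def assay(x):
    return -(x - 3.7) ** 2  # hypothetical activity landscape, peaked at 3.7

candidates = [i * 0.1 for i in range(100)]  # variants 0.0 .. 9.9

def predict(x, labeled):
    # Minimal surrogate model: 1-nearest-neighbour regression.
    return min(labeled, key=lambda xy: abs(xy[0] - x))[1]

def acquisition(x, labeled):
    # Predicted activity plus a bonus for being far from measured points,
    # a crude stand-in for model uncertainty (explore vs. exploit).
    dist = min(abs(x - lx) for lx, _ in labeled)
    return predict(x, labeled) + 5.0 * dist

# Seed the loop with three measured variants, then run ten rounds of
# "pick the most promising unmeasured variant, measure it, refit".
labeled = [(x, assay(x)) for x in (0.0, 5.0, 9.9)]
for _ in range(10):
    measured = {lx for lx, _ in labeled}
    pool = [x for x in candidates if x not in measured]
    x_next = max(pool, key=lambda x: acquisition(x, labeled))
    labeled.append((x_next, assay(x_next)))

best_x, best_y = max(labeled, key=lambda xy: xy[1])
```

The point of the loop is sample efficiency: only thirteen variants are ever "measured", far fewer than screening all one hundred, which is the appeal over classic directed evolution when each measurement is expensive.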
Samuel Hume (@drsamuelbhume)

Top 5 advances in medicine this week (🧵)

1. AI-designed snake anti-venoms

These inhibit the 'three-finger toxins' that make the venom of snakes like cobras and mambas lethal - cheaper, with higher affinity and more scalability than current therapies

nature.com/articles/s4158…
Akshay 🚀 (@akshay_pachaar)

Build human-like memory for your Agents! Every agentic and RAG system struggles with real-time knowledge updates and fast data retrieval. Zep AI solves these issues with its continuously learning and temporally-aware Knowledge Graph—think of it as human memory for AI Agents.
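The core idea of a temporally-aware knowledge graph is that facts carry validity intervals, so the memory can answer "what was true at time t" rather than only "what is true now". As a hedged illustration (this is not Zep's actual API or data model), a minimal temporal fact store might look like:

```python
from dataclasses import dataclass, field

@dataclass
class TemporalKG:
    # Each fact is (subject, relation, object) with a valid-from time and
    # an optional valid-to time; an open valid-to means "still true".
    facts: list = field(default_factory=list)

    def add(self, s, r, o, t):
        # Close out any previous value of (s, r) before recording the new
        # one, so history is preserved instead of overwritten.
        for f in self.facts:
            if f["s"] == s and f["r"] == r and f["to"] is None:
                f["to"] = t
        self.facts.append({"s": s, "r": r, "o": o, "from": t, "to": None})

    def query(self, s, r, t):
        # Return the object that was valid at time t, if any.
        for f in self.facts:
            if (f["s"] == s and f["r"] == r
                    and f["from"] <= t and (f["to"] is None or t < f["to"])):
                return f["o"]
        return None

kg = TemporalKG()
kg.add("user", "works_at", "Acme", t=1)
kg.add("user", "works_at", "Globex", t=5)
```

With this shape, `kg.query("user", "works_at", 3)` returns the old employer while a query after t=5 returns the new one, which is the behavior a continuously updated agent memory needs.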

Biology+AI Daily (@biologyaidaily)

DisDock: A Deep Learning Method for Metal Ion-Protein Redocking

1. DisDock introduces a deep learning model designed to predict protein-metal docking with high accuracy, particularly focusing on the interaction of metal ions with proteins using a distance-based approach.

2.
VantAI (@vant_ai)

Announcing Neo-1: the world’s most advanced atomistic foundation model, unifying structure prediction and all-atom de novo generation for the first time - to decode and design the structure of life 🧵(1/10)

Clemens Isert (@clemensisert)

Really cool to see Neo-1 released. Latent diffusion to combine structure prediction and molecular generation across modalities. #ai #drugdiscovery #genai

Biology+AI Daily (@biologyaidaily)

De novo design of miniprotein agonists and antagonists targeting G protein-coupled receptors

🚀 New preprint from David Baker!🚀

1. This paper introduces a computational and experimental approach for designing miniproteins targeting G protein-coupled receptors (GPCRs) with high
Yehlin Cho (@choyehlin)

Excited to share our preprint “BoltzDesign1: Inverting All-Atom Structure Prediction Model for Generalized Biomolecular Binder Design” — a collaboration with Martin Pacesa, Zhidian Zhang, Bruno E. Correia, and Sergey Ovchinnikov. 🧬 Code will be released in a couple of weeks

Furong Huang (@furongh)

🧠💡 What if your 7B model could beat GPT-4o and Qwen2.5-72B—using just 11k training samples? No distillation. No warm-start. Just smart data and reinforcement learning. Inspired by Moravec’s Paradox, we let the model decide what's actually hard. 🚨 New paper: "SoTA with Less: