Patrick Schramowski (@schrame90) 's Twitter Profile
Patrick Schramowski

@schrame90

ID: 408718006

Joined: 09-11-2011 20:04:42

95 Tweets

156 Followers

63 Following

LAION (@laion_ai) 's Twitter Profile Photo

HUGE THANKS to everyone who supported us! - Our LAION-5B paper has just received the Outstanding Paper Award at NeurIPS 2022. If you want to support our next large scale projects, join our Discord. 100% Open AI. From the Community, for the Community. discord.gg/7VkRyFtBuE

Kristian Kersting (@kerstingaiml) 's Twitter Profile Photo

🙏 for pushing our work 👇 Two key lessons are: (1) having the human "artist" in the loop is advantageous, and (2) like word2vec diffusion allows for arithmetic! Here's the classic "king - male + female = queen"

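The "king - male + female = queen" arithmetic can be sketched in a few lines. This is a toy illustration only: real word2vec vectors are high-dimensional and learned from corpora, whereas the hand-picked 2-D vectors below (gender, royalty) are an assumption chosen so the arithmetic works out; the tweet's point is that diffusion latents admit the same kind of vector arithmetic.

```python
import numpy as np

# Toy embedding space: dimensions are (gender, royalty).
# Hand-picked for illustration; real embeddings are learned.
vocab = {
    "king":   np.array([ 1.0, 1.0]),
    "queen":  np.array([-1.0, 1.0]),
    "male":   np.array([ 1.0, 0.0]),
    "female": np.array([-1.0, 0.0]),
}

def nearest(query, exclude):
    """Return the vocab word whose vector has highest cosine similarity to query."""
    best, best_sim = None, -np.inf
    for word, vec in vocab.items():
        if word in exclude:
            continue
        sim = vec @ query / (np.linalg.norm(vec) * np.linalg.norm(query))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

analogy = vocab["king"] - vocab["male"] + vocab["female"]
print(nearest(analogy, exclude={"king", "male", "female"}))  # → queen
```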
AK (@_akhaliq) 's Twitter Profile Photo

Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness

abs: arxiv.org/abs/2302.10893
github: github.com/ml-research/Fa…
Manuel Brack (@mbrack_aiml) 's Twitter Profile Photo

Very happy to share our latest paper on inappropriate degeneration in text-to-image models and its mitigation: arxiv.org/abs/2305.18398 In total, we evaluated over 1.5M images across 11 models and different mitigation strategies. Takeaways: 1/n 🧵 Center for Research on Foundation Models

LAION (@laion_ai) 's Twitter Profile Photo

We release LeoLM, a series of German-speaking chat models based on Llama 2 7b and 13b. They preserve their ability to understand English text. We also release versions that can reply in English and in German & perform translation between these languages: laion.ai/blog/leo-lm/

Kristian Kersting (@kerstingaiml) 's Twitter Profile Photo

🥳Thirteen (13!) NeurIPS Conference 2023 papers! Super proud of our Cluster of Excellence initiative RAI — Reasonable #AI 🚀 Learning+Reasoning. Composable. Human-Like. Open. Decentralized. Together
Mistral AI (@mistralai) 's Twitter Profile Photo

magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%https://t.co/g0m9cEUz0T%3A80%2Fannounce RELEASE a6bbd9affe0c2725c1b7410d66833e24

Together AI (@togethercompute) 's Twitter Profile Photo

Announcing StripedHyena 7B — an open-source model using an architecture that goes beyond Transformers, achieving faster performance and longer context.

It builds on the lessons learned in the past year designing efficient sequence-modeling architectures.

together.ai/blog/stripedhy…
Manuel Brack (@mbrack_aiml) 's Twitter Profile Photo

We’re presenting SEGA today at NeurIPS, which was one of the works leading up to LEDITS++.

See you today at poster #542 from 10:45am-12:45pm.
OcciGlot (@occiglot) 's Twitter Profile Photo

Today, we are announcing Occiglot! A large-scale collaborative research collective focusing on open-source European LLMs. We invite anybody working on multilingual datasets, benchmarks, or models to get in touch/join our discord. occiglot.github.io/occiglot/posts…

OcciGlot (@occiglot) 's Twitter Profile Photo

We are also releasing the first set of strong 7B models for French, German, Spanish, and Italian and a joint EU5 one. Both pre-trained and instruction-tuned variants! huggingface.co/collections/oc…

ontocord (@ontocord) 's Twitter Profile Photo

arxiv.org/abs/2404.08676
🚨Introducing ALERT🚨, a large-scale safety benchmark of > 45k instructions categorized using our new taxonomy.
✨Using prior work such as Bo Li's Decoding Trust and our Aurora-m huggingface.co/datasets/auror…
and Jack Clark's awesome Anthropic-HH.
🙏
Manuel Brack (@mbrack_aiml) 's Twitter Profile Photo

I'm thrilled to announce LlavaGuard, a family of VLM-based safeguard models that offers a versatile framework for evaluating the safety compliance of visual content.

ml-research.github.io/human-centered…
Kristian Kersting (@kerstingaiml) 's Twitter Profile Photo

So happy about this work 👇Neurosymbolic AI requires both continuous and discrete bindings between neurons and symbols. 🙏 to the team

Kristian Kersting (@kerstingaiml) 's Twitter Profile Photo

📢Forget about classical #tokenizers! Embed words via sparse activations over character triplets. (1) Similar downstream performance with fewer parameters. (2) Significant improvements in cross-lingual transfer learning. Joint work with Aleph Alpha

👉arxiv.org/abs/2406.19223
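The general idea of embedding words via character triplets can be sketched as follows. This is a minimal illustration, not the paper's exact method: the boundary markers, bucket count, md5-based hashing, and summation of embedding rows are all assumptions chosen for a runnable example.

```python
import hashlib
import numpy as np

def char_trigrams(word):
    """Character triplets of a word, with ^/$ boundary markers."""
    padded = f"^{word}$"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def trigram_ids(word, num_buckets=4096):
    """Hash each trigram to a stable bucket id (the sparse activation indices)."""
    return sorted({
        int(hashlib.md5(t.encode()).hexdigest(), 16) % num_buckets
        for t in char_trigrams(word)
    })

def embed(word, table):
    """Word vector = sum of the embedding rows its trigrams activate."""
    return table[trigram_ids(word, table.shape[0])].sum(axis=0)

rng = np.random.default_rng(0)
table = rng.normal(size=(4096, 8))   # num_buckets x embedding_dim
print(char_trigrams("cat"))          # ['^ca', 'cat', 'at$']
```

Note how morphologically related words ("cat"/"cats") share trigrams and hence embedding rows, which is one intuition behind the cross-lingual transfer gains.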
Paul Röttger (@paul_rottger) 's Twitter Profile Photo

Today, we are releasing MSTS, a new Multimodal Safety Test Suite for vision-language models!

MSTS is exciting because it tests for safety risks *created by multimodality*. Each prompt consists of a text + image that *only in combination* reveal their full unsafe meaning.

🧵
Antonia Wüst (@toniwuest) 's Twitter Profile Photo

📢 Update: We've deepened our exploration of VLMs on Bongard Problems with more rigorous evaluations!

The best-performing model (o1) we tested solved 43 out of 100 problems - progress, but still plenty of room for improvement!