Wenqi Shaw (@shaowenqi126301)'s Twitter Profile
Wenqi Shaw

@shaowenqi126301

ID: 1668138681562177537

Joined: 12-06-2023 06:10:43

6 Tweets

7 Followers

21 Following

AK (@_akhaliq)'s Twitter Profile Photo


Tiny LVLM-eHub: Early Multimodal Experiments with Bard

paper page: huggingface.co/papers/2308.03…

Recent advancements in Large Vision-Language Models (LVLMs) have demonstrated significant progress in tackling complex multimodal tasks. Among these cutting-edge developments, Google's
OpenGVLab (@opengvlab)'s Twitter Profile Photo


Thanks AK for the post.
🔥 Excited to introduce OmniQuant - An advanced open-source algorithm for compressing large language models!
📜 Paper: arxiv.org/abs/2308.13137
🔗 Code: github.com/OpenGVLab/Omni…
💡 Key Features:
🚀 Omnidirectional Calibration: Enables easier weight
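OmniQuant's actual calibration procedure is described in the linked paper and repo. As a point of reference, here is a minimal sketch of the round-to-nearest (RTN) uniform weight quantization baseline that LLM compression methods of this kind are typically compared against; the function names and the 4-bit setting are illustrative, not taken from OmniQuant:

```python
import numpy as np

def quantize_weights(w, n_bits=4):
    """Per-tensor symmetric round-to-nearest quantization.

    A single scale maps the largest-magnitude weight onto the
    integer grid [-(2**(n_bits-1) - 1), 2**(n_bits-1) - 1].
    """
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    """Map quantized integers back to approximate float weights."""
    return q.astype(np.float32) * scale

w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, scale = quantize_weights(w, n_bits=4)
w_hat = dequantize(q, scale)

# For values inside the clipping range, the reconstruction error
# is bounded by half the quantization step size (scale / 2).
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

Calibration-based methods improve on this baseline by learning clipping ranges or weight transformations from a small calibration set rather than using the raw min/max statistics above.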
AK (@_akhaliq)'s Twitter Profile Photo


ImageBind-LLM: Multi-modality Instruction Tuning

paper page: huggingface.co/papers/2309.03…

We present ImageBind-LLM, a multi-modality instruction-tuning method for large language models (LLMs) via ImageBind. Existing works mainly focus on language and image instruction tuning, different
Pan Lu (@lupantech)'s Twitter Profile Photo


🔥 Introducing #SPHINX 🦁: an all-in-one multimodal LLM with a unified interface that seamlessly integrates domains, tasks, & embeddings. 🧵N

👋 Explore the Gradio demo (AK): imagebind-llm.opengvlab.com

Dive into the open resources!
🤗 Model Hugging Face:
Victor.Kai Wang (@victorkaiwang1)'s Twitter Profile Photo

Generating ~200 million parameters in just minutes! 🥳 Excited to share our work with Doven Tang, ZHAO WANGBO, and Yang You: 'Recurrent Diffusion for Large-Scale Parameter Generation' (RPG for short). Example: obtain customized models using prompts (see below). (🧵1/8)