Umar Ateeq (@umarateeq_)'s Twitter Profile
Umar Ateeq

@umarateeq_

Co-Founder Virid Future || NLP Engineer || Data Scientist

ID: 1275185630411137026

Link: https://www.linkedin.com/in/umarateeq
Joined: 22-06-2020 21:55:33

23 Tweets

32 Followers

100 Following

Sanyam Bhutani (@bhutanisanyam1)'s Twitter Profile Photo

LLM Agents Roadmap! 🙏

The most detailed roadmap capturing *all* of Large Language Model agents research

The table has some nice tags, start with the open source ones:

github.com/Paitesanshi/LL…
Sanyam Bhutani (@bhutanisanyam1)'s Twitter Profile Photo

AutoAgents: autonomously generate LLM agents for any goal! 🤖

It tries to remove the need for strong prompting and manual role definition by auto-generating agents.

The code is sparsely documented but readable: github.com/LinkSoul-AI/Au…

Umar Ateeq (@umarateeq_)'s Twitter Profile Photo

This is huge! Elon Musk's AI startup xAI just raised $6 billion at a $24B valuation to challenge OpenAI, in one of the largest funding rounds in history 🤯

The historic Series B round was led by Andreessen Horowitz and Sequoia Capital, among other top VCs.

AI is just getting started.

Massive.
Jason Weston (@jaseweston)'s Twitter Profile Photo

🚨 Contextual Position Encoding (CoPE) 🚨

Context matters! CoPE is a new positional encoding method for transformers that takes *context* into account.
- Can "count" distances per head dependent on need, e.g. i-th sentence or paragraph, words, verbs, etc. Not just tokens.
-
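The counting idea above can be sketched in a few lines. This is a minimal illustration based on the tweet's description, not the paper's reference code: each key gets a gate from its similarity to the query, and a key's "position" is the sum of gates between it and the query, so distances depend on content rather than raw token offsets (the fractional-position interpolation step of full CoPE is omitted).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cope_positions(q, keys):
    """Contextual positions of each key relative to one query (CoPE-style sketch).

    q: query vector; keys: key vectors at positions 0..i (causal).
    Gate g_j = sigmoid(q . k_j); the position of key j is the sum of
    gates from j up to the query, so only keys the gate "counts"
    increment the distance (e.g. gates could fire on sentence starts).
    """
    gates = [sigmoid(sum(a * b for a, b in zip(q, k))) for k in keys]
    # p_j = g_j + g_{j+1} + ... + g_i: cumulative sum taken from the right.
    positions, acc = [], 0.0
    for g in reversed(gates):
        acc += g
        positions.append(acc)
    positions.reverse()
    return positions
```

With a zero query every gate is 0.5, so positions fall off in steps of 0.5 instead of 1, which is exactly the sense in which CoPE counts something other than tokens.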
Umar Ateeq (@umarateeq_)'s Twitter Profile Photo

I just published my latest article where I implemented the "Attention Is All You Need" research paper using PyTorch. Dive into the world of Transformer models and discover how attention mechanisms work. Check it out for detailed code and insights! medium.com/@UmarAteeq/att…
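The core operation of that paper, scaled dot-product attention, can be sketched in dependency-free Python (an illustration of the mechanism, not the article's PyTorch implementation): scores are query-key dot products scaled by sqrt(d_k), softmaxed, and used to weight the values.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are lists of vectors (lists of floats); returns one output
    vector per query, each a weighted average of the value vectors.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

When all keys score equally, the softmax weights are uniform and each output is simply the mean of the values, which makes the "weighted average" interpretation easy to check by hand.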

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)'s Twitter Profile Photo

OpenVLA: An Open-Source Vision-Language-Action Model

abs: arxiv.org/abs/2406.09246
project page: openvla.github.io
code: github.com/openvla/openvla

Presents OpenVLA, a 7B param open-source vision-language-action model finetuned from Llama-2 combined with a visual encoder that
Umar Ateeq (@umarateeq_)'s Twitter Profile Photo

The world’s most powerful supercomputers would need 10 septillion years (longer than the age of the universe!) to complete certain calculations. But Google’s new Willow chip can finish them in under five minutes. Google continues to push the boundaries of innovation.