Maithri (@maithrivm) 's Twitter Profile
Maithri

@maithrivm

ID: 2440149387

Joined: 27-03-2014 15:34:01

1.1K Tweets

227 Followers

267 Following

Taskade (@taskade) 's Twitter Profile Photo

🚀 Multi-AI Agents are now live! Craft your AI team: one agent researches, another executes tasks. They write, summarize, and edit—think of it as your mini-me doubling your productivity! 🤖 What would you build? Reply with your AI Agent idea for a chance to win free SWAG! ✨🐑

Hamel Husain (@hamelhusain) 's Twitter Profile Photo

For folks looking to optimize their RAG using simple, proven approaches, check out these talks, notes, and slides we’ve made available for free from our course, thanks to Jo Kristian Bergum, Ben Clavié, and jason liu (more coming soon): parlance-labs.com/education/rag/

Jerry Liu (@jerryjliu0) 's Twitter Profile Photo

I thought a bit about what a “generally good” RAG pipeline looks like over complex documents (e.g. a PDF with tables/diagrams/weird layouts)

1. Parse document into a document graph of embedded objects (e.g. with LlamaParse)
2. Extract one or more text representations for each
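The two steps above can be sketched in plain Python. This is a hypothetical sketch only: the `DocNode` type and `text_representations` helper are stand-ins for what a parser like LlamaParse produces, not its real API.

```python
from dataclasses import dataclass, field

# Hypothetical node type standing in for a parsed document graph
# (step 1); a real parser's output model will differ.
@dataclass
class DocNode:
    kind: str                    # "text", "table", or "figure"
    raw: str                     # raw parsed content
    children: list = field(default_factory=list)

def text_representations(node: DocNode) -> list[str]:
    """Step 2: one or more text representations per embedded object."""
    reps = [node.raw]
    if node.kind == "table":
        # In practice this extra representation would be an LLM summary.
        reps.append("summary: " + node.raw[:50])
    return reps

def flatten(node: DocNode):
    yield node
    for child in node.children:
        yield from flatten(child)

# Tiny document graph: a text node with one embedded table.
doc = DocNode("text", "Quarterly report", [DocNode("table", "Q1,100\nQ2,120")])
index = {id(n): text_representations(n) for n in flatten(doc)}
```

Each node then contributes one or more retrievable text units, so tables and figures are searchable alongside plain paragraphs.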
Adam Butler (@gestaltu) 's Twitter Profile Photo

Jerry Liu Agree. Document becomes structured lists of pages consisting of header, paragraph, table, figure objects. Each table and figure has an associated description drawn from the surrounding context. When building the retriever, parse paragraphs and descriptions into propositions
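The page structure described above can be sketched as follows. All names here are illustrative; real proposition extraction would be done by an LLM rather than the naive sentence split used in this toy version.

```python
from dataclasses import dataclass

@dataclass
class Block:
    kind: str              # "header", "paragraph", "table", or "figure"
    content: str
    description: str = ""  # for tables/figures: drawn from surrounding context

def to_propositions(block: Block) -> list[str]:
    """Naive stand-in for LLM proposition extraction: split on sentences.
    Tables and figures contribute their contextual description, not raw data."""
    text = block.description if block.kind in ("table", "figure") else block.content
    return [s.strip() for s in text.split(".") if s.strip()]

# A page as a structured list of typed blocks.
page = [
    Block("header", "Results"),
    Block("paragraph", "Revenue grew 12%. Margins held steady."),
    Block("table", "Q1|100\nQ2|112", description="Quarterly revenue in USD millions."),
]

retriever_units = [p for block in page for p in to_propositions(block)]
```

Indexing descriptions instead of raw table cells is the key move: the retriever matches against prose, while the original table stays attached for the generator.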

Sebastian Raschka (@rasbt) 's Twitter Profile Photo

If you are looking for something to read this weekend, I am happy to share that Chapter 7 on instruction finetuning LLMs is now finally live on the Manning website: manning.com/books/build-a-…

This is the longest chapter in the book and takes a from-scratch approach to implementing
Sebastian Raschka (@rasbt) 's Twitter Profile Photo

Was positively surprised by Mistral AI's Mathstral release yesterday! I couldn't help but give it a try. So, I just ported it to LitGPT, and it works really well based on my first impressions. Overall, it's a case study for small to medium-sized specialized LLMs!

Tony Wu (@tonywu_71) 's Twitter Profile Photo

Join Manuel Faysse and me for the LlamaIndex Webinar this Friday, July 26th, at 9 AM PT! We'll be discussing our latest model for document retrieval: ColPali. Don't miss out! 👋🏼

Fernando Cao (@thefernandocz) 's Twitter Profile Photo

This man can predict the future.

He was an early investor in Uber, Twitter, and Notion.

And he just said "In 50 years, everyone will be working for themselves."

Naval Ravikant's 5 predictions on the future of wealth creation (and why you should care):
Jo Kristian Bergum (@jobergum) 's Twitter Profile Photo

IR and NLP benchmarks are clean preprocessed text only. 

The problem is that real-world IR use cases don't have the luxury of preprocessed clean text data. 

Vision-LLMs offer a refreshing alternative that I strongly believe will shape the future of real-world IR. 

What You See
Akshay 🚀 (@akshay_pachaar) 's Twitter Profile Photo

Stanford CS229: Building Large Language Models

This 1.5-hour lecture provides a concise overview of building a ChatGPT-like model, covering both pretraining (language modeling) and post-training (SFT/RLHF).

For each component, it explores common practices in data collection,
dex (@dexhorthy) 's Twitter Profile Photo

may have hit a nerve here...it all started with trying to understand how production AI systems actually work

FOLKS - I've tried every agent framework out there and talked to many strong founders building impressive things with AI

BUT I was surprised to find that most successful
Jerry Liu (@jerryjliu0) 's Twitter Profile Photo

Sonnet 4.0 is cracked at document understanding.

With our latest update, we’ve built a Sonnet 4.0-powered agent to help convert the most complex docs into markdown, detect layouts and tables/images. The agent loop helps prevent hallucinations and join tables across multiple
jason liu - vacation mode (@jxnlco) 's Twitter Profile Photo

Why your RAG system is failing despite "great" embedding scores I just watched Kelly Hong from Chroma present their research on generative benchmarking, and it's a wake-up call for anyone building retrieval systems. The uncomfortable truth: your embedding model might be

jason liu - vacation mode (@jxnlco) 's Twitter Profile Photo

When building RAG systems, think like a data scientist not an engineer. Extract patterns, identify value, then build the simplest thing that works. I've seen teams waste months on complex architectures when the real solution was just better metadata filters.
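The "simplest thing that works" can be as small as filtering on structured fields before any similarity search. A minimal sketch, with illustrative field names and term overlap standing in for embedding similarity:

```python
# Toy corpus: each document carries structured metadata alongside its text.
docs = [
    {"text": "2023 annual report", "year": 2023, "type": "report"},
    {"text": "2024 blog post",     "year": 2024, "type": "blog"},
    {"text": "2024 annual report", "year": 2024, "type": "report"},
]

def filtered_search(docs, query_terms, **filters):
    """Narrow candidates by exact metadata match, then rank the survivors."""
    candidates = [d for d in docs
                  if all(d.get(k) == v for k, v in filters.items())]
    # Stand-in for embedding similarity: simple term overlap.
    return sorted(candidates,
                  key=lambda d: sum(t in d["text"] for t in query_terms),
                  reverse=True)

hits = filtered_search(docs, ["annual"], year=2024, type="report")
```

Because the metadata filter runs first, the similarity step only has to rank a handful of already-relevant candidates instead of the whole corpus.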