Lakshya A Agrawal (@lakshyaaagrawal)'s Twitter Profile
Lakshya A Agrawal

@lakshyaaagrawal

AI PhD @ UC Berkeley | Past: AI4Code Research Fellow @MSFTResearch | Summer @EPFL | Maintainer of aka.ms/multilspy | Hobbyist Saxophonist

ID: 2256125125

https://lakshyaaagrawal.github.io · Joined 21-12-2013 07:13:00

620 Tweets

475 Followers

1.1K Following

DSPy (@dspyoss) 's Twitter Profile Photo

Our latest optimizer GEPA writes beautiful prompts, even with a “mini” model. Stay tuned for a lot more over the coming days.

arti (@claudeusmaximus) 's Twitter Profile Photo

Matt Pocock muzz Omar Khattab Storm DSPy One of the cool prompt-optimization results I saw around GEPA: they found that a prompt instructing the model that it was a StarCraft player outperformed any other specialized prompt on math Olympiad problems.

Kùzu (@kuzudb) 's Twitter Profile Photo

📣 Our next tutorial in our marimo series is out! In this one, we showcase how to use DSPy to build a composable Graph RAG pipeline that does the following:
1. Schema pruning
2. Text2Cypher
3. Answer generation
Chat with your graphs in Kùzu! youtube.com/watch?v=2aepn9…
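
For readers who want a feel for what such a pipeline looks like, here is a minimal DSPy sketch (not the tutorial's actual code): the three stages become three signatures composed in one module, and the Kùzu-facing helpers get_full_schema and run_cypher are hypothetical placeholders you would wire to your own graph.

    import dspy

    class PruneSchema(dspy.Signature):
        """Keep only the node/relationship tables relevant to the question."""
        question: str = dspy.InputField()
        full_schema: str = dspy.InputField()
        pruned_schema: str = dspy.OutputField()

    class Text2Cypher(dspy.Signature):
        """Write a Cypher query answering the question over the pruned schema."""
        question: str = dspy.InputField()
        pruned_schema: str = dspy.InputField()
        cypher: str = dspy.OutputField()

    class GenerateAnswer(dspy.Signature):
        """Answer the question from the rows returned by the query."""
        question: str = dspy.InputField()
        query_results: str = dspy.InputField()
        answer: str = dspy.OutputField()

    class GraphRAG(dspy.Module):
        def __init__(self, get_full_schema, run_cypher):
            super().__init__()
            self.get_full_schema = get_full_schema  # placeholder: returns the graph schema as text
            self.run_cypher = run_cypher            # placeholder: runs Cypher against Kùzu, returns rows
            self.prune = dspy.Predict(PruneSchema)
            self.to_cypher = dspy.ChainOfThought(Text2Cypher)
            self.respond = dspy.Predict(GenerateAnswer)

        def forward(self, question):
            schema = self.prune(question=question, full_schema=self.get_full_schema()).pruned_schema
            cypher = self.to_cypher(question=question, pruned_schema=schema).cypher
            rows = self.run_cypher(cypher)
            return self.respond(question=question, query_results=str(rows))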

Michel (@mike_pavlukhin) 's Twitter Profile Photo

It's time to VIBE your DSPy. Now you can:
- transform your prompt ideas into Signatures
- refine them with your feedback
- use them dynamically after generation
- even optimise them with MIPRO or GEPA
Source and examples of usage below 🧵
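
As a rough, hand-written illustration of the workflow the thread describes (not the tool's actual output): a prompt idea pinned down as a Signature, ready to be refined and later compiled with MIPROv2 or GEPA. The signature, metric, and dataset names below are made up for the sketch.

    import dspy

    class SummarizeTicket(dspy.Signature):
        """Summarize a support ticket into a one-sentence triage note."""
        ticket: str = dspy.InputField(desc="raw customer message")
        triage_note: str = dspy.OutputField(desc="one sentence, actionable")

    summarizer = dspy.Predict(SummarizeTicket)

    def concise_metric(example, pred, trace=None):
        # Toy metric for the sketch: reward short, non-empty triage notes.
        note = pred.triage_note.strip()
        return float(bool(note) and len(note.split()) <= 25)

    # With a trainset of dspy.Example(ticket=..., triage_note=...).with_inputs("ticket"),
    # the same module can then be optimised, e.g.:
    # optimized = dspy.MIPROv2(metric=concise_metric, auto="light").compile(summarizer, trainset=trainset)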

Michel (@mike_pavlukhin) 's Twitter Profile Photo

It can also be optimised with MIPRO or GEPA, but I haven't tried that yet. Maybe we can collect a dataset of awesome DSPy signatures to try it. What do you think? DSPy Omar Khattab Tom Dörr

Drew Breunig (@dbreunig) 's Twitter Profile Photo

I got around to kicking the tires on GEPA prompt optimization in DSPy, seeing if it could match the reported GSM8K benchmark for Qwen3-4b-thinking. Started with the simplest signature: qa_bot = dspy.Predict('question -> answer'). GEPA got it from 67.2% to 92.8%.
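
For reference, a rough reconstruction of that setup (not Drew's actual script): the one-line signature plus a GEPA compile call, following the optimizer's documented interface. The model identifiers, the auto/reflection_lm settings, and the GSM8K train/val split are assumptions; GEPA's metric can return textual feedback alongside the score, which its reflection step uses.

    import dspy

    # Assumed local model id; point this at however you serve Qwen3-4b-thinking.
    dspy.configure(lm=dspy.LM("ollama_chat/qwen3:4b"))

    qa_bot = dspy.Predict("question -> answer")

    def gsm8k_metric(gold, pred, trace=None, pred_name=None, pred_trace=None):
        # Exact-match score plus feedback text for GEPA's reflection step.
        correct = gold.answer.strip() == pred.answer.strip()
        feedback = "Correct." if correct else f"Expected {gold.answer}, got {pred.answer}."
        return dspy.Prediction(score=float(correct), feedback=feedback)

    # trainset/valset: lists of dspy.Example(question=..., answer=...).with_inputs("question")
    # built from GSM8K (omitted here).
    # gepa = dspy.GEPA(metric=gsm8k_metric, auto="light",
    #                  reflection_lm=dspy.LM("openai/gpt-4.1-mini"))
    # optimized_qa_bot = gepa.compile(qa_bot, trainset=trainset, valset=valset)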

Shashikant Jagtap🏴󠁧󠁢󠁥󠁮󠁧󠁿 (@shashikant86) 's Twitter Profile Photo

Drew Breunig Performed some experiments last week using GEPA with DSPy on llama3.1:8b and Qwen3:8b, plus locally hosted gpt-oss:20b/120b on an M4 Max with 128GB RAM. Source: github.com/SuperagenticAI… Blog: super-agentic.ai/resources/supe… Code is open source but uses the SuperOptiX (proprietary) framework.

Kevin Madura (@kmad) 's Twitter Profile Photo

Sudhir Gajre Drew Breunig DSPy Not as simple an example as Drew's, but the docs show how to use it below. Welcome to the world of DSPy! dspy.ai/tutorials/gepa…

Deep Learning Weekly (@dl_weekly) 's Twitter Profile Photo

🤖 From this week's issue: a paper on GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning arxiv.org/abs/2507.19457

Gabriel Lespérance (@gablesperance) 's Twitter Profile Photo

3 | Your choice of program structure is a moat. I've previously posted about this (x.com/GabLesperance/…), but meaningful, defensible advantage comes from your unique composition of models and logic, the proprietary datasets that fuel them, and the continuous optimizations that refine them.

Gabriel Lespérance (@gablesperance) 's Twitter Profile Photo

4 | Startups should ignore FT/RL in favour of LM-driven prompt optimization (unless proven otherwise). Prompt optimization provides a faster-learning, higher-accuracy, cheaper, and more data-efficient solution. Don't believe the hype; focus on how fast you can iterate per $ and FTE spent.
