Charlie Lidbury (@charlielidbury) 's Twitter Profile
Charlie Lidbury

@charlielidbury

Building symbolica.ai/agentica, writing charlielidbury.substack.com

ID: 1788289900057427968

Joined: 08-05-2024 19:28:43

43 Tweets

63 Followers

151 Following

alphaXiv (@askalphaxiv):

2026 is the year of continual learning, and we are getting some amazing papers towards that.

This paper introduces Self-Distillation Fine-Tuning (SDFT): on-policy continual learning from expert demonstrations, with no explicit reward inference or engineering.

The trick here is:
Peter Steinberger (@steipete):

I'm joining OpenAI to bring agents to everyone. @OpenClaw is becoming a foundation: open, independent, and just getting started.🦞 steipete.me/posts/2026/ope…

Charlie Lidbury (@charlielidbury):

Such a great use case! I’ve been doing the same, but for analysing benchmark traces to get a swarm of agents to summarise “what difficulties did the agents encounter with their execution environment?” In my case the agent had to hierarchically merge the reports because all 120

Charlie Lidbury (@charlielidbury):

See symbolica.ai/agentica for an open-source, model-agnostic version of this, where you can inject your own Python objects and modules into the REPL (and it’s done securely with wasm, not heavyweight containers).

Taelin (@victortaelin):

Continual learning, only continual learning, and nothing other than continual learning, is what's missing right now. I couldn't care less about saturating benchmarks; getting +3% on SWE-Bench or whatever will not make these tools much better than they are, for as long as they