James Harrison (@jmes_harrison)'s Twitter Profile
James Harrison

@jmes_harrison

Cyberneticist @GoogleDeepMind

ID: 2818017140

Link: http://web.stanford.edu/~jh2 · Joined: 18-09-2014 21:45:55

81 Tweets

1.1K Followers

730 Following

Boris Ivanovic (@iamborisi)'s Twitter Profile Photo

Happy to share that our latest work on adaptive behavior prediction models with James Harrison (Google AI) and Marco Pavone (NVIDIA AI) has been accepted to #ICRA2023! 📜: arxiv.org/abs/2209.11820 We've also recently released the code and trained models at github.com/NVlabs/adaptiv…!!

James Harrison (@jmes_harrison)'s Twitter Profile Photo

Graph deep learning and bi-level RL seem to work exceptionally well for a whole bunch of critically important real-world problems like supply chain control. Plus, it easily combines with standard linear programming planners in OR. Check out Daniele Gammelli's thread for info!
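For a concrete picture of the pattern described above, here is a minimal sketch in JAX: a graph neural network scores the nodes of a supply-chain graph, and those learned scores would parameterize a downstream linear-programming planner. Everything here (the two-layer network, the toy warehouse graph, the feature sizes) is an illustrative assumption, not the code from the thread.

```python
# Minimal sketch (not the authors' code): a GNN scores nodes of a
# supply-chain graph; the scores would feed a downstream LP planner.
import jax
import jax.numpy as jnp

def gnn_layer(params, node_feats, adj):
    """One message-passing step: sum neighbor features, then a linear map."""
    messages = adj @ node_feats                      # aggregate over neighbors
    h = jnp.concatenate([node_feats, messages], -1)  # combine self + neighborhood
    return jax.nn.relu(h @ params["W"] + params["b"])

def score_nodes(params, node_feats, adj):
    """Stack two GNN layers and project to a scalar score per node."""
    h = gnn_layer(params["l1"], node_feats, adj)
    h = gnn_layer(params["l2"], h, adj)
    return (h @ params["out"]).squeeze(-1)

# Toy example: 4 warehouses with 3 features each.
key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
d = 3
params = {
    "l1": {"W": jax.random.normal(k1, (2 * d, 8)) * 0.1, "b": jnp.zeros(8)},
    "l2": {"W": jax.random.normal(k2, (16, 8)) * 0.1, "b": jnp.zeros(8)},
    "out": jax.random.normal(k3, (8, 1)) * 0.1,
}
feats = jax.random.normal(k4, (4, d))
adj = jnp.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], jnp.float32)

scores = score_nodes(params, feats, adj)
# In the bi-level setup, these learned scores would become coefficients of a
# standard LP (e.g., a min-cost flow solved by an off-the-shelf OR solver),
# whose solution is then executed in the environment.
print(scores)
```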

Daniele Gammelli (@danielegammelli)'s Twitter Profile Photo

Looking forward to getting started at #ICML! Happy to chat about RL, learning-based control, and Graph ML. Make sure to drop by our poster! (Wed 26 Jul 2 p.m. PDT)

Oscar Li (@oscarli101)'s Twitter Profile Photo

📝Quiz time: when you have an unrolled computation graph (see figure below), how would you compute the unrolling parameters' gradients? If your answer only contains Backprop, now it’s time to add a new method to your gradient estimation toolbox!

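One well-known answer beyond backprop is evolution strategies (ES), which estimate gradients from forward evaluations alone and so avoid differentiating through the unroll. Below is a minimal antithetic-ES sketch on a toy unrolled computation; it illustrates the general ES idea under assumed hyperparameters, not necessarily the specific estimator in the quiz thread.

```python
# Minimal sketch of one backprop alternative for unrolled graphs:
# antithetic evolution strategies (ES). Toy unroll and all
# hyperparameters are illustrative assumptions.
import jax
import jax.numpy as jnp

def unrolled_loss(theta, x0, steps=20):
    """Toy unrolled computation: repeatedly apply a parameterized map."""
    x = x0
    for _ in range(steps):
        x = jnp.tanh(theta * x + 0.1)
    return jnp.sum(x ** 2)

def es_grad(key, theta, x0, sigma=0.05, n_pairs=128):
    """Antithetic ES: probe +/- perturbations, weight them by the loss gap."""
    eps = jax.random.normal(key, (n_pairs,) + theta.shape) * sigma
    f_pos = jax.vmap(lambda e: unrolled_loss(theta + e, x0))(eps)
    f_neg = jax.vmap(lambda e: unrolled_loss(theta - e, x0))(eps)
    # estimator: mean over probes of (f(θ+ε) - f(θ-ε)) / (2σ²) · ε
    weights = (f_pos - f_neg) / (2 * sigma ** 2)
    return jnp.mean(weights[:, None] * eps.reshape(n_pairs, -1), 0).reshape(theta.shape)

theta = jnp.array(0.5)
x0 = jnp.ones(4)
g_bp = jax.grad(unrolled_loss)(theta, x0)         # backprop through the unroll
g_es = es_grad(jax.random.PRNGKey(0), theta, x0)  # derivative-free estimate
print(g_bp, g_es)  # the ES estimate should be close to the true gradient
```
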
James Harrison (@jmes_harrison)'s Twitter Profile Photo

A question we have been thinking about for a long time: what is the natural architecture for a learned optimizer? We now have an important part of the answer---we can automatically construct expressive optimizers based on optimizee network symmetries. Check out Allan's thread!
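As a point of reference for what "learned optimizer" means here: below is a minimal per-parameter learned-optimizer sketch in JAX, where a tiny shared MLP maps each parameter's gradient and momentum to an update. Such a per-parameter rule is trivially equivariant to permutations of the optimizee's parameters; the symmetry-derived architectures in Allan's thread generalize well beyond this. All details below are assumptions for illustration.

```python
# Minimal sketch of a per-parameter learned optimizer: a small shared MLP
# maps per-parameter features (gradient, momentum) to an update. Not the
# architecture from the thread; everything here is illustrative.
import jax
import jax.numpy as jnp

def lopt_update(meta_params, grad, momentum):
    """Apply a tiny shared MLP independently to each parameter's features."""
    feats = jnp.stack([grad, momentum], -1)  # (..., 2) features per parameter
    h = jax.nn.relu(feats @ meta_params["W1"] + meta_params["b1"])
    step = (h @ meta_params["W2"]).squeeze(-1)
    return 0.01 * step                       # small output scale for stability

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
meta_params = {
    "W1": jax.random.normal(k1, (2, 16)) * 0.1, "b1": jnp.zeros(16),
    "W2": jax.random.normal(k2, (16, 1)) * 0.1,
}

# One optimizee step on a toy quadratic loss.
theta = jnp.ones(5)
momentum = jnp.zeros(5)
grad = jax.grad(lambda t: jnp.sum(t ** 2))(theta)
momentum = 0.9 * momentum + grad
theta = theta - lopt_update(meta_params, grad, momentum)
# Meta-training would differentiate the optimizee's final loss with respect
# to meta_params across many unrolled steps (or estimate that gradient with ES).
```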

Gioele Zardini (@gioelezardini)'s Twitter Profile Photo

Going to IEEE ITSC'24? Check out our tutorial on Data-driven Methods for Network-level Coordination of AMoD Systems, organized with Daniele Gammelli, Luigi Tresca, Carolin Schmidt, James Harrison, Filipe Rodrigues, Maximilian Schiffer, and Marco Pavone: rl4amod-itsc24.github.io

Benjamin Thérien (@benjamintherien)'s Twitter Profile Photo

Are you still using hand-designed optimizers? Tomorrow morning, I’ll explain how we can meta-train learned optimizers that generalize to large unseen tasks! Don't miss my talk at OPT-2024, Sun 15 Dec 11:15-11:30 a.m. PST, West Ballroom A! x.com/benjamintherie…

rdyro (@rdyro128523)'s Twitter Profile Photo

DeepSeek R1 inference in pure JAX! Currently runs on TPU, with GPU support and distilled models in progress. Features MLA-style attention, expert/tensor parallelism, and int8 quantization. Contributions welcome!

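For readers curious about the int8 quantization mentioned above, here is a minimal sketch in pure JAX of one standard recipe: symmetric per-output-channel weight quantization, with dequantization inside the matmul. It is illustrative only and is not taken from the repo.

```python
# Minimal sketch of symmetric per-channel int8 weight quantization in JAX.
# Illustrative recipe, not the repo's implementation.
import jax
import jax.numpy as jnp

def quantize_int8(w):
    """Symmetric per-output-channel int8 quantization: w ≈ q * scale."""
    scale = jnp.max(jnp.abs(w), axis=0, keepdims=True) / 127.0
    q = jnp.clip(jnp.round(w / scale), -127, 127).astype(jnp.int8)
    return q, scale

def int8_matmul(x, q, scale):
    """Matmul against int8 weights, dequantizing on the fly."""
    return (x @ q.astype(jnp.float32)) * scale

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (64, 32))
x = jax.random.normal(jax.random.PRNGKey(1), (4, 64))

q, scale = quantize_int8(w)
err = jnp.max(jnp.abs(x @ w - int8_matmul(x, q, scale)))
print(err)  # quantization error should be small relative to the activations
```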