Amr Abdullatif (@__amrabdullatif) 's Twitter Profile
Amr Abdullatif

@__amrabdullatif

Assistant Professor in Computer Science @UniofBradford. Machine learning researcher @BHGECO. PhD @UniGenova. Researcher at @ScuolaSantAnna. Opinions are my own.

ID: 1102944776222052352

Link: https://www.linkedin.com/in/amrabdullatif/ · Joined: 05-03-2019 14:51:41

76 Tweets

33 Followers

188 Following

Yann LeCun (@ylecun) 's Twitter Profile Photo

By telling scientists they must publish, you get:
1. higher-quality research, more reliable results, less self-delusion
2. better scientists whose reputation will flourish
3. easier external collaborations
4. better research evaluation
5. better internal impact
6. prestige

hardmaru (@hardmaru) 's Twitter Profile Photo

New paper from IDSIA motivated by building an artificial scientist with World Models! A key idea is to get controller C to generate pure thought experiments in the form of *weight matrices* of RNNs that still surprise the world model M, and then to update M. arxiv.org/abs/2212.14374
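
In code, the loop the tweet describes might look roughly like the following minimal sketch: a controller proposes RNN weight matrices, a world model tries to predict their behavior, and each is trained against the other. All names, shapes, and update rules here are illustrative assumptions, not the paper's actual method.

```python
# Toy "artificial scientist" loop: controller C proposes experiments as RNN
# weight matrices; world model M predicts their outcome; C seeks surprise
# (M's prediction error) and M then absorbs it. Purely illustrative.
import torch
import torch.nn as nn

HID = 16  # hidden size of the proposed toy RNNs
controller = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, HID * HID))
world_model = nn.Sequential(nn.Linear(HID * HID, 64), nn.Tanh(), nn.Linear(64, HID))
opt_c = torch.optim.Adam(controller.parameters(), lr=1e-3)
opt_m = torch.optim.Adam(world_model.parameters(), lr=1e-3)

def run_experiment(w_flat: torch.Tensor) -> torch.Tensor:
    """'Run' the proposed RNN for a few steps; return its final hidden state."""
    W = w_flat.view(HID, HID)
    h = torch.zeros(HID)
    for _ in range(10):
        h = torch.tanh(W @ h + 0.1)  # fixed input drive, just for illustration
    return h

for step in range(200):
    z = torch.randn(8)                      # random "idea" seed for C
    w_flat = controller(z)                  # C emits RNN weights: the experiment
    outcome = run_experiment(w_flat).detach()

    # C is updated to *maximize* M's prediction error (i.e., to seek surprise).
    surprise = ((world_model(w_flat) - outcome) ** 2).mean()
    opt_c.zero_grad()
    (-surprise).backward()
    opt_c.step()

    # M is then trained on the observed outcome, absorbing the surprise.
    opt_m.zero_grad()
    ((world_model(w_flat.detach()) - outcome) ** 2).mean().backward()
    opt_m.step()
```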

OpenAI (@openai) 's Twitter Profile Photo

We’re developing a new tool to help distinguish between AI-written and human-written text. We’re releasing an initial version to collect feedback and hope to share improved methods in the future. openai.com/blog/new-ai-cl…

Siqi Chen (@blader) 's Twitter Profile Photo

okay so AI can literally read our minds now. a team from Osaka was able to reconstruct visual images from MRI scan data using Stable Diffusion. first row is the image presented to the test subject, second row is the reconstructed image from MRI data. wild.
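
The pipeline, as commonly described for this work, fits simple linear maps from fMRI voxels to Stable Diffusion's conditioning inputs. A hedged sketch follows; the arrays and dimensions are hypothetical stand-ins, and none of this is the authors' code.

```python
# Sketch: ridge-regress fMRI voxel responses onto (a) the diffusion image
# latent and (b) text-conditioning embeddings, then feed both into a
# Stable Diffusion denoising run. Shapes and data are made up for illustration.
import numpy as np
from sklearn.linear_model import Ridge

X = np.random.randn(200, 3000)         # fMRI responses: (n_images, n_voxels)
Z = np.random.randn(200, 4 * 64 * 64)  # image latents per training image
C = np.random.randn(200, 77 * 768)     # CLIP-style text embeddings per image

to_latent = Ridge(alpha=100.0).fit(X, Z)  # early visual cortex -> appearance
to_cond = Ridge(alpha=100.0).fit(X, C)    # higher visual areas -> semantics

x_test = np.random.randn(1, 3000)         # held-out scan
z0 = to_latent.predict(x_test).reshape(1, 4, 64, 64)
cond = to_cond.predict(x_test).reshape(1, 77, 768)
# z0 and cond would then seed a diffusion pipeline, e.g. diffusers'
# StableDiffusionPipeline via its `latents` and `prompt_embeds` arguments.
```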

Yann LeCun (@ylecun) 's Twitter Profile Photo

ConvNets are a decent model of how the ventral pathway of the human visual cortex works. But LLMs don't seem to be a good model of how humans process language. There is longer-term prediction taking place in the brain. Awesome work by the Brain-AI group at FAIR-Paris.

OpenAI (@openai) 's Twitter Profile Photo

Announcing GPT-4, a large multimodal model, with our best-ever results on capabilities and alignment: openai.com/product/gpt-4

Omar Sanseviero (@osanseviero) 's Twitter Profile Photo

Very excited to share some personal news! Jonathan Whitaker, Pedro Cuenca, apolinario 🌐, and I are writing a book with @oreilly about generative ML 🤗 We'll cover many topics, from theory to practical aspects, discuss creative applications, and more! What topics would you like to see?

François Chollet (@fchollet) 's Twitter Profile Photo

TensorFlow 2.12 and Keras 2.12 were released yesterday. Check out the release notes: github.com/tensorflow/ten… Many improvements in Keras; in particular, our new native saving format and the new FeatureSpace all-in-one structured-data preprocessing utility.
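
A quick taste of the two features called out above, based on the public Keras 2.12 docs (the toy columns and shapes are invented for illustration):

```python
# The new native format is selected by the .keras extension; FeatureSpace
# bundles structured-data preprocessing (normalization, vocabulary lookup,
# concatenation) into a single adaptable object.
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.save("model.keras")                       # new native saving format
restored = keras.models.load_model("model.keras")

feature_space = keras.utils.FeatureSpace(
    features={
        "age": keras.utils.FeatureSpace.float_normalized(),
        "job": keras.utils.FeatureSpace.string_categorical(),
    },
    output_mode="concat",
)
ds = tf.data.Dataset.from_tensor_slices(
    {"age": [25.0, 40.0, 31.0], "job": ["eng", "doc", "eng"]}
).batch(3)
feature_space.adapt(ds)                         # learn stats and vocabularies
encoded = feature_space({"age": tf.constant([33.0]), "job": tf.constant(["eng"])})
```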

Yann LeCun (@ylecun) 's Twitter Profile Photo

1970s: Let's scare the heck out of people about nuclear energy, so that instead of zero-emission power plants we'll use lung-darkening, climate-warming coal and oil plants, killing millions in the process. 2020s: Let's scare the heck out of people about AI, so that instead of

François Chollet (@fchollet) 's Twitter Profile Photo

We're launching Keras Core, a new library that brings the Keras API to JAX and PyTorch in addition to TensorFlow. It enables you to write cross-framework deep learning components and to benefit from the best that each framework has to offer. Read more: keras.io/keras_core/ann…
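
In practice the cross-framework idea looks like this minimal sketch from the announcement era (keras_core later became Keras 3; backend names per its docs):

```python
# Pick a backend before importing; the identical model code then runs on
# TensorFlow, JAX, or PyTorch.
import os
os.environ["KERAS_BACKEND"] = "jax"   # or "tensorflow" / "torch"

import keras_core as keras
import numpy as np

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
x, y = np.random.rand(64, 8), np.random.rand(64, 1)
model.fit(x, y, epochs=1, verbose=0)  # same code, any backend
```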

François Chollet (@fchollet) 's Twitter Profile Photo

Yes, in fact, we already have YOLOv8 (pretrained, fine-tunable, and trainable) running on TF, JAX, and PyTorch today. Try it out in KerasCV!
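
"Try it out" amounts to roughly the following, going by the KerasCV detection guide of the time (preset name taken from those docs; the API may have shifted since):

```python
# Load a pretrained YOLOv8 detector from a KerasCV preset and run inference.
import numpy as np
import keras_cv

model = keras_cv.models.YOLOV8Detector.from_preset(
    "yolo_v8_m_pascalvoc", bounding_box_format="xywh"
)
images = np.random.uniform(0, 255, size=(1, 512, 512, 3)).astype("float32")
preds = model.predict(images)  # dict with "boxes", "classes", "confidence"
```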

DeepSpeed (@deepspeedai) 's Twitter Profile Photo

DeepSpeed v0.10.0 release! Includes our ZeRO++ release, H100 support, and many bug fixes/updates. Special thanks to our wonderful community of contributors!

ZeRO++ paper: arxiv.org/pdf/2306.10209…
ZeRO++ blog: microsoft.com/en-us/research…
v0.10.0 details: github.com/microsoft/Deep…
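
For context, enabling ZeRO++ is a config change on top of ZeRO stage 3; the flag names below follow the ZeRO++ tutorial, but verify them against your DeepSpeed version:

```python
# Sketch of a DeepSpeed config dict with the three ZeRO++ optimizations.
ds_config = {
    "train_batch_size": 32,
    "zero_optimization": {
        "stage": 3,
        "zero_quantized_weights": True,    # qwZ: quantize weight communication
        "zero_hpz_partition_size": 8,      # hpZ: secondary partition per node
        "zero_quantized_gradients": True,  # qgZ: quantize gradient communication
    },
}
# Typically passed to deepspeed.initialize(model=..., config=ds_config).
```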

Soumith Chintala (@soumithchintala) 's Twitter Profile Photo

No More GIL! the Python team has officially accepted the proposal. Congrats Sam Gross on his multi-year brilliant effort to remove the GIL, and a heartfelt thanks to the Python Steering Council and Core team for a thoughtful plan to make this a reality. discuss.python.org/t/a-steering-c…
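
To see why this matters, consider plain CPU-bound threads: today they time-slice on one core because of the GIL, while under the accepted proposal's optional free-threaded build they can run truly in parallel.

```python
# CPU-bound threads: with the GIL these run one at a time; without it
# (PEP 703's free-threaded build) they can use multiple cores.
import threading

def count(n: int) -> None:
    while n:          # pure-Python work; never releases the GIL via I/O
        n -= 1

threads = [threading.Thread(target=count, args=(10_000_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```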

Surya Ganguli (@suryaganguli) 's Twitter Profile Photo

1/ Our paper in Neuron, "Interpreting the retinal code for natural scenes," develops explainable AI (#XAI) to derive a SOTA deep network model of the retina and *understand* how this net captures natural scenes, plus 8 seminal experiments over >2 decades: sciencedirect.com/science/articl…

Yann LeCun (@ylecun) 's Twitter Profile Photo

Very interesting paper: using generative AI to produce text or images emits 3 to 4 orders of magnitude *less* CO2 than doing it manually or with the help of a computer. arxiv.org/abs/2303.06219

Geoffrey Hinton (@geoffreyhinton) 's Twitter Profile Photo

New paper: managing-ai-risks.com

Companies are planning to train models with 100x more computation than today’s state of the art, within 18 months. No one knows how powerful they will be. And there’s essentially no regulation on what they’ll be able to do with these models.

Brian Roemmele (@brianroemmele) 's Twitter Profile Photo

This is perhaps one of the most important charts on AI for 2024. It was built by the amazing research team at Cathie Wood’s ARK Invest. We can see that open-source local models are on a path to overtake massive (and expensive) cloud-based closed models.

Alvaro Cintas (@dr_cintas) 's Twitter Profile Photo

What a wild week in AI 🤯

- Google AI Agents
- Meta Llama 4 models
- AI 2027 forecast report
- Amazon AI Voice model
- Gemini 2.5 Deep Research
- ChatGPT memory upgrade
- Firebase Studio rivals Cursor
- Nvidia/Stanford 1-min AI cartoons

Here’s everything you need to know:

VraserX e/acc (@vraserx) 's Twitter Profile Photo

A 7 million parameter model from Samsung just outperformed DeepSeek-R1, Gemini 2.5 Pro, and o3-mini on reasoning benchmarks like ARC-AGI. Let that sink in. It’s 10,000x smaller yet smarter. The secret is recursion. Instead of brute-forcing answers like giant LLMs, it drafts a
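
The tweet is cut off mid-sentence, but the draft-then-refine recursion it gestures at can be sketched as follows. Every name and shape here is a hypothetical illustration of the general idea, not the Samsung model's actual code:

```python
# A tiny network keeps an answer draft plus a latent scratchpad and
# repeatedly refines both; extra "thinking" comes from more recursion
# steps, not more parameters.
import torch
import torch.nn as nn

D = 64
refine_latent = nn.Linear(3 * D, D)  # (question, answer, latent) -> latent
refine_answer = nn.Linear(2 * D, D)  # (answer, latent) -> improved answer

def recursive_reason(question: torch.Tensor, steps: int = 6) -> torch.Tensor:
    answer = torch.zeros(D)          # initial draft
    latent = torch.zeros(D)          # reasoning scratchpad
    for _ in range(steps):
        latent = torch.tanh(refine_latent(torch.cat([question, answer, latent])))
        answer = torch.tanh(refine_answer(torch.cat([answer, latent])))
    return answer

out = recursive_reason(torch.randn(D))
```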
