Rezwan (@rezwan249)'s Twitter Profile
Rezwan

@rezwan249

ECE Student at the University of Waterloo (UW), Canada.

ID: 3001537670

Link: https://rezwanh001.github.io/ · Joined: 29-01-2015 15:18:49

12 Tweets

43 Followers

318 Following

Ian Goodfellow (@goodfellow_ian)'s Twitter Profile Photo

Neural networks are notoriously hard to debug. augustus odena has developed a new debugging methodology by adapting traditional coverage-guided fuzzing techniques to neural networks.

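The core loop of coverage-guided fuzzing carries over to neural networks: mutate inputs from a corpus, and keep any mutant that triggers behavior not seen before. Below is a minimal sketch under stated assumptions — `tiny_model` is a toy fixed-weight ReLU layer and `coverage` (discretized activation buckets) is only a rough stand-in for the paper's actual coverage metric, not its implementation.

```python
import random

def tiny_model(x):
    # Toy "network": one fixed-weight ReLU layer over a 2-D input.
    w1 = [[0.5, -1.0], [1.5, 0.3], [-0.7, 0.9]]
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w1]

def coverage(acts, grid=0.5):
    # Discretize activations into buckets; a new bucket tuple counts
    # as new "coverage", loosely analogous to branch coverage in fuzzing.
    return tuple(int(a // grid) for a in acts)

def fuzz(seed, steps=200, rng=None):
    rng = rng or random.Random(0)
    corpus = [seed]
    seen = {coverage(tiny_model(seed))}
    for _ in range(steps):
        parent = rng.choice(corpus)
        child = [xi + rng.uniform(-0.2, 0.2) for xi in parent]
        cov = coverage(tiny_model(child))
        if cov not in seen:  # keep mutants that reach new behavior
            seen.add(cov)
            corpus.append(child)
    return corpus, seen
```

Inputs that exercise new activation patterns accumulate in the corpus, which is what lets the fuzzer probe rare behaviors a fixed test set would miss.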
Sara Hooker (@sarahookr)'s Twitter Profile Photo

What does a pruned deep neural network "forget"? Very excited to share our recent work w Aaron Courville, Yann Dauphin and @DreFrome weightpruningdamage.github.io

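As background for the question, here is a minimal sketch of magnitude pruning on a toy linear scorer. The names `prune_by_magnitude` and `score` are illustrative, not the paper's code; comparing `score` per example before and after pruning is the kind of per-example comparison the work examines.

```python
def prune_by_magnitude(weights, sparsity):
    # Zero out the smallest-magnitude fraction of weights.
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else None
    if threshold is None:
        return list(weights)
    pruned, removed = [], 0
    for w in weights:
        if removed < k and abs(w) <= threshold:
            pruned.append(0.0)
            removed += 1
        else:
            pruned.append(w)
    return pruned

def score(weights, example):
    # Toy linear "model": dot product of weights and features.
    return sum(w * x for w, x in zip(weights, example))
```

The finding motivating the paper is that such damage is not spread evenly: aggregate accuracy can survive heavy pruning while a small subset of examples absorbs most of the per-example score shift.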
Rezwan (@rezwan249)'s Twitter Profile Photo

A Novel Technique for Non-Invasive Measurement of Human Blood Component Levels From Fingertip Video Using DNN Based Models ieeexplore.ieee.org/document/93350…

Rezwan (@rezwan249)'s Twitter Profile Photo

Corrections to “A Novel Technique for Non-Invasive Measurement of Human Blood Component Levels From Fingertip… disq.us/t/3y4fhxr

Rezwan (@rezwan249)'s Twitter Profile Photo

Bangla Unicode Normalizer developed at Bengali.AI GitHub: github.com/mnansary/bnUni… PyPI: pypi.org/project/bnunic…
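The tweet doesn't show the package's API, so as a rough illustration of what Unicode normalization means for Bangla text (and not as the bnUnicodeNormalizer interface), Python's standard-library NFC normalization composes split vowel signs into their single canonical codepoints:

```python
import unicodedata

def normalize_bangla(text):
    # Canonical (NFC) normalization: composes split vowel signs, e.g.
    # U+09C7 + U+09BE into the single codepoint U+09CB (VOWEL SIGN O),
    # so visually identical strings compare equal byte-for-byte.
    return unicodedata.normalize("NFC", text)

decomposed = "\u0995\u09C7\u09BE"   # KA + E-sign + AA-sign (split form)
composed = normalize_bangla(decomposed)  # KA + O-sign, 2 codepoints
```

Dedicated Bangla normalizers go further than NFC (fixing invalid mark orderings, broken conjuncts, and typing-convention variants), but canonical composition is the baseline idea.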

Bengali.AI (@bengali_ai)'s Twitter Profile Photo

Grapheme parser for Indic languages. Available languages: Bangla, Malayalam, Tamil, Gujarati, Punjabi, Odia, Hindi. pypi.org/project/indicp…
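To show what "grapheme parsing" means for an Indic script, here is a naive sketch — not the package's API — that attaches dependent signs to their base character and merges consonant + virama + consonant conjuncts, using Bangla's virama (U+09CD) as the joiner:

```python
import unicodedata

VIRAMA = "\u09CD"  # Bangla hasanta; joins consonants into conjuncts

def graphemes(text):
    # Naive grapheme clustering: dependent marks (categories Mn/Mc)
    # stick to the preceding base; a trailing virama pulls in the
    # next consonant, so conjuncts stay in one cluster.
    clusters = []
    for ch in text:
        join = False
        if clusters:
            if unicodedata.category(ch) in ("Mn", "Mc"):
                join = True
            elif clusters[-1].endswith(VIRAMA):
                join = True
        if join:
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters
```

A production parser handles far more (ZWJ/ZWNJ, repha forms, per-language virama codepoints), but the cluster-building idea is the same.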

Jürgen Schmidhuber (@schmidhuberai)'s Twitter Profile Photo

2025 update: A Nobel Prize for Plagiarism (Technical Report IDSIA-24-24). Sadly, the 2024 Nobel Prize in Physics awarded to Hopfield & Hinton is effectively a prize for plagiarism. They republished foundational methodologies for artificial neural networks developed by Ivakhnenko,

elvis (@omarsar0)'s Twitter Profile Photo

Why does RL work for enhancing agentic reasoning? This paper studies what actually works when using RL to improve tool-using LLM agents, across three axes: data, algorithm, and reasoning mode. Instead of chasing bigger models or fancy algorithms, the authors find that real,

LangChain (@langchainai)'s Twitter Profile Photo

🤖deepagents: the open source, multi-model agent harness We're releasing 0.2 of deep agents, with a big addition: a "backend" abstraction This lets you swap the filesystem you use from a local filesystem to a remote VM to a database to anything blog: blog.langchain.com/doubling-down-…

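A hypothetical minimal version of such a "backend" abstraction (illustrative only, not deepagents' actual interface): the agent codes against a read/write contract, so the concrete store — local disk, a remote VM, a database — can be swapped at construction time.

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    # Hypothetical storage contract: the agent sees only read/write,
    # so the concrete store behind it is swappable.
    @abstractmethod
    def read(self, path: str) -> str: ...
    @abstractmethod
    def write(self, path: str, data: str) -> None: ...

class InMemoryBackend(Backend):
    # Dict-backed store, handy for tests and sandboxing.
    def __init__(self):
        self.files = {}
    def read(self, path):
        return self.files[path]
    def write(self, path, data):
        self.files[path] = data

class LocalFSBackend(Backend):
    # Same contract, backed by the real local filesystem.
    def read(self, path):
        with open(path) as f:
            return f.read()
    def write(self, path, data):
        with open(path, "w") as f:
            f.write(data)

def agent_step(backend: Backend):
    # The "agent" persists a note and reads it back via the abstraction.
    backend.write("notes.txt", "plan: step 1")
    return backend.read("notes.txt")
```

`agent_step(InMemoryBackend())` and `agent_step(LocalFSBackend())` run the same agent logic against different stores, which is the point of the abstraction.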
elvis (@omarsar0)'s Twitter Profile Photo

MIT researchers propose Recursive Language Models You are going to hear more on this in 2026. Why does it matter? What if LLMs could process inputs 100x longer than their context window? Context length is a hard constraint. You can extend it with architectural changes, but

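The recursive decomposition idea can be sketched as follows. This is a toy illustration under loud assumptions — `summarize` just truncates where a real system would call a model to compress each chunk — and it is not the MIT method itself, only the shape of the recursion: split an over-long input, compress the pieces, and recurse until the result fits the window.

```python
def fits(text, window=100):
    # Stand-in for a context-window check (characters, not tokens).
    return len(text) <= window

def summarize(chunk):
    # Stand-in for an LM call: here we just truncate; a real system
    # would prompt a model to compress the chunk.
    return chunk[:40]

def recursive_process(text, window=100):
    # If the input exceeds the window, split it, compress each half,
    # and recurse on the concatenated summaries.
    if fits(text, window):
        return text
    mid = len(text) // 2
    reduced = summarize(text[:mid]) + summarize(text[mid:])
    return recursive_process(reduced, window)
```

Because each level shrinks the text by a constant factor, inputs far larger than the window reduce to a window-sized representation in logarithmically many recursive calls.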
Ian Goodfellow (@goodfellow_ian)'s Twitter Profile Photo

This paper shows how to make adversarial examples with GANs. No need for a norm ball constraint. They look unperturbed to a human observer but break a model trained to resist large perturbations. arxiv.org/pdf/1805.07894…

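The "no norm-ball constraint" point can be illustrated with a toy latent-space search — everything here (`generator`, `classifier`, `latent_attack`) is a stub of my own, not the paper's method: because every candidate is a generator output, it stays on the data manifold by construction, rather than being a bounded perturbation of a real input.

```python
import random

def generator(z):
    # Stub generator: maps a 2-D latent code to a 2-D "image" point.
    return [z[0] * 0.9 + 0.1, z[1] * 0.9 - 0.1]

def classifier(x):
    # Stub classifier: class 1 iff x0 + x1 > 0.
    return 1 if x[0] + x[1] > 0 else 0

def latent_attack(z, steps=400, rng=None):
    # Hill-climb in latent space toward the decision boundary; each
    # candidate is a generator output, so no norm-ball is needed.
    rng = rng or random.Random(0)
    best = list(z)
    def margin(zz):
        x = generator(zz)
        return x[0] + x[1]
    for _ in range(steps):
        cand = [zi + rng.uniform(-0.1, 0.1) for zi in best]
        if margin(cand) > margin(best):
            best = cand
        if classifier(generator(best)) == 1:
            return best
    return best
```

In the paper the search is gradient-based and the classifier is an adversarially trained network; the sketch keeps only the structural idea of attacking through the generator.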