Shenghao Yang (@shenghao_yang) 's Twitter Profile
Shenghao Yang

@shenghao_yang

PhD student @UWCheritonCS. Machine learning and optimization over graphs. opallab.ca

ID: 1361877766233337858

Link: https://cs.uwaterloo.ca/~s286yang
Joined: 17-02-2021 03:19:07

47 Tweets

170 Followers

71 Following

Hannes Stärk (@hannesstaerk) 's Twitter Profile Photo

New video with Prof. Kimon Fountoulakis explaining his paper "Graph Attention Retrospective" is now available! youtu.be/duWVNO8_sDM Check it out to learn what GATs can and cannot learn for node classification in a stochastic block model setting!

Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

Does it matter where you place the graph convolutions (GCs) in a deep network? How much better is a deep GCN vs an MLP? When are 2 or 3 GCs better than 1 GC? We answer these for node classification under a nonlinearly separable contextual stochastic block model. arxiv.org/pdf/2204.09297….

SIAM ACDA (@siam_acda) 's Twitter Profile Photo

SIAM Conference on Applied and Computational Discrete Algorithms (ACDA23), May 31 -- June 2, 2023. siam.org/conferences/cm… Important dates: Short Abstract and Submission Registration: Jan 9, 2023. Papers and Presentations-without-papers: Jan 16, 2023. #SIAMACDA23

SIAM ACDA (@siam_acda) 's Twitter Profile Photo

SIAM Conference on Applied and Computational Discrete Algorithms (ACDA23), May 31 – June 2, 2023, Seattle, Washington, U.S. New submission due dates: Registering a submission: Jan 16; Paper submission deadline: Jan 23.

Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

Alright, I have some important news (at least for me). There now exists an accelerated personalized PageRank method that is strongly local! Its running time does not depend on the size of the graph but only on the number of nonzeros. Details at uwspace.uwaterloo.ca/handle/10012/1…

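As a hedged illustration of what "strongly local" means here (the work is bounded by the nonzero entries the method touches, not by the graph size), below is a minimal sketch of the classic, non-accelerated push-style approximate personalized PageRank. It is my own toy version, not the accelerated method from the linked thesis.

```python
# Sketch of push-style approximate personalized PageRank (my own toy version,
# not the accelerated method from the thesis). Only nodes whose residual
# crosses the push threshold are ever touched, so the cost is "strongly local".
from collections import deque

def approximate_ppr(adj, seed, alpha=0.15, eps=1e-6):
    """adj: dict mapping node -> list of neighbours (undirected graph)."""
    p = {}                        # sparse approximate PageRank vector
    r = {seed: 1.0}               # residual probability mass, starts on the seed
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        deg_u = len(adj[u])
        if r.get(u, 0.0) < eps * deg_u:
            continue              # residual too small, nothing to push
        ru = r.pop(u)
        p[u] = p.get(u, 0.0) + alpha * ru
        share = (1.0 - alpha) * ru / deg_u
        for v in adj[u]:
            before = r.get(v, 0.0)
            r[v] = before + share
            # enqueue v only when its residual first crosses the push threshold
            if before < eps * len(adj[v]) <= r[v]:
                queue.append(v)
    return p

# toy usage: a 4-node path, seeded at node 0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(approximate_ppr(adj, seed=0, eps=1e-4))
```
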
Aseem Baranwal (@aseemrb) 's Twitter Profile Photo

Here's our new work on the optimality of message-passing architectures for node classification on sparse feature-decorated graphs! Thanks to my advisors and co-authors Kimon Fountoulakis and Aukosh Jagannath. Details within the quoted tweet.

Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

Graph Attention Retrospective is live at JMLR jmlr.org/papers/v24/22-…. The revised version has additional results: 1) Beyond perfect node classification, we provide a positive result on graph attention’s robustness against structural noise in the graph. In particular, our

SIAM (@thesiamnews) 's Twitter Profile Photo

The November issue of SIAM News is now available! In this month's edition, Nate Veldt finds that even a seemingly minor generalization of the standard #hypergraph cut penalty yields a rich space of theoretical questions and #complexity results. Check it out! sinews.siam.org/Details-Page/g…

Lenka Zdeborova (@zdeborova) 's Twitter Profile Photo

Emergence in LLMs is a mystery. Emergence in physics is linked to phase transitions. We identify a phase transition between semantic and positional learning in a toy model of dot-product attention. Very excited about this one! arxiv.org/pdf/2402.03902…

Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

Artur is at ICLR and he will present his joint work with Shenghao Yang on "Local Graph Clustering with Noisy Labels". Date: Friday 10th of May. Time: 4:30pm - 6:30pm CEST. Place: Halle B #175.

Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

Paper: Analysis of Corrected Graph Convolutions

We study the performance of a vanilla graph convolution from which we remove the principal eigenvector to avoid oversmoothing. 

1) We perform a spectral analysis for k rounds of corrected graph convolutions, and we provide results
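The tweet is cut off, but the operator it describes, an averaging convolution with the principal spectral component removed, can be sketched in a few lines. The version below is my own hedged toy interpretation, not the paper's exact construction.

```python
# Hedged sketch of a "corrected" graph convolution: project out the top
# eigenvector of the normalized adjacency before averaging, so repeated
# convolutions do not collapse all features onto that direction (oversmoothing).
# My own toy version, not the paper's operator.
import numpy as np

def corrected_graph_convolution(A, X, rounds=2):
    """A: (n, n) symmetric adjacency, X: (n, d) node features."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    A_norm = d_inv_sqrt @ A @ d_inv_sqrt            # normalized adjacency
    eigvals, eigvecs = np.linalg.eigh(A_norm)       # eigenvalues in ascending order
    v1 = eigvecs[:, -1:]                            # principal eigenvector
    A_corr = A_norm - eigvals[-1] * (v1 @ v1.T)     # remove the top spectral component
    for _ in range(rounds):                         # k rounds of corrected convolution
        X = A_corr @ X
    return X

# toy usage: a 6-node cycle with random 3-dimensional features
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
X = np.random.default_rng(0).normal(size=(n, 3))
print(corrected_graph_convolution(A, X, rounds=3).shape)
```
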
Artur (@backdeluca) 's Twitter Profile Photo

For those participating in the Complex Networks in Banking and Finance Workshop, I’ll be presenting our work on Local Graph Clustering with Noisy Labels tomorrow at 9:20 AM EDT at the Fields Institute. Hope to see you there :) arxiv.org/abs/2310.08031

Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

I wrote a blog post on Medium on "Random Data and Graph Neural Networks".

Link: medium.com/@kimon.fountou…

I cover a range of topics:
1. How a single averaging graph convolution changes the mean and variance of the data.
2. How it improves linear classification.
3. How multiple
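
Since the post's first point concerns how one averaging graph convolution shifts the mean and shrinks the variance of the data, here is a hedged, self-contained toy experiment (my own, not taken from the blog) that makes the effect visible.

```python
# Hedged illustration: an averaging graph convolution D^{-1} A X applied to
# Gaussian node features from two classes. Averaging over neighbours keeps the
# class means roughly in place while shrinking within-class variance, which is
# why a single convolution can already improve linear separability.
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 200, 2, 0.1                      # nodes per class, feature dim, edge prob.
labels = np.repeat([0, 1], n)
X = rng.normal(size=(2 * n, d)) + labels[:, None] * 2.0   # class means at 0 and 2

# random graph that connects nodes mostly within the same class
same = labels[:, None] == labels[None, :]
A = (rng.random((2 * n, 2 * n)) < np.where(same, p, p / 10)).astype(float)
A = np.triu(A, 1); A = A + A.T

deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
X_conv = (A @ X) / deg                     # one averaging graph convolution

for name, Z in [("raw features", X), ("after 1 convolution", X_conv)]:
    within_var = np.mean([Z[labels == c].var(axis=0).mean() for c in (0, 1)])
    print(f"{name}: within-class variance = {within_var:.3f}")
```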

Petar Veličković (@petarv_93) 's Twitter Profile Photo

"Energy continuously flows from being concentrated, to becoming dispersed, spread out, wasted and useless." ⚡➡️🌬️ Sharing our work on the inability of softmax in Transformers to _robustly_ learn sharp functions out-of-distribution. Together w/ Christos Perivolaropoulos Federico Barbero & Razvan!

"Energy continuously flows from being concentrated, to becoming dispersed, spread out, wasted and useless." ⚡➡️🌬️

Sharing our work on the inability of softmax in Transformers to _robustly_ learn sharp functions out-of-distribution.

Together w/ <a href="/cperivol_/">Christos Perivolaropoulos</a> <a href="/fedzbar/">Federico Barbero</a> &amp; Razvan!
Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning We propose calculating the attention weights in Transformers using only fixed positional encodings (referred to as positional attention). These positional encodings remain

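The tweet is truncated, but the core idea it states, computing attention weights from fixed positional encodings only while values still come from the input, can be sketched. The code below is my own hedged toy interpretation, not the paper's architecture.

```python
# Hedged sketch of positional attention as described in the tweet: queries and
# keys come only from fixed positional encodings, so the mixing pattern is
# input-independent; values still come from the tokens. My own toy version.
import numpy as np

def positional_attention_layer(X, P, Wq, Wk, Wv):
    """X: (n, d) token features, P: (n, dp) fixed positional encodings."""
    Q, K = P @ Wq, P @ Wk                            # queries/keys from positions only
    V = X @ Wv                                       # values from the input itself
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over positions
    return weights @ V

rng = np.random.default_rng(0)
n, d, dp, dh = 6, 4, 8, 4
X = rng.normal(size=(n, d))
P = np.stack([np.sin(np.arange(n) / 10 ** (k / dp)) for k in range(dp)], axis=1)
out = positional_attention_layer(
    X, P,
    rng.normal(size=(dp, dh)), rng.normal(size=(dp, dh)), rng.normal(size=(d, dh)),
)
print(out.shape)
```
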
Aseem Baranwal (@aseemrb) 's Twitter Profile Photo

My PhD thesis is now available on UWspace: uwspace.uwaterloo.ca/items/291d10bc…. Thanks to my advisors Kimon Fountoulakis and Aukosh Jagannath for their support throughout my PhD. We introduce a statistical perspective for node classification problems. Brief details are below.

Yuandong Tian (@tydsh) 's Twitter Profile Photo

Our new work Spectral Journey arxiv.org/abs/2502.08794 shows a surprising finding: when a 2-layer Transformer is trained to predict the shortest path in a given graph, 1️⃣ it first implicitly computes the spectral embedding for each edge, i.e. eigenvectors of the Normalized Graph
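
For readers who want the object the tweet names, here is a hedged minimal sketch (my own, not the paper's code) of a spectral embedding built from the normalized graph Laplacian, with an edge embedded via its two endpoints.

```python
# Hedged illustration of a spectral embedding: eigenvectors of the normalized
# graph Laplacian give node coordinates, and an edge (u, v) can be embedded by
# concatenating the embeddings of its endpoints. My own toy version.
import numpy as np

def spectral_embedding(A, k=2):
    """A: (n, n) symmetric adjacency; returns an (n, k) node embedding."""
    deg = np.maximum(A.sum(axis=1), 1e-12)
    d_inv_sqrt = np.diag(deg ** -0.5)
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)                # ascending eigenvalues
    return eigvecs[:, 1:k + 1]                          # skip the trivial eigenvector

# toy graph: two triangles joined by the edge (2, 3)
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
Z = spectral_embedding(A, k=2)
edge_embedding = np.concatenate([Z[2], Z[3]])           # embedding for edge (2, 3)
print(Z.shape, edge_embedding.shape)
```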

Kimon Fountoulakis (@kfountou) 's Twitter Profile Photo

Positional Attention is accepted at ICML 2025! Thanks to all co-authors for the hard work (64 pages). If you’d like to read the paper, check the quoted post. That's a comprehensive study on the expressivity for parallel algorithms, their in- and out-of-distribution learnability,

VITA Group (@vitagrouput) 's Twitter Profile Photo

🎉 Huge congratulations to PhD student Peihao Wang (@peihao_wang) on two major honors:
🏆 2025 Google PhD Fellowship in Machine Learning & ML Foundations
🌟 Stanford Rising Star in Data Science
Incredibly proud of Peihao's outstanding achievements! 🔶⚡
