Tianfu (Matt) Wu (@viseyeon)'s Twitter Profile
Tianfu (Matt) Wu

@viseyeon

He is an associate professor in the Department of ECE at NCSU.

ID: 197323213

Link: https://tfwu.github.io/

Joined: 01-10-2010 05:42:54

57 Tweets

290 Followers

993 Following

Tianfu (Matt) Wu (@viseyeon):

3/n In our NEAT, we leverage our previous work on the self-supervised 2D wireframe parser (HAWPv3, github.com/cherubicXN/haw…) and the recently proposed VolSDF.

4/n We first propose a NEural Attraction (NEAT) Field representation that parameterizes 3D line segments with an MLP, enabling us to learn 3D line segments from 2D observations without incurring any explicit feature correspondences across views.
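The tweet does not give the architecture, so here is only a minimal sketch of the core idea: an MLP "field" that maps a 3D query point to the two endpoints of the line segment it is attracted to. All sizes and names (`NeatFieldSketch`, `hidden`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeatFieldSketch:
    """Toy MLP field: 3D query point -> 6 values (two 3D segment endpoints)."""

    def __init__(self, hidden=64):
        self.w1 = rng.standard_normal((3, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 6)) * 0.1
        self.b2 = np.zeros(6)

    def __call__(self, x):
        # x: (N, 3) query points -> (N, 6) = endpoints p1 | p2 per query
        h = np.maximum(x @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        return h @ self.w2 + self.b2

field = NeatFieldSketch()
queries = rng.standard_normal((5, 3))
segments = field(queries)  # (5, 6): a 3D line segment per query point
print(segments.shape)      # (5, 6)
```

In the actual method such a field would be trained jointly with the VolSDF-style volume rendering, using only 2D observations for supervision.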

5/n We then present a novel Global Junction Perceiving (GJP) module to perceive meaningful 3D junctions from the NEAT Fields of 3D line segments by optimizing a randomly initialized high-dimensional latent array and a lightweight decoding MLP.
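A minimal sketch of the GJP idea as described: a randomly initialized latent array decoded by a lightweight module into K candidate 3D junctions, both of which would be optimized against the NEAT fields. The sizes and the linear decoder are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

K, D = 16, 32                          # number of junctions, latent width
latents = rng.standard_normal((K, D))  # learnable high-dimensional latent array
w = rng.standard_normal((D, 3)) * 0.1  # lightweight decoder (here: linear)

junctions = latents @ w                # (K, 3) decoded 3D junction positions
print(junctions.shape)                 # (16, 3)
```

During optimization, gradients from a loss tying the decoded junctions to the 3D line-segment field would update both `latents` and the decoder weights.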

6/n Benefiting from our explicit modeling of 3D junctions, we finally compute the primal sketch of 3D wireframes by attracting the queried 3D line segments to the 3D junctions, significantly simplifying the computation paradigm of 3D wireframe parsing.
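The attraction step can be sketched as snapping each queried segment endpoint to its nearest perceived junction, so the final wireframe is junction-aligned. This nearest-neighbor snap is a simplifying assumption for illustration; the function name and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def attract_endpoints(segments, junctions):
    """Snap each segment endpoint to its nearest 3D junction.

    segments:  (N, 2, 3) endpoints of queried 3D line segments
    junctions: (K, 3)    perceived 3D junctions
    returns    (N, 2, 3) wireframe segments with junction-aligned endpoints
    """
    # Pairwise endpoint-to-junction distances: (N, 2, K)
    d = np.linalg.norm(segments[:, :, None, :] - junctions[None, None], axis=-1)
    nearest = d.argmin(axis=-1)   # (N, 2) index of nearest junction
    return junctions[nearest]

segments = rng.standard_normal((8, 2, 3))
junctions = rng.standard_normal((5, 3))
wireframe = attract_endpoints(segments, junctions)
print(wireframe.shape)  # (8, 2, 3)
```

Because every endpoint of the output lies exactly on a junction, segments sharing a junction meet at a common point, which is what yields the clean wireframe "primal sketch".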

Nan Xue (@nanxue7):

Excited about this challenge that aligns with our paper NEAT (github.com/cherubicXN/neat) on 3D wireframe reconstruction from multi-view images, powered by HAWPv3 (github.com/cherubicXN/hawp). Stay tuned for the camera-ready version and code updates!

Chinmay Savadikar (@savadikarc):

We obtain strong results on Instruction Tuning, performing slightly better than representation finetuning methods with the same parameter budget, and outperforming LoRA and full finetuning at a much lower parameter budget. (2/6)


Presenting WeGeFT at #ICML25 on 17th July with Tianfu (Matt) Wu, come say hi!

Paper: arxiv.org/abs/2312.00700
📍 East Exhibition Hall A-B, poster #1306
⌚️ 4:30pm - 7pm PDT
