Xing Han Lu (@xhluca)'s Twitter Profile
Xing Han Lu

@xhluca

Tinkering with Conversational Web Agents @Mila_Quebec

ID:943571700746211328

Website: http://xinghanlu.com · Joined: 20-12-2017 19:59:58

1.5K Tweets

1.3K Followers

211 Following

Media attachment (animated GIF, pic.twitter.com/ObrjTpmyzU, from https://twitter.com/vaibhav_adlakha/status/1785406274273751315/photo/1):

Alt text: A plot showing the contribution of the different steps of the LLM2Vec recipe for Meta-Llama-3 across three pooling choices: EOS, mean, and weighted mean pooling. Compared to the native model with unidirectional attention (Uni), merely enabling bidirectional connections (Bi) leads to a drop in performance. After training with MNTP (Masked Next Token Prediction), the model outperforms Uni for mean and weighted mean pooling. Finally, training with SimCSE yields a strong unsupervised text encoder across all pooling methods.
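For readers unfamiliar with the three pooling choices compared in the plot, the following is a minimal NumPy sketch of what EOS, mean, and weighted mean pooling compute over a batch of token hidden states. Shapes, masking conventions, and the position-proportional weighting scheme are assumptions for illustration; the actual LLM2Vec implementation may differ.

```python
import numpy as np

def eos_pooling(hidden, mask):
    """Use the hidden state of the last non-padding token (e.g. EOS)."""
    last = mask.sum(axis=1).astype(int) - 1            # index of final real token
    return hidden[np.arange(hidden.shape[0]), last]

def mean_pooling(hidden, mask):
    """Average the hidden states over non-padding tokens."""
    m = mask[..., None]
    return (hidden * m).sum(axis=1) / m.sum(axis=1)

def weighted_mean_pooling(hidden, mask):
    """Weight later tokens more heavily (weight proportional to position),
    a common choice for causal models whose early tokens see less context."""
    pos = np.arange(1, hidden.shape[1] + 1)[None, :] * mask
    w = (pos / pos.sum(axis=1, keepdims=True))[..., None]
    return (hidden * w).sum(axis=1)

# Toy batch: 2 sequences of 4 tokens, hidden size 3; second sequence is padded.
hidden = np.random.default_rng(0).normal(size=(2, 4, 3))
mask = np.array([[1, 1, 1, 1], [1, 1, 0, 0]], dtype=float)

for pool in (eos_pooling, mean_pooling, weighted_mean_pooling):
    print(pool.__name__, pool(hidden, mask).shape)     # each yields a (2, 3) batch of embeddings
```

All three map a `(batch, seq_len, hidden)` tensor to one `(batch, hidden)` embedding per sequence; they differ only in which tokens contribute and with what weight.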