MahaswetaChakr7 (@mahaswetachakr7)'s Twitter Profile
MahaswetaChakr7

@mahaswetachakr7

PhDing in Computational Social Science @ UCD | Governing Open Source AI | Applied NLP

Scholar: scholar.google.com/citations?user…

ID: 1132425222873800705

Joined: 25-05-2019 23:16:27

46 Tweets

61 Followers

662 Following

Tom Hosking (@tomhosking)'s Twitter Profile Photo

"Human Feedback is not Gold Standard" was accepted at ICLR 2024 🥳 I'd love to chat about the limits of human feedback wrt LLM alignment (and about cohere) if you're going to be at the conference! 🇦🇹 Thanks again to Max Bartolo for making it an awesome internship experience ❤️

Zachary Lipton (@zacharylipton)'s Twitter Profile Photo

Alignment is now defined so broadly that all of AI, all of ML, and the entire history of technology is—and always has been—"alignment research".

Upol Ehsan (@upolehsan)'s Twitter Profile Photo

CS Theory prof during an on-site interview: so Upol, why do you think HCI is part of CS? Me: I'd argue you cannot have CS without HCI Him: How? Me: Name one computing system on this planet or even in space that works in a vacuum without human interactions Him: (ponders for a few

Swaroop Mishra (@swarooprm7)'s Twitter Profile Photo

Let's spread positivity. Don't want to see more Reflection posts in my timeline. Such high numbers on gsm8k (beyond annotation noise) and other benchmarks will always demand more scrutiny. Hopefully people learn to share model releases on social media more carefully now.

Eugene Yan (@eugeneyan)'s Twitter Profile Photo

every time you:
• say a model is "highly accurate" but have no evals
• finetune a decoder LLM for classification without trying BERT-style classifiers
• use only embeddings without trying text for retrieval, matching, dedup
• optimize for sota/complexity instead of solving

Rohit (@rohitrango)'s Twitter Profile Photo

one key to combat this problem is to look beyond the hype and not work on transient stuff as an academic. this is much easier said than done, and I'm guilty of chasing the hype train a bunch of times myself (it's so fun!) but to truly do research and not feel anxious about it, I

Valerio Capraro (@valeriocapraro)'s Twitter Profile Photo

Many people believe that AI advances will dramatically increase inequality. In a paper with two Nobel laureates, Daron Acemoglu and Simon Johnson, plus 30 multidisciplinary experts, we argue that it’s more complex than a simple “rich-get-richer” story. For example, we coined

iseeaswell꩜bʂky (@iseeaswell)'s Twitter Profile Photo

Google Translate now has Inuktut (Latin and Syllabics), Crimean Tatar (Latin script), Santali (Ol Chiki), Tshiluba, and French (Canada)!

Kate M (@_kate_morrison_)'s Twitter Profile Photo

hot take to debate🧐: "human-AI collaboration" studies are starting to seem more like "human supervision of AI" studies and less like "humans augmented by AI"

ASE 2024 (@ase_conf)'s Twitter Profile Photo

ASE 2024 has officially begun! 🎉 Dive into five days packed with inspiring sessions, hands-on workshops, and opportunities to connect with brilliant minds from around the world. Don’t miss out—join us for an unforgettable experience!

Karan Goel (@krandiash)'s Twitter Profile Photo

the transformer is good for big tech because it means the only way to win in AI is to have the biggest data center. the bitter lesson matters, but yelling about it with suboptimal asymptotics is bad… the incumbents aren't incentivized to change this, which is where we come in

Mark Riedl (@mark_riedl)'s Twitter Profile Photo

Open Source Initiative (OSI) says AI models aren’t “open source” unless data, weights, hyperparameters, and executable code to build and run the model are released

Naima Day 🌹 (@akbarjenkins)'s Twitter Profile Photo

It feels like US academia, esp. in the social sciences and humanities, but really in all fields of higher ed, seems to have no sense of what a catastrophe could be starting in about a week

Amy Diehl, Ph.D. (@amydiehl)'s Twitter Profile Photo

Study of 554 resumes & 571 job descriptions evaluated by AI (3 LLMs) finds they favored white names 85% of the time and female names only 11%. The AI preferred white men even for female-dominated roles, like HR workers. Lisa Stiffler h/t Jess Calarco geekwire.com/2024/ai-overwh…

Angelina Wang @angelinawang.bsky.social (@ang3linawang)'s Twitter Profile Photo

There are numerous leaderboards for AI capabilities and risks, for example fairness. In new work, we argue that leaderboards are misleading when the determination of concepts like "fairness" is always contextual. Instead, we should use benchmark suites.
