Marco Sobrevilla (@msobrevillac) 's Twitter Profile
Marco Sobrevilla

@msobrevillac

I like talking about NLP, Politics, Education and Football :)
#nlproc

ID: 381758069

Joined: 28-09-2011 21:57:25

1.1K Tweets

431 Followers

1.1K Following

Andrew Trask (@iamtrask) 's Twitter Profile Photo

I wrote a #beginner-level book teaching Deep Learning - its goal is to be the easiest intro possible

In the book, each lesson builds a neural component *from scratch* in #NumPy

Each *from scratch* toy code example is in the GitHub repo below

#100DaysOfMLCode

github.com/iamtrask/Grokk…
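Not an excerpt from the book, but a minimal sketch of the *from scratch* NumPy style it describes: a single scalar weight learned by plain gradient descent, no framework involved. The function and variable names here are illustrative, not taken from the book's repo.

```python
import numpy as np

# Illustrative "from scratch" lesson: learn y = 2 * x with one
# scalar weight and plain gradient descent, using NumPy only.
def train(x, y, weight=0.5, lr=0.1, steps=20):
    for _ in range(steps):
        pred = x * weight            # forward pass
        delta = pred - y             # prediction error
        grad = (delta * x).mean()    # d(mean squared error)/d(weight), up to a constant
        weight -= lr * grad          # gradient descent step
    return weight

x = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])
w = train(x, y)  # converges toward 2.0
```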
Carlos E. Perez (@intuitmachine) 's Twitter Profile Photo

1/n A counterintuitive and surprising discovery about LLMs

The principle of "reasoning continuity over accuracy" refers to the surprising finding that in chain-of-thought (CoT) prompting, maintaining the logical flow and progression of the reasoning chain matters more for the
Luciana Benotti @ ICML (@lucianabenotti) 's Twitter Profile Photo

Attention #nlproc Latin America: The NAACL call for small grants for regional initiatives is open. Can you help me spread the word, please? Deadline: April 30. More info: naacl.org/calls/regional… Already-funded projects: naacl.org/calls/regional…

lovodkin93 (@lovodkin93) 's Twitter Profile Photo

🐣Too Long; Didn't Read: Most LLMs cite full documents for support, but they're too long so people don't actually read them.
In our new work "Attribute First, then Generate", we introduce fine-grained attributions, where LLMs are required to highlight only relevant information
siggen_acl (@siggen_acl) 's Twitter Profile Photo

#CallForPapers for #INLG2024 (23-27 Sept). If you work on #NaturalLanguageGeneration, #TextGeneration (with or w/o #LLMs). Deadlines are UTC-12:
• regular papers: May 31
• ARR commitment papers: June 24
• demo papers: June 24
• Notification: July 15
inlg2024.github.io/calls.html

Natalia Sobrevilla (@n_sobrevilla) 's Twitter Profile Photo

jugo.pe/los-orwelliano… Fifteen years ago, on April 7, Fujimori was sentenced to 25 years in prison. Today Boluarte asks us to believe she didn't say what she said, while those who investigate are smeared. As in 1984, "doublethink" is being imposed. It falls to us to stop it.

Tim Gill (@timgill924) 's Twitter Profile Photo

Grad Students: Networking is critical for success in academia. But the real question is WHO to network with. I've known grad students who will spend time at conferences drinking beers in the bars with grad students from unranked programs. Is that 1/

Luca Soldaini 🎀 (@soldni) 's Twitter Profile Photo

Fun fact: CommonCrawl contains exactly two copies of Wikipedia: one from wikipedia[.]org, the other from db0nus869y26v[.]cloudfront[.]net. Most public LLM datasets seem to include both. Deduplication is hard.
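A toy illustration of the point (hypothetical URLs and text, not the actual CommonCrawl pipeline): deduplicating by URL sees two distinct pages, while hashing the whitespace-normalized text reveals a single document.

```python
import hashlib

# Two hosts serving byte-identical article text (mirror scenario).
pages = [
    ("https://en.wikipedia.org/wiki/NLP",
     "Natural language processing is a subfield of computer science."),
    ("https://db0nus869y26v.cloudfront.net/wiki/NLP",
     "Natural language processing is a subfield of computer science."),
]

# URL-level dedup: both pages survive, since the hosts differ.
unique_urls = {url for url, _ in pages}

# Content-level dedup: hash the normalized text instead.
unique_docs = {hashlib.sha256(" ".join(text.split()).encode()).hexdigest()
               for _, text in pages}

# len(unique_urls) == 2, but len(unique_docs) == 1
```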

Marco Sobrevilla (@msobrevillac) 's Twitter Profile Photo

After four days of fever at or above 39 °C, today I woke up with only an aching body. These days, though difficult, helped me finish healing Marco from his childhood, and I think that is a double victory.

Ehud Reiter (@ehudreiter) 's Twitter Profile Photo

Concerned about overreliance on unrepresentative datasets in AI/Med. E.g. MIMIC is great, but its data comes from one unit (the ICU) of one high-end US hospital. So showing decent results on MIMIC does not mean it is generally useful - but this is never mentioned in Limitations sections.

Ehud Reiter (@ehudreiter) 's Twitter Profile Photo

I worry that: (A) At a superficial level, LLMs can do amazing human-like things. (B) Many NLP "evaluations" of LLMs are meaningless, and the community doesn't seem to care. Therefore (C) extravagant claims are made for LLMs based on garbage evals, and taken at face value.

Noam Brown (@polynoamial) 's Twitter Profile Photo

Today, I’m excited to share with you all the fruit of our effort at OpenAI to create AI models capable of truly general reasoning: OpenAI's new o1 model series! (aka 🍓) Let me explain 🧵 1/