Kyle Maclean (@analyticskyle)'s Twitter Profile
Kyle Maclean

@analyticskyle

ID: 4174459695

Link: https://ca.linkedin.com/in/kyle-maclean-69931727 · Joined: 12-11-2015 22:36:45

1.1K Tweets

467 Followers

192 Following

Kyle Maclean (@analyticskyle)

It seems like there would be efficiency gains if LLMs were trained on a single language, and a different model focuses on translation. And yet, that usually isn't what happens.

Kyle Maclean (@analyticskyle)

I created a normal distribution infographic with my school's branding from the new OpenAI model (left) and Gemini Nano Banana Pro (right). The text in both is very well done, but Gemini still has the edge, at least in terms of adherence to branding. OpenAI made up a slogan.

Kyle Maclean (@analyticskyle)

I have a niche question for anyone familiar with OpenAI billing. I added $175 in prepaid funds. Separately (!!), I was awarded $175 in credit grants in late November. What's odd is that my usage seems to be drawing down BOTH. Shouldn't it draw down the credit grant first? OpenAI Developers

Joachim Voth (@joachim_voth)

How did people in 1913 see the world? How did they think about the future? We trained LLMs exclusively on pre-1913 texts—no Wikipedia, no 20/20. The model literally doesn't know WWI happened. Announcing the Ranke-4B family of models. Coming soon: github.com/DGoettlich/his…

Kyle Maclean (@analyticskyle)

If we interpret this as P(Winning), the question is underspecified: for a specific individual, for a random individual, or for a random team in these leagues? If the latter, the answers are the same - the leagues have the same number of teams, so P(Win) for a random team is 1/32.
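The random-team reading above can be sketched numerically. This assumes the two unnamed leagues each field 32 teams (e.g., the NFL and NHL both do); the tweet itself doesn't name them:

```python
# Hedged sketch: two leagues, each assumed to have 32 teams.
# A uniformly random team in either league has the same win probability.
teams_league_a = 32
teams_league_b = 32

p_win_a = 1 / teams_league_a  # P(Win) for a random team in league A
p_win_b = 1 / teams_league_b  # P(Win) for a random team in league B

assert p_win_a == p_win_b == 1 / 32  # 0.03125
```

The equality holds only under the equal-league-size assumption; for a *specific* team or individual, the probability depends on far more than league size.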

Kyle Maclean (@analyticskyle)

As a journalist, you can frame almost anything. If the facts are on your side, you can claim the public simply doesn't understand them. If they aren't, you can emphasize that the public has "concerns" and elevate anecdotal issues to the forefront.

Kevin A. Bryan (@afinetheorem)

On Dec 1 last year: no good Gemini model at all (we were at 1.5), no image model that got text right, no good video model at all, no DeepSeek R1; o1 had just come out with test-time inference, FrontierMath was at 2% not 41%, no one got to 10% on HLE... Just so you can plan for 2026.

_its_not_real_ (@_its_not_real_)

Absolutely wild how dead the open internet is. Everything is on TikTok, Instagram, Discord, or Facebook groups now. None of those are searchable (if Google even worked). Reddit wikis and discussion are ghost towns. Non-academic research is impossible.

OpenAI (@openai)

Introducing Prism, a free workspace for scientists to write and collaborate on research, powered by GPT-5.2. Available today to anyone with a ChatGPT personal account: prism.openai.com

Kyle Maclean (@analyticskyle)

I've said it before and I'll say it again: OpenAI's culture spends way too much effort on creating interest/hype. I'm not sure why. It seems more likely to lead to letdowns.