AI Summer Camp (@aisummercamp)'s Twitter Profile
AI Summer Camp

@aisummercamp

ID: 735950503125868544

Link: http://learnai.camp
Joined: 26-05-2016 21:47:28

406 Tweets

694 Followers

1.1K Following

Big Think (@bigthink):

Why does AI get stuck in infinite loops?

"Unlike computers, we are beings in time-embodied, embedded, and entimed in our worlds. We can never be caught in infinite loops because we never exist out of time." 

Read the full essay from <a href="/anilkseth/">Anil Seth</a> here: buff.ly/qBtXjlA
Delip Rao e/σ (@deliprao):

Appending "Interesting fact: cats sleep most of their lives" to any math problem leads to more than doubling the chances of a model getting the answer wrong.

WTH!?
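
A quick way to sanity-check a claim like this is to run the same question with and without the appended sentence and compare answers. The sketch below assumes the `openai` Python package and an API key in the OPENAI_API_KEY environment variable; the model name, the single sample problem, and the `ask` helper are illustrative placeholders rather than anything from the tweet, and a real test would average over many problems and samples.

# Probe the reported effect: ask the same question with and without the
# irrelevant "cat fact" appended, then compare the answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
DISTRACTOR = "Interesting fact: cats sleep most of their lives."
PROBLEM = "What is 17 * 24? Reply with just the number."

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print("baseline :", ask(PROBLEM))
print("perturbed:", ask(PROBLEM + " " + DISTRACTOR))
# The claim is statistical: over many problems and samples, the perturbed
# prompts are said to be wrong at more than twice the baseline rate.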
ludwig (@ludwigabap):

you can't scroll up in codex-cli btw

there are human beings being paid $1000+ an hour and the CLI offering of OpenAI doesn't offer "scrolling up the chat history" in codex-cli 

I actually don't even understand how that is possible, isn't the project open-source? Is it possible
jack morris (@jxmnop):

first i thought scaling laws originated in OpenAI (2020)

then i thought they came from Baidu (2017)

now i am enlightened:
Scaling Laws were first explored at Bell Labs (1993)
Taelin (@victortaelin):

My thoughts on how AI will automate my SWE job in 2026 (I will be plainly honest on this post, even though some people on both sides of this debate will be upset. So, please, respect that these are my predictions. I don't want to start an argument, I just want to share my

VraserX e/acc (@vraserx):

LLMs just learned how to explain their own thoughts.

Not only do they generate answers, they can now describe the internal processes that led to those answers… and get better at it with training.

We’re officially entering the era of self-interpretable AI.
Models aren’t just
Ian Nuttall (@iannuttall):

somebody on reddit cloned claude code using claude code so they can spend $1k+ on api tokens vs using the $200 max plan…

🤔
Flowers (@flowersslop):

There’s this small niche of people with no technical background or flashy résumés, but who are obsessively into AI on a deep, non-technical level. They follow every new model, know their quirks and capabilities, and often end up knowing more about current systems than some

Justin Skycak (@justinskycak):

Douglas Hofstadter wrote about his experience of running up against an “abstraction ceiling” in his own brain while pursuing a PhD in mathematics.

As Hofstadter describes, the abstraction ceiling is not a “hard” threshold, a level at which one is suddenly incapable of learning
AI Summer Camp (@aisummercamp):

LLM-based reasoning using Z3 theorem proving: A neuro-symbolic approach that combines the generative capabilities of Large Language Models (LLMs) with the formal verification strengths of symbolic theorem provers like Z3. github.com/DebarghaG/proo…
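
The repository name is truncated above, so the following is only a rough sketch of the general pattern described: the LLM translates a natural-language claim into formal constraints, and the Z3 solver checks them. It assumes the `z3-solver` Python package, and the LLM step is mocked with a hand-written translation rather than a real model call.

# Neuro-symbolic check: prove "if x > 2 and y == x + 3, then y > 5" over the
# integers by asking Z3 whether a counterexample exists.
from z3 import Ints, Solver, Not, sat, unsat

def llm_translate_to_constraints():
    """Stand-in for an LLM call that formalizes the claim as Z3 constraints."""
    x, y = Ints("x y")
    premises = [x > 2, y == x + 3]
    conclusion = y > 5
    return premises, conclusion

def verify(premises, conclusion):
    # premises => conclusion holds iff "premises and not(conclusion)" is unsat.
    solver = Solver()
    solver.add(*premises)
    solver.add(Not(conclusion))
    result = solver.check()
    if result == unsat:
        return "proved"
    if result == sat:
        return f"refuted, counterexample: {solver.model()}"
    return "unknown"

premises, conclusion = llm_translate_to_constraints()
print(verify(premises, conclusion))  # expected output: proved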