Michael Walker (@mwalk10) 's Twitter Profile
Michael Walker

@mwalk10

Lover of music, games, and AI

ID: 6547842

Link: https://github.com/micwalk · Joined: 03-06-2007 20:21:36

804 Tweets

135 Followers

644 Following

Joseph Noel Walker is in SF (@josephnwalker) 's Twitter Profile Photo

David Deutsch on the poverty of 'P(Doom)': "If you ask somebody, 'What's your subjective probability for AI Doom?', well, if they say anything other than zero or one, then your interlocutor has already won the argument. Because even if you said 'one in a

Beff – e/acc (@basedbeffjezos) 's Twitter Profile Photo

So only neutered models allowed and AI safety team as mandatory?... 🙄 This is why the AI Safetyist industrial complex is making themselves instrumental to the incumbents trying to achieve regulatory capture; they are guaranteeing themselves jobs forever.

Sebastian Raschka (@rasbt) 's Twitter Profile Photo

The "Hello World"s of machine learning & AI:

2013: RandomForestClassifier on Iris
2015: XGBoost on Titanic
2017: MLPs on MNIST
2019: AlexNet on Cifar-10
2021: DistilBERT on IMDb movie reviews
2023: Llama 2 on Alpaca 50k?
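
For reference, the 2013-era entry in that list can be sketched in a few lines, assuming scikit-learn is installed:

```python
# Classic ML "Hello World": a random forest on the Iris dataset.
# Assumes scikit-learn is available (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```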

Supasorn Suwajanakorn (@supasornaek) 's Twitter Profile Photo

Introducing DiffusionLight---a simple yet effective technique to estimate lighting from any in-the-wild input image. How? ... by inpainting a chrome ball into the image with diffusion models! (1/3)

paper: arxiv.org/abs/2312.09168
diffusionlight.github.io
huggingface.co/DiffusionLight…

Carlos E. Perez (@intuitmachine) 's Twitter Profile Photo

1/n A new paper has some fascinating results with applying Thinking Slow (i.e., System 2) on Large Language Models. 

1. Direct linear correlation between number of steps and accuracy:
- For few-shot CoT, longer reasoning chains directly translate to higher accuracy across

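
The claimed linear relationship can be checked with a simple Pearson correlation over (reasoning-steps, accuracy) pairs; the numbers below are made up for illustration, not taken from the paper:

```python
import math

# Hypothetical (num_reasoning_steps, accuracy) pairs -- illustrative only,
# not data from the paper being discussed.
data = [(1, 0.42), (2, 0.51), (3, 0.58), (4, 0.66), (5, 0.71), (6, 0.79)]

def pearson_r(pairs):
    """Pearson correlation coefficient between the two columns of `pairs`."""
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(data)
print(f"Pearson r = {r:.3f}")  # r near 1.0 indicates a near-linear relationship
```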
Alex Carlier (@alexcarliera) 's Twitter Profile Photo

This scene was scanned using only 3 pictures 🤯

In my opinion, this was the biggest flaw of NeRFs & 3D Gaussian splats: they are trained from scratch every time with no knowledge of the world.

With ReconFusion, we now acquire it from diffusion models

More examples below ⬇️⬇️

Lior⚡ (@lioronai) 's Twitter Profile Photo

Impressive. Microsoft released a new method to speed up LLM inference, boost performance, while making them 20x smaller.

Massive cost reduction with almost no performance loss.

You can implement it in 2 minutes using their library: 
!pip install llmlingua

LLMLingua uses a

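
LLMLingua's actual approach uses a small language model to score tokens and drop low-information ones. As a rough stand-in for that idea (explicitly not the library's real algorithm), a toy compressor might simply drop common filler words:

```python
# Toy illustration of prompt compression: drop low-information filler words.
# LLMLingua itself uses a small LM's token-level scores, not this heuristic;
# this is only a sketch of the general idea of shrinking prompts.
FILLER = {"the", "a", "an", "of", "to", "is", "are", "that", "and", "in", "it"}

def toy_compress(prompt: str) -> str:
    """Remove common filler words, keeping content-bearing tokens."""
    kept = [w for w in prompt.split() if w.lower() not in FILLER]
    return " ".join(kept)

prompt = "The quick summary of the report is that revenue grew in the last quarter"
short = toy_compress(prompt)
print(short)                      # "quick summary report revenue grew last quarter"
print(len(short) < len(prompt))  # True: the compressed prompt is shorter
```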
Jiyoung Lee (@jiyounglee2g0) 's Twitter Profile Photo

🚨 New Research Alert! After a long journey, our research with @Iciralabama is out! 📚
Key findings: 
✅ Repeated exposure to myths within corrections increased perceived familiarity.
✅ This effect heightened misinformation credibility, even among those with low prior beliefs.

Dreaming Tulpa 🥓👑 (@dreamingtulpa) 's Twitter Profile Photo

Computer, enhance! SUPIR is a new high-fidelity general image restoration model!

PP: supir.xpixel.group
Code: github.com/Fanghua-Yu/SUP…

Grant♟️ (@granawkins) 's Twitter Profile Photo

I built a natural language CLI. It generates Python scripts to answer your question, then auto-executes them in the cwd. You will not believe how capable this simple pattern is. Rawdogging gpt-4 from the command line. Rawdog. 1/
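
The generate-then-execute pattern described above can be sketched in a few lines, with the model call stubbed out since the tool's internals aren't shown here (the hard-coded script is a hypothetical example response):

```python
import subprocess
import sys

def generate_script(question: str) -> str:
    """Stub for the LLM call -- the real tool would ask GPT-4 to write a
    Python script answering `question`. Hard-coded here for illustration."""
    return "import os\nprint(len(os.listdir('.')))"

def run_in_cwd(script: str) -> str:
    """Execute the generated script with the current interpreter, in the cwd."""
    result = subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip()

answer = run_in_cwd(generate_script("How many files are in this directory?"))
print(answer)  # prints the number of entries in the current directory
```

Auto-executing generated code in the working directory is exactly what makes the pattern both capable and risky, which is worth keeping in mind before running it outside a sandbox.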

AK (@_akhaliq) 's Twitter Profile Photo

Stability releases Stable Cascade

demo: huggingface.co/spaces/multimo…

github: github.com/Stability-AI/S…

a new text to image model building upon the Würstchen architecture

Bill Peebles (@billpeeb) 's Twitter Profile Photo

"a giant cathedral is completely filled with cats. there are cats everywhere you look. a man enters the cathedral and bows before the giant cat king sitting on a throne." Video generated by Sora.

Ian Fisch - Director of Kingmakers (wishlist!) (@ian_fisch) 's Twitter Profile Photo

Please wishlist my new game KingMakers on Steam. We've been working really hard on this for 5 years, and can finally unveil it to the public today. Hope you guys like it 🤞

AK (@_akhaliq) 's Twitter Profile Photo

YOLOv9

Learning What You Want to Learn Using Programmable Gradient Information

Today's deep learning methods focus on how to design the most appropriate objective functions so that the prediction results of the model can be closest to the ground truth. Meanwhile, an appropriate

Vivek Raghunathan (@vivek7ue) 's Twitter Profile Photo

A lot of the insider knowledge on how to build an LLM has gone underground in the last 24 months. We are going to build #SnowflakeArctic in the open: model arch ablations, training and inference system performance, dataset and data composition ablations, post-training fun, big

virat (@virattt) 's Twitter Profile Photo

Llama 3 crushed my financial metrics tests

I tested both the 70B and 8B models.

Both aced the metric calculation tasks.

The results from today’s tests indicate the emergence of 4 distinct LLM tiers:

• throughput tier
• workhorse tier
• intelligence tier
• groq tier

Groq

Shahzod Boyhonov 🔶️ (@specoolar) 's Twitter Profile Photo

I made a small node setup that is similar to 9-slice sprites but for 3D meshes. It prevents the 'corners' of the object from distorting when scaled. Also, you can assign vertex groups to prevent some parts of the object from scaling. Get it here for free: github.com/specoolar/Blen…
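
The core of 9-slice scaling, taken per axis, is that vertices within a fixed margin of either end move rigidly with that end while only the middle section stretches. A 1D sketch of that coordinate remap (not the actual Blender node setup):

```python
def slice_scale(x: float, length: float, margin: float, new_length: float) -> float:
    """Remap coordinate x on [0, length] to an object of new_length,
    keeping the end `margin` regions undistorted (one axis of 9-slice scaling)."""
    if x <= margin:                      # start cap: moves rigidly with the start
        return x
    if x >= length - margin:             # end cap: keeps its offset from the far end
        return new_length - (length - x)
    # middle section: stretch linearly between the two caps
    t = (x - margin) / (length - 2 * margin)
    return margin + t * (new_length - 2 * margin)

# Scale a 10-unit object to 20 units with 2-unit caps:
print(slice_scale(1.0, 10, 2, 20))   # 1.0  -> inside the start cap, unchanged
print(slice_scale(9.5, 10, 2, 20))   # 19.5 -> still 0.5 units from the far end
print(slice_scale(5.0, 10, 2, 20))   # 10.0 -> middle midpoint maps to new midpoint
```

Applying the same remap independently on each axis, with vertex groups masking out regions that should never stretch, gives the 3D behavior the tweet describes.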