Patrick Loeber(@patloeber) 's Twitter Profile Photo

One of the YouTube videos I'm most proud of. 730K people watched it 🤯

Deep Learning With PyTorch - Full Course

However, it's 3 years old. Should I create a fresh one?

jack morris(@jxmnop) 's Twitter Profile Photo

my take on all this llm.c stuff is that it’s very impressive, and karpathy is certainly brilliant, but it was sort of a futile exercise in that llm.c will never be as fast nor as simple as pytorch

this whole movement to “write everything in the lowest language possible” is a tad…

Glaze at UChicago(@TheGlazeProject) 's Twitter Profile Photo

For artists w/ NVIDIA GTX 1660/1650/1550 GPUs.
We have not fixed PyTorch, but we have a simple workaround to disable the GPU when running Glaze/Nightshade.

Download the appropriate .bat file, put it into your Glaze/Nightshade directory, and run it instead of the .exe file. Links below...
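The linked .bat files aren't reproduced here, but the standard way to hide CUDA devices from a PyTorch application is the `CUDA_VISIBLE_DEVICES` environment variable. A minimal Python sketch of the same idea (assuming Glaze/Nightshade honor this variable like stock PyTorch does, which is an assumption on my part):

```python
import os

# Setting CUDA_VISIBLE_DEVICES before torch is imported hides all GPUs,
# forcing PyTorch to fall back to the CPU. "-1" matches no real device.
# (Assumption: Glaze/Nightshade respect this variable like stock PyTorch;
# the project's actual .bat workaround files are linked in the thread.)
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# After this point, `import torch; torch.cuda.is_available()` reports False.
```

A .bat file achieves the same thing by calling `set CUDA_VISIBLE_DEVICES=-1` before launching the executable, so the variable is already in place when the app's bundled PyTorch initializes.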

Andrej Karpathy(@karpathy) 's Twitter Profile Photo

A few new CUDA hacker friends joined the effort and now llm.c is only 2X slower than PyTorch (fp32, forward pass) compared to 4 days ago, when it was at 4.2X slower 📈

The biggest improvements were:
- turn on TF32 (NVIDIA TensorFloat-32) instead of FP32 for matmuls. This is a…
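For context on the TF32 point: TF32 keeps float32's 8-bit exponent but only 10 explicit mantissa bits, which is why tensor-core matmuls get much faster at a small precision cost. A pure-Python sketch of that rounding (illustrative only; the real conversion happens inside the GPU's tensor cores, not in software):

```python
import math
import struct

def to_tf32(x: float) -> float:
    # Round a float to TF32 precision: float32's 8-bit exponent is kept,
    # but the 23-bit mantissa is cut down to 10 explicit bits.
    # Implemented here by rounding away the low 13 mantissa bits of the
    # float32 bit pattern (simple round-half-up, for illustration).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits = (bits + 0x1000) & ~0x1FFF & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_tf32(1.0))      # → 1.0 (exactly representable)
print(to_tf32(math.pi))  # → 3.140625 (~3 decimal digits survive)
```

With a ~2^-11 relative error per element, matmul results typically stay close enough to fp32 for training, which is why enabling TF32 is usually a near-free speedup.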
Om Alve(@alve_om) 's Twitter Profile Photo

Implemented LoRA in PyTorch and finetuned DistilBERT on Stanford's IMDb reviews dataset; the results are in the thread below
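Not the thread's actual code, but the core LoRA idea can be sketched in a few lines of NumPy: freeze the pretrained weight W and learn only a low-rank update B@A (the names and sizes below are my own illustration, not the author's):

```python
import numpy as np

# Minimal LoRA sketch: instead of updating a frozen weight W directly,
# learn a low-rank update B @ A with rank r << min(d_out, d_in), so only
# r * (d_out + d_in) parameters are trained.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 768, 768, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted layer matches the frozen layer exactly,
# so finetuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)

print(A.size + B.size, W.size)  # → 12288 589824 (~2% of the parameters)
```

Zero-initializing B is the standard trick: the adapter contributes nothing at step 0, and gradients flow into A and B while W stays untouched.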
Kirk Borne(@KirkDBorne) 's Twitter Profile Photo

GitHub repository with everything you need to become proficient in #PyTorch, with 15 implemented projects: github.com/Coder-World04/… — compiled by @NainaChaturved8 
➕
See this book: amzn.to/3eC3x2p
——
#DataScience #DataScientists #AI #MachineLearning #Python #DeepLearning
Rohan Paul(@rohanpaul_ai) 's Twitter Profile Photo

✨ Thunder lib from @LightningAI looks great - can achieve a 40% training throughput speedup over standard PyTorch eager code on an H100, using a combination of executors including nvFuser, `torch.compile`, cuDNN, and TransformerEngine FP8.

📌 Supports…
The Silver Ape(@platacrypto) 's Twitter Profile Photo

The intersection of #DePIN | #DeSci | #PoUW | #AI  - $DNX

> low barrier to participate through established AI frameworks such as IBM Qiskit/Python/PyTorch/DWave/Scikit

> immediate access to a decentralized infrastructure (DePIN) of over 30 petaflops of GPU power, which matches…