K J (@kj1337) 's Twitter Profile
K J

@kj1337

I like 2 analyze dynamical systems 4 fun & profit | Gengarblerino up to no good

ID: 712816393

Joined: 23-07-2012 17:46:47

6.6K Tweets

544 Followers

916 Following

Meet Kevin😇 (@realmeetkevin) 's Twitter Profile Photo

🚨 Howard Lutnick's family firm bought up the rights to tariff refunds for 20-30 cents on the dollar after Liberation Day last year.

Today, the Supreme Court struck the tariffs down. For every $100 invested, Lutnick's sons just made 3-5x.

Welcome to Crony Corrupt America.
Peter Girnus (@gothburz) 's Twitter Profile Photo

"Find complex vulnerabilities" and the first demo is subprocess.Popen with shell=True and an unsanitized f-string. 

Bandit catches this. Semgrep catches this. A CS201 midterm catches this.

The $20/month tool found a free-tier finding.
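For anyone who hasn't seen this class of bug, the vulnerable pattern and its fix fit in a few lines (a generic Python sketch assuming a Unix shell, not the demoed tool's actual code):

```python
import subprocess

def list_dir_unsafe(name):
    # the "free-tier finding": shell=True plus an unsanitized f-string means
    # name = "x; rm -rf ~" is parsed by the shell as a second command
    return subprocess.run(f"ls {name}", shell=True, capture_output=True)

def list_dir_safe(name):
    # argv list, no shell: name arrives as a single argument and is never parsed
    return subprocess.run(["ls", name], capture_output=True)
```

Bandit flags the first function as B602 (subprocess call with shell=True); the second never invokes a shell at all.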
Robert Youssef (@rryssf_) 's Twitter Profile Photo

Google DeepMind just solved one of the dirtiest problems in image generation. and the fix is almost embarrassingly elegant 🤯

every diffusion model you've used (Stable Diffusion, Flux, etc.) relies on latent representations. an encoder compresses images into a compact space, and
Idan Beck (@idanbeck) 's Twitter Profile Photo

They hard coded the variance - meaning the VAE encoder is only predicting the mean latent distribution, then they use a scaled identity covariance for the reparam trick - and bingo bango, no more instability and you can train everything e2e Salimans strikes again!
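A minimal sketch of the trick described above, in plain Python with a scalar sigma standing in for the scaled identity covariance (the value and shape are assumptions, not from the thread):

```python
import random

def reparameterize(mu, sigma=0.5, rng=random):
    # the encoder predicts only the mean of the latent distribution; the
    # variance is hard-coded (scaled identity covariance), so the
    # reparameterization trick collapses to z = mu + sigma * eps, eps ~ N(0, I)
    return [m + sigma * rng.gauss(0.0, 1.0) for m in mu]
```

With sigma fixed there is no predicted log-variance term to blow up, which is the claimed source of the end-to-end training stability.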

Louis Gleeson (@aigleeson) 's Twitter Profile Photo

This is wild 🤯

A developer on GitHub just shipped a complete AI agent that:

• Runs on 256MB RAM
• Has 11 built-in tools
• Keeps persistent memory
• Ships with Telegram integration

And it’s a single 12MB file.

We’ve been overcomplicating agents.

Link + description 👇
MR GUSTAVO😼 (@k1rallik) 's Twitter Profile Photo

Someone is actively DOSing Polymarket LPs 

The attack is elegant & brutal:

- Post an order via API
- Immediately drain USDC with higher gas
- Polymarket relayer tries to match → REVERT
- ALL maker orders that would've filled? GONE from the book

Cost per cycle: <$0.10
Time per
Ahmad (@theahmadosman) 's Twitter Profile Photo

96GB VRAM GPU GIVEAWAY ($15K) - LAST CHANCE

Re: Can it be an RTX PRO 6000 Blackwell?

  > “please be Blackwell”
  > “96GB VRAM or we riot”
  > “the people have spoken”

The terms were simple.

They were not met.

Last chance.

If THIS tweet (the one you’re reading)
hits, in the
Robert Youssef (@rryssf_) 's Twitter Profile Photo

Deepseek just broke the one rule every transformer has followed for a decade 🤯

x + f(x). the residual connection.

if you don't know what that means, here's the simple version: every time a neural network processes your input through a layer, it keeps a copy of the original and
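The rule in question can be sketched in a few lines of plain Python (a toy tanh layer standing in for attention/MLP; this illustrates the classic rule, not DeepSeek's replacement for it):

```python
import math

def sublayer(x, w):
    # toy f: elementwise scaled tanh, standing in for attention or an MLP
    return [math.tanh(wi * xi) for wi, xi in zip(w, x)]

def residual_block(x, w):
    # the decade-old rule: output = x + f(x), so the input always has an
    # untouched identity path around the layer
    return [xi + fi for xi, fi in zip(x, sublayer(x, w))]
```

Because the identity path bypasses the sublayer, gradients can flow straight through a deep stack even when the sublayers themselves are poorly conditioned.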
@chiefofautism (@chiefofautism) 's Twitter Profile Photo

someone built a FIREWALL for CLAUDE CODE that blocks PROMPT INJECTION attacks in real-time

every time claude reads a file, fetches a website, or runs a command, this hook scans the output for 50+ attack patterns BEFORE claude processes it

one install.sh, protects
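The core mechanism is simple to sketch (the two signatures below are hypothetical examples; the real hook reportedly ships 50+ patterns, which I have not read):

```python
import re

# hypothetical injection signatures, not the project's actual pattern list
SUSPICIOUS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now [a-z]", re.I),
]

def scan_tool_output(text):
    # run before the model sees file/web/command output; any match lets the
    # hook block or sanitize the content instead of forwarding it
    return [p.pattern for p in SUSPICIOUS if p.search(text)]
```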
Peyman Milanfar (@docmilanfar) 's Twitter Profile Photo

Diffusion models need exact noise level schedules to work. But recently some models have been shown to work without explicit noise conditioning. How? 

We show that these time-invariant fields implicitly implement a Riemannian gradient flow on some energy landscape.

1/3
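For readers outside the area, a hedged sketch of what that claim typically means (notation mine, not necessarily the paper's): a denoiser field f with no time input defines an ODE, and the statement is that it can be written as gradient descent on an energy E under a position-dependent metric G:

```latex
\dot{x} = f(x) = -G(x)^{-1}\,\nabla E(x), \qquad G(x) \succ 0,
```

so the energy only decreases along trajectories:

```latex
\frac{dE}{dt} = \nabla E(x)^{\top}\dot{x}
             = -\nabla E(x)^{\top} G(x)^{-1}\,\nabla E(x) \le 0 .
```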
Alex Prompter (@alex_prompter) 's Twitter Profile Photo

Google just proved that most "reasoning" models aren't actually thinking. They're just writing more tokens.

Their new metric called "deep-thinking tokens" tracks where a model's internal predictions actually shift across layers before stabilizing.

Translation: instead of
Curiosity (@mastronomers) 's Twitter Profile Photo

Locking the camera to the stars instead of the horizon changes everything. It’s actually kind of terrifying to see the Earth spinning beneath us like this. 🌍🌌

Kali Linux (@kalilinux) 's Twitter Profile Photo

Kali & LLM: macOS with Claude Desktop GUI & Anthropic Sonnet LLM: This post will focus on an alternative method of using Kali Linux, moving beyond direct terminal command execution. Instead, we will leverage a Large Language Model (LLM) to translate… kali.org/blog/kali-llm-…
Abdulkadir | Cybersecurity (@cyber_razz) 's Twitter Profile Photo

Simply explained:

Instead of typing raw Kali Linux commands…

You describe what you want in plain English.

The LLM translates that intent into:
• the right Kali commands
• scripts
• or full workflows

So this: “Scan this target for open ports & services”

Becomes: correct
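A toy stand-in for that translation step (the intent table and command template are purely illustrative; the real workflow asks an LLM, not a lookup table):

```python
# hypothetical intent -> command templates; a real setup would query an LLM
INTENTS = {
    "scan this target for open ports and services": "nmap -sV {target}",
}

def translate(intent, target):
    # normalize the request, then fill the matching command template
    template = INTENTS.get(intent.strip().lower())
    if template is None:
        raise ValueError(f"no command known for intent: {intent!r}")
    return template.format(target=target)
```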

Sudo su (@sudoingx) 's Twitter Profile Photo

this is the worst local AI will ever be.

tomorrow it gets faster. next month the models get smarter. next year your GPU runs what a data center runs today.

Qwen3.5-35B-A3B on a single 3090. told it to visualize its own expert routing. 256 experts, 8 active per token, rendered in
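The "256 experts, 8 active per token" scheme is top-k routing, which fits in a few lines (a generic sketch of the technique, not Qwen's implementation):

```python
import math
import random

def topk_route(logits, k=8):
    # score every expert, keep the k highest, softmax only over the winners;
    # the remaining experts never run for this token
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in idx]
    total = sum(exps)
    return {i: e / total for i, e in zip(idx, exps)}

# 256 experts, 8 active per token
weights = topk_route([random.gauss(0.0, 1.0) for _ in range(256)], k=8)
```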

Simone Margaritelli (@evilsocket) 's Twitter Profile Photo

State of security in Kali integrating AI ( kali.org/tools/mcp-kali… ): arguments are interpolated in a single command string, not escaped, so whatever the AI passes, including potential vectors for command injection, is executed. With pipes, &, ; and all that stuff like it's
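The missing escaping is a one-liner in Python's standard library (a generic sketch of the fix, not the mcp-kali code):

```python
import shlex

def build_unsafe(tool, target):
    # what the post describes: raw interpolation into one command string,
    # so target = "10.0.0.1; rm -rf ~" smuggles in a second command
    return f"{tool} {target}"

def build_safe(tool, target):
    # shlex.quote makes pipes, &, ; and other shell metacharacters literal
    return f"{tool} {shlex.quote(target)}"
```

Better still is to skip the shell entirely and hand subprocess an argv list, so no command string is ever parsed.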
vx-underground (@vxunderground) 's Twitter Profile Photo

Someone sent me a DM asking if a weird Minecraft thingie was malware (pinkiecraft(dot)com). I poked it with a stick

> pinkiecraft(dot)com
> vibe coded site
> "installer" for "program" is .rar
> extract .exe from .rar
> .exe is normal installer
> open installer
> .exe and