jimmychung.eth (@jimmy_c28)'s Twitter Profile
jimmychung.eth

@jimmy_c28

Founding Engineer @ Contend Legal | Love of all things AI, Web3, Startups and Investing!

ID: 785352169759309824

Link: https://linktr.ee/jimmychung · Joined: 10-10-2016 05:32:03

2.2K Tweets

176 Followers

1.1K Following

jimmychung.eth (@jimmy_c28):

🚀🤖 Johnson & Johnson is piloting an AI-powered “Rep Copilot” — a tool that helps coach sales reps in real time on how to engage professionals more effectively 🗣️ First launched in their Innovative Medicine unit (oncology + breakthrough treatments), it's now expanding to their

jimmychung.eth (@jimmy_c28):

When building with LLMs, we often outgrow simple sequential chains. Multi-agent collaboration, tool usage, dynamic control flow — these require a new kind of abstraction. LangGraph solves this by modeling LLM applications as stateful graphs: 🛠️ Nodes are functions, tools, or
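The "stateful graph" idea can be sketched in plain Python (this is an illustration of the concept, not LangGraph's actual API): shared state flows through nodes, and routing functions pick the next node dynamically.

```python
# Minimal sketch of a stateful graph, in plain Python (not LangGraph's API).
# State is a dict passed from node to node; routers decide which node runs next.

def classify(state):
    # Node: tag the request so routing can branch on it.
    state["intent"] = "math" if any(c.isdigit() for c in state["input"]) else "chat"
    return state

def calculator(state):
    state["output"] = str(eval(state["input"]))  # toy "tool" node, demo only
    return state

def chat(state):
    state["output"] = f"You said: {state['input']}"
    return state

# Graph: node name -> (function, router returning the next node or None)
GRAPH = {
    "classify":   (classify,   lambda s: "calculator" if s["intent"] == "math" else "chat"),
    "calculator": (calculator, lambda s: None),
    "chat":       (chat,       lambda s: None),
}

def run(graph, state, entry="classify"):
    node = entry
    while node is not None:
        fn, router = graph[node]
        state = fn(state)      # nodes transform shared state
        node = router(state)   # edges pick the next node dynamically
    return state

print(run(GRAPH, {"input": "2+3"})["output"])  # -> 5
```

The point is that control flow lives in the graph's edges, not inside any one node.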

jimmychung.eth (@jimmy_c28):

LangGraph Supervisors orchestrate multi-agent workflows by controlling execution at every step. 🛠️ Instead of hardcoding behavior into agents, the Supervisor dynamically: ➡️ Selects which agent to call next based on the current state ♻️ Loops, retries, or redirects execution
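The Supervisor pattern can be sketched the same way (agent names and routing rules here are invented for the demo; this is not LangGraph's actual Supervisor API): a supervisor inspects shared state after each step and decides which worker to call next, retry, or stop.

```python
# Hedged sketch of the Supervisor pattern in plain Python.

def researcher(state):
    state["notes"] = "facts about " + state["task"]
    return state

def writer(state):
    state["draft"] = f"Report: {state['notes']}"
    return state

def reviewer(state):
    state["approved"] = "Report:" in state.get("draft", "")
    return state

AGENTS = {"researcher": researcher, "writer": writer, "reviewer": reviewer}

def supervisor(state):
    # Routing is driven by the current state, not hardcoded into agents.
    if "notes" not in state:
        return "researcher"
    if "draft" not in state:
        return "writer"
    if "approved" not in state:
        return "reviewer"
    return None if state["approved"] else "writer"  # redirect on failure

def orchestrate(state, max_steps=10):
    for _ in range(max_steps):          # bounded loop guards against cycles
        nxt = supervisor(state)
        if nxt is None:
            break
        state = AGENTS[nxt](state)
    return state

result = orchestrate({"task": "LLM routing"})
```

Because the supervisor re-evaluates the state on every step, loops and retries fall out of the routing logic for free.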

jimmychung.eth (@jimmy_c28):

Unlocking Code Intelligence with Abstract Syntax Trees (ASTs) 🌲⚙️ Abstract Syntax Trees are the foundation of most modern developer tooling — from linters and compilers to static analyzers, transpilers, and even AI-assisted coding. An AST is a structured, tree-based
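The tooling use case is easy to show concretely with Python's stdlib `ast` module, which parses source into exactly this kind of tree; here a few lines act as a toy linter flagging function names that aren't lowercase.

```python
# Parse source into an AST and walk it, linter-style.
import ast

source = """
def goodName(): pass
def good_name(): pass
"""

tree = ast.parse(source)
bad = [
    node.name
    for node in ast.walk(tree)           # visit every node in the tree
    if isinstance(node, ast.FunctionDef)  # keep only function definitions
    and not node.name.islower()           # crude snake_case check
]
print(bad)  # -> ['goodName']
```

Linters, transpilers, and code-aware AI tools all start from this same parse-then-walk pattern.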

jimmychung.eth (@jimmy_c28):

GraphRAG is what happens when you give RAG a memory structure 🧠 Instead of just retrieving text chunks based on vector similarity, GraphRAG adds a graph layer — enabling context-aware retrieval across relationships like parent-child, references, or dependencies. How it works:
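A toy sketch of the idea (chunk names, edge types, and the overlap "similarity" score are all invented for the demo): retrieve a seed chunk, then expand along graph edges to pull in related context a flat vector search would miss.

```python
# Illustrative GraphRAG sketch: keyword overlap stands in for vector
# similarity, and typed edges stand in for the graph layer.

CHUNKS = {
    "intro":  "overview of the billing service",
    "api":    "billing API endpoints and auth",
    "schema": "database schema for invoices",
}
# Typed edges: chunk -> [(relation, neighbor)]
EDGES = {
    "api":    [("parent", "intro"), ("references", "schema")],
    "schema": [("parent", "intro")],
}

def score(query, text):
    # Toy relevance: word overlap stands in for vector similarity.
    return len(set(query.split()) & set(text.split()))

def graph_retrieve(query, hops=1):
    seed = max(CHUNKS, key=lambda c: score(query, CHUNKS[c]))
    selected = {seed}
    frontier = {seed}
    for _ in range(hops):  # expand along parent/reference relationships
        frontier = {n for c in frontier for _, n in EDGES.get(c, [])}
        selected |= frontier
    return [CHUNKS[c] for c in sorted(selected)]

context = graph_retrieve("billing API auth")
```

Plain similarity search would return only the "api" chunk; the graph hop also surfaces the schema it references and the parent overview.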

jimmychung.eth (@jimmy_c28):

🚀 LoRA & QLoRA: Turbocharging LLM Fine-Tuning Without Breaking the Bank 🧠💸 Training large language models (LLMs) used to mean massive compute bills 💰 and GPU farms 🖥️. Enter LoRA and QLoRA — two game-changing techniques making fine-tuning efficient and accessible. 🔹 LoRA
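The core LoRA math is compact enough to work through directly: instead of updating the full weight matrix W (d×k), you train two small matrices B (d×r) and A (r×k) with rank r ≪ min(d, k), and the adapted weight is W + (α/r)·BA. The numbers below are toy values for illustration.

```python
# Worked LoRA sketch with plain-Python matmul (dependency-free demo).

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, k, r, alpha = 4, 4, 1, 2.0
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen base
B = [[0.1] for _ in range(d)]     # d x r, trainable
A = [[1.0, 0.0, 0.0, 0.0]]        # r x k, trainable

delta = matmul(B, A)              # low-rank update, rank r = 1
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(k)]
         for i in range(d)]

# Trainable parameters: d*r + r*k = 8, versus d*k = 16 for full fine-tuning.
print(W_eff[0][0])  # -> 1.2
```

At real model scale (d, k in the thousands, r of 8–64) that parameter ratio is what makes fine-tuning cheap; QLoRA adds 4-bit quantization of the frozen W on top.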

jimmychung.eth (@jimmy_c28):

⚙️🚀 Speculative Decoding: Accelerating LLM Inference with Parallel Token Validation LLMs like GPT-4, Claude, and Gemini are powerful—but inference can be 🐢 slow. Enter Speculative Decoding: a technique that cuts latency 📉 without sacrificing output quality 🎯. 🧠 Core Idea:

jimmychung.eth (@jimmy_c28):

🧊 Understanding and Mitigating Cold Starts in AWS Lambda 🛠️ Cold starts in AWS Lambda occur when a new execution environment needs to be initialized — typically after a period of inactivity or when scaling beyond warm instances. What actually happens during a cold start? 1.
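One mitigation follows directly from that lifecycle: do expensive setup at module load, which runs once during the init phase, rather than inside the handler, which runs on every invocation. The handler and config names below are illustrative, not a real AWS API.

```python
# Module scope runs once per execution environment (during the cold start);
# warm invocations reuse whatever was built here.
import time

_START = time.time()
EXPENSIVE_CONFIG = {"db_pool": "connected", "model": "loaded"}  # stand-in for real setup

def handler(event, context=None):
    # Per-invocation work only: the heavy setup above is already done.
    return {
        "statusCode": 200,
        "warm_for_s": round(time.time() - _START, 3),
        "config": EXPENSIVE_CONFIG["db_pool"],
    }
```

Other common levers: provisioned concurrency, smaller deployment packages, and trimming imports so the init phase itself is fast.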

jimmychung.eth (@jimmy_c28):

Speculative decoding speeds up LLM inference by using a small draft model to generate k tokens ahead, then having the large target model verify them in parallel. If the draft tokens match the target model’s logits, we accept them. If not, we rewind to the last good token and
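That accept-or-rewind loop can be traced with toy stand-in "models" (both deterministic here, so acceptance is exact-match rather than a real logit comparison): the draft proposes k tokens, the target checks them in one pass, and we keep the longest matching prefix plus the target's own token at the first mismatch.

```python
# Toy speculative-decoding loop with deterministic stand-in models.

TARGET = list("the quick brown fox")  # the target model's "true" continuation

def draft_model(pos, k):
    # Imperfect draft: reproduces the target but botches 'q' -> 'x'.
    return ["x" if TARGET[p] == "q" else TARGET[p]
            for p in range(pos, min(pos + k, len(TARGET)))]

def target_verify(pos, proposed):
    # One parallel pass: the target's token at each drafted position.
    return [TARGET[pos + i] for i in range(len(proposed))]

def speculative_decode(k=4):
    out, pos = [], 0
    while pos < len(TARGET):
        proposed = draft_model(pos, k)
        checked = target_verify(pos, proposed)
        n = 0
        while n < len(proposed) and proposed[n] == checked[n]:
            n += 1                              # accept matching prefix
        if n < len(proposed):
            out += proposed[:n] + [checked[n]]  # rewind, take target's token
            pos += n + 1
        else:
            out += proposed                     # all k drafts accepted
            pos += len(proposed)
    return "".join(out)

print(speculative_decode())  # -> "the quick brown fox"
```

The output is always what the target model would have produced alone; the speedup comes from verifying k drafted tokens in one target pass instead of k sequential ones.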

jimmychung.eth (@jimmy_c28):

ASML is the most important tech company you've never heard of. Without its EUV machines, advanced chips from Apple, NVIDIA, or TSMC simply wouldn't exist. Monopoly on critical tech, 70%+ margins, and a backlog that stretches years. $ASML is the bottleneck and the moat.

jimmychung.eth (@jimmy_c28):

Code Knowledge Graphs (CKGs) model software as a typed, multi-relational graph. This extends beyond ASTs by linking semantic dependencies project-wide, enabling precise impact analysis & advanced static analysis. #softwareengineering

jimmychung.eth (@jimmy_c28):

Google Play's policy for new individual accounts requiring 12 testers for 14 days before launch is a real hurdle. 😩 It slows down innovation & puts huge pressure on solo devs. Is this really the best way to ensure quality? #MobileDev #AppDev #GooglePlay

Claude (@claudeai):

Claude Sonnet 4 now supports 1 million tokens of context on the Anthropic API—a 5x increase. Process over 75,000 lines of code or hundreds of documents in a single request.
