James Baker (@jamesbbaker4)'s Twitter Profile
James Baker

@jamesbbaker4

PhD student @ColumbiaDBMI, Co-Founder @ClinicaAI, prev @McKinsey, @Yale

ID: 917504987600707585

Joined: 09-10-2017 21:40:10

153 Tweets

2.2K Followers

622 Following

James Baker (@jamesbbaker4)'s Twitter Profile Photo

Working on an AI agent that can research and write on any topic with accurate references. It improves on Bing as it's able to learn and focus across searches. Follow to learn more in the coming days!

James Baker (@jamesbbaker4)'s Twitter Profile Photo

How would you use a GPT agent with the ability to search for info and synthesize it into a final output? Feel free to share other uses you're excited about (I think we're just scratching the surface).

James Baker (@jamesbbaker4)'s Twitter Profile Photo

AutoGPT for research ✍️. Search the web to learn and write on any topic.
1) Write a podcast script, article, or lit review
2) Learn about any topic faster
3) Much more
App releasing next Sunday! 🧵 (1/2)

James Baker (@jamesbbaker4)'s Twitter Profile Photo

EMR coding isn't very mainstream, but finding codes for a disease or therapy takes up a large part of my day as a clinical data scientist. I gave GPT the most recent codes and guidelines, and it helps me find and learn about codes 10x faster. Available here: med-coder.com
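A minimal sketch of the kind of code lookup described above, assuming a small local table of ICD-10 codes. The three codes are real ICD-10-CM entries, but the search helper is illustrative only and not med-coder.com's actual implementation:

```python
# Tiny sample of real ICD-10-CM codes (code -> description).
ICD10_SAMPLE = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10": "Essential (primary) hypertension",
    "J45.909": "Unspecified asthma, uncomplicated",
}

def find_codes(query, table):
    """Return codes whose description contains every word of the query."""
    words = query.lower().split()
    return [code for code, desc in table.items()
            if all(w in desc.lower() for w in words)]

print(find_codes("diabetes", ICD10_SAMPLE))  # ['E11.9']
```

A real assistant would layer an LLM over a much larger table, but the retrieval core is the same: match a clinical phrase to code descriptions.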

Matthew Lungren MD MPH (@mattlungrenmd)'s Twitter Profile Photo

Ready to move beyond prompting and take your LLM use case to the next level? Check out this terrific content on grounding, retrieval-augmented generation, and semantic search/vector databases - all critical for healthcare builders to know! 🧐 techcommunity.microsoft.com/t5/fasttrack-f…
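The retrieval step behind RAG and semantic search can be sketched roughly like this, with a toy bag-of-words vector standing in for a real embedding model (all document text and names below are illustrative):

```python
# Minimal retrieval sketch: "embed" documents and a query, rank by cosine
# similarity, and prepend the best match to the prompt to ground the LLM.
import math
from collections import Counter

def embed(text):
    # Toy embedding: word counts. A real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "metformin is a first line therapy for type 2 diabetes",
    "statins lower ldl cholesterol",
]
query = "first line diabetes therapy"

best = max(docs, key=lambda d: cosine(embed(d), embed(query)))
prompt = f"Context: {best}\n\nQuestion: {query}"
```

A vector database replaces the `max` over a Python list with an approximate nearest-neighbor index, but the grounding pattern is the same.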

elvis (@omarsar0)'s Twitter Profile Photo

Demystifying GPT Self-Repair for Code Generation

We've seen a couple of papers showing the promise of self-repair in code generation. This paper finds that modest performance gains are seen when using GPT-4 for textual feedback.

Another interesting finding: significant…

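The self-repair loop studied in the paper can be sketched as: run candidate code, and on failure feed the error text back to the model as feedback for another attempt. `ask_model` below is a stub standing in for a GPT-4 call:

```python
# Sketch of a self-repair loop for code generation.
import traceback

def ask_model(task, feedback=""):
    # Stub in place of an LLM call: it "repairs" the divide-by-zero bug
    # only after seeing textual feedback that names the error.
    if "ZeroDivisionError" in feedback:
        return "def mean(xs):\n    return sum(xs) / len(xs) if xs else 0.0"
    return "def mean(xs):\n    return sum(xs) / len(xs)"

def self_repair(task, test, rounds=2):
    feedback = ""
    code = ""
    for _ in range(rounds):
        code = ask_model(task, feedback)
        try:
            exec(code + "\n" + test, {})       # run candidate against the test
            return code                        # tests passed: accept
        except Exception:
            feedback = traceback.format_exc()  # textual feedback for repair
    return code

fixed = self_repair("write mean(xs)", "assert mean([]) == 0.0")
```

The paper's point is that the quality of that textual feedback matters; with GPT-4 writing it, the loop above yields only modest gains.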
David (@dzhng)'s Twitter Profile Photo

The steerability of OpenAI's new 0613 models is amazing. Even if you force the model to call a function despite giving it an unrelated user prompt, it'll still keep the same JSON shape and try its best to map the user's prompt to the correct keys.
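A sketch of the request shape behind this behavior, assuming the 0613 function-calling API. Only the payload is built here (no API call is made), and `extract_person` and its schema are made up for illustration:

```python
# Shape of a Chat Completions request that forces a function call.
request = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "My name is Ada and I'm 36."}],
    "functions": [{
        "name": "extract_person",  # hypothetical function for illustration
        "parameters": {            # JSON Schema the model's output must fit
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    }],
    # Forcing this specific function: per the tweet, the model keeps the
    # JSON shape even when the prompt is only loosely related.
    "function_call": {"name": "extract_person"},
}
```

Passing `"function_call": {"name": ...}` (rather than the default `"auto"`) is what pins the model to one schema regardless of the prompt.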

Aran Komatsuzaki (@arankomatsuzaki)'s Twitter Profile Photo

Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification

With GPT-4 Code Interpreter and CSV, we achieve an impressive zero-shot accuracy on the MATH dataset (53.9% → 84.3%).

arxiv.org/abs/2308.07921
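Code-based self-verification can be illustrated on a toy problem: solve with code, then accept the answer only if separately written verification code (e.g. substituting the result back into the equation) passes. The helpers below are hypothetical sketches, not the paper's code:

```python
# Toy problem: "x + 7 = 19, find x", solved and then verified with code.
def solve():
    return 19 - 7          # step 1: the model solves with code

def verify(x):
    return x + 7 == 19     # step 2: verification code checks by substitution

answer = solve()
assert verify(answer)      # step 3: only accept answers that pass verification
print(answer)              # 12
```

In the paper, both steps are written by GPT-4 inside Code Interpreter; failed verification triggers another solution attempt, which is where the accuracy gain comes from.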
James Baker (@jamesbbaker4)'s Twitter Profile Photo

Excited to announce Clinica AI! Over 75% of patients receive suboptimal treatment at some point. Clinica works with healthcare orgs to target education toward the docs, systems, and insurance cos with the highest unmet need, improving care directly.

Clinica AI (@clinicaai)'s Twitter Profile Photo

🔬 Conduct faster, more accurate clinical trials with fewer patients 🚀
At Clinica AI, we’re using cutting-edge machine learning to predict patient outcomes and design more balanced and accurate clinical trials. Watch the full video to learn more! 🎥👇