GosuCoder (@gosucoder) 's Twitter Profile
GosuCoder

@gosucoder

Programmer that Loves to Share Things on YouTube especially about AI and new technology!

ID: 898600278584426496

Joined: 18-08-2017 17:39:36

383 Tweets

622 Followers

31 Following

GosuCoder (@gosucoder):

3 things that I believe are true now.

1. AI coding LLMs are converging, with incremental improvements from here on out. Each will have variance and be better or worse at certain tasks.
2. The crazy hype around AI replacing devs will continue to fizzle more and more, as people

GosuCoder (@gosucoder):

First time I've seen a model recommend a temperature higher than 1. Maybe there were others, but I can't recall any.

Bytedance OSS Seed 36b recommends a temp of 1.1. It's so funny to me that we have these giant corpora of human knowledge that we control with a single float.

Also I
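That "single float" is the sampling temperature: logits are divided by it before the softmax, so values above 1 (like Seed's recommended 1.1) flatten the distribution and make lower-probability tokens more likely. A minimal sketch of temperature sampling in Python:

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities after scaling by temperature.

    temperature > 1 flattens the distribution (more diverse output);
    temperature < 1 sharpens it toward the argmax.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(logits, temperature=1.0, rng=random):
    """Draw one token index from the temperature-scaled distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

With logits [2.0, 1.0], temperature 0.5 pushes nearly all mass onto the top token, while 1.1 spreads it out, which is the whole "control a corpus with one float" effect.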
GosuCoder (@gosucoder):

So fascinating what I've discovered when only using local models to code.

My biggest takeaway is that we need adjustable API timeouts in AI coding tools. I think some of this might be the default timeout of fetch, but there is nothing more painful than having a task run only to
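One way a tool could make this adjustable is to resolve the timeout from user configuration instead of hard-coding a client library's default. A rough sketch of that idea in Python (the `AI_API_TIMEOUT_S` variable name is made up for illustration, not any real tool's setting):

```python
import os

# Local models can take minutes per response, so the fallback is generous.
DEFAULT_TIMEOUT_S = 300.0

def api_timeout_s():
    """Resolve the per-request timeout, overridable via environment.

    Falls back to DEFAULT_TIMEOUT_S when the variable is unset,
    unparseable, or non-positive.
    """
    raw = os.environ.get("AI_API_TIMEOUT_S")
    if raw is None:
        return DEFAULT_TIMEOUT_S
    try:
        value = float(raw)
    except ValueError:
        return DEFAULT_TIMEOUT_S
    return value if value > 0 else DEFAULT_TIMEOUT_S
```

The resolved value would then be passed to whatever HTTP client the tool uses, so a slow local model can be given 20 minutes without hanging the fast-path default.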
GosuCoder (@gosucoder):

I am blown away. Grok Code is now live on OpenRouter, the price is way better than I expected, and it appears to have prompt caching from day 1?

$0.20 per million input, $1.50 per million output, is absolutely nuts.

If this performs the same as Sonic, it's not gonna be the best
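At those rates, per-request cost is simple arithmetic. A quick sketch using the quoted prices:

```python
def request_cost_usd(input_tokens, output_tokens,
                     in_per_m=0.20, out_per_m=1.50):
    """Cost of one request at the quoted OpenRouter prices:
    $0.20 per 1M input tokens, $1.50 per 1M output tokens."""
    return (input_tokens / 1_000_000) * in_per_m \
         + (output_tokens / 1_000_000) * out_per_m
```

A fairly heavy coding request of 100k input tokens and 5k output tokens works out to about 2.75 cents, which is why the pricing reads as aggressive.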
GosuCoder (@gosucoder):

Holy comments! This reminds me of how I used to code when I was 18. I have legit never seen so many comments from an AI model:

Lgai-exaone-4.0.1-32b.

I stopped it thinking it hung, but nope, it was still going:
41.52 tok/sec
84,306 tokens

Just take a second to look at what it
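Those two numbers pin down how long that run took: tokens divided by throughput. A quick check in Python:

```python
def generation_minutes(tokens, tok_per_sec):
    """Wall-clock minutes to emit `tokens` at a steady generation rate."""
    return tokens / tok_per_sec / 60.0

# The run above: 84,306 tokens at 41.52 tok/sec is roughly 33.8 minutes,
# so it is no surprise it looked hung.
```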
GosuCoder (@gosucoder):

OpenAI: "DEVS" tune in at 10am PT.
Me: Woot, let's go, maybe some Codex show and tell.
OpenAI: Here's T-Mobile and how they are using our speech-to-speech model to do customer service.

1. No talk about configuration of the API
2. No details about when/if it will roll out in the UI
GosuCoder (@gosucoder):

Finally we have a compact prompt for local models; I've spent way too much time building these and overriding them myself. Cline, I pick on you for your lack of customization sometimes, but this is a killer feature, and on top of being able to fully handle the settings on the
GosuCoder (@gosucoder):

Before watching, I'm going to guess:

1. Merging/conflicts
2. PRs chaining off one another
3. No dependency file locking; Perforce had this many years ago
4. Large binary files; it's better now, but in Perforce back in the day I could control how many revisions to keep of large binary
GosuCoder (@gosucoder):

Dude quotes himself as the source that Claude quantizes down to Q1 during the day.

What is even happening right now?

It's cool to be wrong or guessing! You know, just be clear about that. We are all learning; just don't mislead people with false claims and then try doubling down
GosuCoder (@gosucoder):

I'm not sure I like the memory system on ChatGPT. I've asked it hypothetical questions about computer upgrades for running LLMs, and it keeps thinking I already have my hypothetical setups running LLMs.

New chat, and for some reason it thinks I already have my RTX 5090 running on my