Manuel Renner (@manu_rnnr)'s Twitter Profile
Manuel Renner

@manu_rnnr

ID: 1711690681926701056

Joined: 10-10-2023 10:30:42

26 Tweets

17 Followers

51 Following

Ethan Mollick (@emollick)

In a new paper showing that AI comes up with more effective prompts for other AIs than humans do, there is this gem that shows how weird AIs are...

The single most effective prompt was to start by telling the AI "Take a deep breath and work step-by-step!" arxiv.org/pdf/2309.03409…
LlamaIndex 🦙 (@llama_index)

Emotion Prompting ❤️‍🩹

The recent EmotionPrompt paper (Li et al.) shows that you can improve task performance across a ton of LLMs by simply adding statements like “This is very important to my career”

Easily try it out + benchmark it yourself! 👇

We’ve added a full cookbook 🧑‍🍳
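The technique is simple enough to sketch in a few lines. This is an illustrative reimplementation, not LlamaIndex's actual cookbook API: the function name and stimulus constant are my own, with the stimulus text taken from the tweet.

```python
# Minimal sketch of EmotionPrompt (Li et al.): append an emotional
# stimulus to the base prompt before sending it to the model.
# Names here are illustrative, not LlamaIndex's real API.

EMOTION_STIMULUS = "This is very important to my career."

def emotion_prompt(base_prompt: str) -> str:
    """Return the prompt with the emotional stimulus appended."""
    return f"{base_prompt} {EMOTION_STIMULUS}"

# To benchmark it yourself, run the same task set with and without
# the stimulus and compare the two score distributions:
tasks = ["Summarize the following article: ...",
         "Classify the sentiment of this review: ..."]
plain_prompts = list(tasks)
emotional_prompts = [emotion_prompt(t) for t in tasks]
```

The whole intervention is a string concatenation, which is what makes the reported gains across many LLMs so surprising.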
Diverger (@diverger_ai)

We're following #OpenAIDevDay live, listening to the announcements Sam Altman is starting to detail on stage; among the highlights: 🎉 New model: GPT-4 Turbo. 1️⃣ Longer context length: 128k tokens 📅 Knowledge cutoff: up to April 2023.

Greg Kamradt (@gregkamradt)

Pressure Testing GPT-4-128K With Long Context Recall

128K tokens of context is awesome - but what's performance like?

I wanted to find out so I did a “needle in a haystack” analysis

Some expected (and unexpected) results

Here's what I found:

Findings:
* GPT-4’s recall
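The analysis can be sketched as follows. This is a generic reconstruction of the needle-in-a-haystack method, not Kamradt's actual code: the needle text, function names, and filler are illustrative, and the model call is stubbed out.

```python
# Sketch of a "needle in a haystack" recall test: bury one fact (the
# needle) at a chosen depth inside filler text, send the full context
# to the model, and check whether it can retrieve the fact.

NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."

def build_haystack(filler_paragraphs: list, depth: float) -> str:
    """Insert the needle at `depth` (0.0 = start, 1.0 = end) of the filler."""
    position = int(len(filler_paragraphs) * depth)
    parts = filler_paragraphs[:position] + [NEEDLE] + filler_paragraphs[position:]
    return "\n\n".join(parts)

def recall_ok(model_answer: str) -> bool:
    """Crude check: did the answer reproduce the buried fact?"""
    return "Dolores Park" in model_answer

# Sweep needle depth (and, in the full test, context length) to map
# where recall starts to degrade:
filler = ["Lorem ipsum dolor sit amet."] * 100
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    context = build_haystack(filler, depth)
    # answer = call_model(context + "\n\nWhat is the best thing to do in San Francisco?")
    # record recall_ok(answer) against (depth, len(context))
```

Plotting recall against depth and context length produces the heatmap-style results the thread describes.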
Diverger (@diverger_ai)

An interesting analysis of how to measure the performance of your #RetrievalAugmentedGeneration pipeline across different #embeddings models and #rerankers, using LlamaIndex's 🦙 Retrieval Evaluation module 📊 🔢 buff.ly/3swok3W

Ed Newton-Rex (@ednewtonrex)

I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’. First off, I want to say that there are lots of people at Stability who are deeply

Manuel Renner (@manu_rnnr)

🚀 Staying on track with our commitment to open-source software, we’ve released v0.2 of #codeas. This new release makes our coding assistant smarter and easier to use. Happy coding! Link to the repo: github.com/DivergerThinki…

Diverger (@diverger_ai)

What does #finetuning mean for #LLMs? 🧐 #finetuning for #LLMs lets you specialize a model's capabilities and optimize its performance in a specific domain. iTor breaks down all the keys to the process in the following post 👉🏻 diverger.medium.com/llm-fine-tunin… 🧵

Jose Luis Calvo (@joselcs)

1/ A few days ago there was an interesting roundtable on artificial intelligence at Davos with five very prominent researchers.
I'm going to pull out what struck me as most interesting regarding the evolution of AI and open source.

Thread below 👇
Diverger (@diverger_ai)

🧑‍💻 In certain cases, using asynchronous programming when developing your applications with #LLMs can noticeably improve their performance. In this article, Manuel Renner dives into different techniques for building them in Python: diverger.medium.com/building-async… Opening a 🧵:
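The core pattern the article covers can be sketched with the standard library alone. This is an illustrative example, not code from the article: `fake_llm_call` is a stand-in for a real async client call, and the latency value is arbitrary.

```python
# Issue several LLM requests concurrently with asyncio.gather instead
# of awaiting them one by one.

import asyncio

async def fake_llm_call(prompt: str) -> str:
    """Stand-in for a real async LLM client call."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"response to: {prompt}"

async def run_batch(prompts: list) -> list:
    # gather schedules all calls at once, so total wall time is roughly
    # the slowest single call rather than the sum of all calls.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

results = asyncio.run(run_batch(["a", "b", "c"]))
```

`asyncio.gather` preserves input order in its result list, so responses line up with their prompts even though the calls complete concurrently.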

Jeff Dean (@jeffdean)

Gemini 1.5 Pro - A highly capable multimodal model with a 10M token context length

Today we are releasing the first demonstrations of the capabilities of the Gemini 1.5 series, with the Gemini 1.5 Pro model. One of the key differentiators of this model is its incredibly long
Will Bryk (@williambryk)

Thoughts on the eve of AGI I talked to several friends about o3 this week. Their summarized response is basically "holy crap is this actually happening?" Yes, this is actually happening. The next few years are going to be insane. This is historic stuff, galactic even. What's