Ana Paula Mofarrej (@explica_ia) 's Twitter Profile
Ana Paula Mofarrej

@explica_ia

🤓🌎 Working with #AI @aiatmeta

ID: 1299145360405721089

Joined: 28-08-2020 00:42:55

283 Tweets

263 Followers

372 Following

elvis (@omarsar0) 's Twitter Profile Photo

Another interesting short study.

Finds that "Llama-2-70b is almost as strong at factuality as gpt-4, and considerably better than gpt-3.5-turbo."

Need to take a closer look at how the evaluation is done, but I'm already starting to see strong experimental results on Llama 2 for all kinds of tasks.

AI at Meta (@aiatmeta) 's Twitter Profile Photo

We believe that AI models benefit from an open approach, both in terms of innovation and safety. Releasing models like Code Llama means the entire community can evaluate their capabilities, identify issues & fix vulnerabilities. github.com/facebookresear…

Ana Paula Mofarrej (@explica_ia) 's Twitter Profile Photo

💡 Deals to AI startups ticked up for the first time in 5 quarters to reach 590 in Q2’23. Over 40% of these went to US-based startups. 💛 Generative AI continues to draw investors’ attention, with four of the top five largest funding rounds this quarter going to genAI companies.

Nathan Lands — Lore.com (@nathanlands) 's Twitter Profile Photo

AI video tools like Runway and Pika have started to produce amazing results and will disrupt Hollywood. Here are the top 10 AI videos this week:

Ana Paula Mofarrej (@explica_ia) 's Twitter Profile Photo

A simplified AI model:

1️⃣ [INPUT]: Data In 📥
Give the AI information: photos, text, audio. It will learn from it!

2️⃣ AI Learning and Processing 🤓: The machine processes and learns from the data.

3️⃣ [OUTPUT] Making Decisions 🤖:
The AI can make decisions and surface insights!

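The three steps above can be sketched in a few lines of code. This is a toy illustration only (a 1-nearest-neighbour "model", with function names invented for the example), not any real library's API:

```python
# Toy illustration of the input -> learning -> decision pipeline.
# All names and data here are hypothetical.

def learn(examples):
    # 2) Learning/processing: this toy "model" simply memorises
    #    labelled examples as (feature value, label) pairs.
    return list(examples)

def decide(model, x):
    # 3) Output: decide by taking the label of the closest
    #    remembered example (1-nearest-neighbour).
    nearest = min(model, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# 1) Input: feed the model some data (numbers standing in for
#    features extracted from photos, text, or audio).
model = learn([(1.0, "cat"), (5.0, "dog")])
print(decide(model, 1.3))  # -> cat
```

Real models replace the memorisation step with statistical learning, but the input/learn/decide shape is the same.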
AI at Meta (@aiatmeta) 's Twitter Profile Photo

Last week we released FACET, a new comprehensive benchmark dataset for evaluating the fairness of models across a number of different vision tasks, constructed of 32K images from SA-1B, labeled by expert annotators. Read the paper ➡️ bit.ly/3EotZvg

Devi Parikh (@deviparikh) 's Twitter Profile Photo

Very excited to announce Emu Edit and Emu Video! Tell Emu Edit how you want an image edited and it will do precisely that. Tell Emu Video what you want to see and it will generate a high quality video. (Be sure to watch till the end!) Links to a bunch of examples + papers👇

Ishan Misra (@imisra_) 's Twitter Profile Photo

World, meet #emuvideo. For the past year, our team has been pushing on video generation. The result? Emu Video, which generates high-quality videos from text or images. SOTA performance vs. commercial products and academic papers. Check it out: emu-video.metademolab.com

AI at Meta (@aiatmeta) 's Twitter Profile Photo

Today we’re releasing V-JEPA, a method for teaching machines to understand and model the physical world by watching videos. This work is another important step towards Yann LeCun’s outlined vision of AI models that use a learned understanding of the world to plan, reason and…

Yann LeCun (@ylecun) 's Twitter Profile Photo

V-JEPA: a step towards getting machines to understand how the world works by watching. The Joint Embedding Predictive Architecture (JEPA) is a non-generative architecture that predicts the representation of a signal from a corrupted or transformed version of that signal. In…
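The core JEPA idea described above, predicting in representation space rather than reconstructing raw pixels, can be sketched as follows. This is a minimal toy illustration, not Meta's actual V-JEPA code; the encoder, corruption, and predictor here are deliberately trivial stand-ins:

```python
# Toy sketch of the JEPA objective: predict the *embedding* of a full
# signal from a corrupted (masked) version, and measure error in
# embedding space -- never reconstructing the raw signal itself.
# All names here are hypothetical.

import random

def encoder(signal):
    # Toy "encoder": a fixed 2-feature embedding (mean, spread).
    mean = sum(signal) / len(signal)
    spread = max(signal) - min(signal)
    return [mean, spread]

def corrupt(signal, drop_prob=0.5):
    # Crude corruption: randomly mask values out with zeros.
    return [0.0 if random.random() < drop_prob else x for x in signal]

def predictor(corrupted_embedding, weights):
    # Toy linear predictor: corrupted embedding -> target embedding.
    return [w * e for w, e in zip(weights, corrupted_embedding)]

def jepa_loss(signal, weights):
    # Squared error between predicted and target embeddings.
    target = encoder(signal)
    pred = predictor(encoder(corrupt(signal)), weights)
    return sum((p - t) ** 2 for p, t in zip(pred, target))
```

In the real architecture the encoder and predictor are learned neural networks and the corruption is spatio-temporal masking of video, but the non-generative objective has this shape: minimise prediction error between embeddings, not pixels.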