EquianoAI (@equianoai)'s Twitter Profile
EquianoAI

@equianoai

Equiano Institute | AI Safety & Governance for Africa.
A research lab by the people

ID: 1661135615637766145

Link: https://www.equiano.institute/
Joined: 23-05-2023 22:23:02

112 Tweets

176 Followers

92 Following

Jonas (@jonas_kg)'s Twitter Profile Photo

We celebrated the anniversary of the Internet. We heard from some of the key architects of the Internet, including its brilliant co-inventor Vint Cerf, and Alan Kay. In the morning, we explored hopes, fears, and expectations for the next 50 years of the Internet.

Victor Akinwande (@aknvictor)'s Twitter Profile Photo

Great to see new benchmarks, evaluation, and progress on low-resource languages, including Hausa and Yoruba, two Nigerian languages.

EquianoAI (@equianoai)'s Twitter Profile Photo

We worked with OpenAI to evaluate how GPT-4o narrows the performance gap for underrepresented languages; the results are shared in the GPT-4o System Card report. We'll be publishing a detailed paper with our findings in the coming months.

TIME (@time)'s Twitter Profile Photo

Why Chinasa T. Okolo, a Nigerian-American computer scientist and a Brookings Institution fellow, is one of the most influential people in AI: ti.me/47mArRa

Dr. Chinasa T. Okolo (@chinasatokolo)'s Twitter Profile Photo

Delighted to be in Dakar at Deep Learning Indaba, the continent’s premier AI/ML conference! Today, I’ll be speaking at the “Empowering African Voices in AI: Data, Models, and Innovation” workshop: datasciencelawlab.africa/deep-learning-…

EquianoAI (@equianoai)'s Twitter Profile Photo

Congratulations to our Advisor Dr. Chinasa T. Okolo for being named one of TIME's 100 Most Influential People in AI! Learn more about her work in AI Governance.

EquianoAI (@equianoai)'s Twitter Profile Photo

Exploring Pluralistic Perspectives in AI 📣

Pluralistic Alignment Workshop at NeurIPS 2024, December 15, 2024 in Vancouver, Canada.

Submissions that discuss the technical, philosophical, and societal aspects of pluralistic AI are welcome: pluralistic-alignment.github.io

Hear This Idea (@hearthisidea)'s Twitter Profile Photo

→ What changes once AI can automate R&D?
→ How close are bottlenecks from power, physical resources, and training data?
→ What (if anything) is most likely to prevent explosive growth from AI?

Listen to Tamay Besiroglu: hearthisidea.com/episodes/besir…

EquianoAI (@equianoai)'s Twitter Profile Photo

We will be presenting on biologically explainable comorbidities in LLMs at the Explainable AI in Biology Conference 2024. We explore how LLMs can exhibit biologically explainable outputs for comorbidities: conditions where patients experience two or more diseases simultaneously.

EquianoAI (@equianoai)'s Twitter Profile Photo

Dr. Zheng-Xin Yong presents another great paper on low-resource languages with questions on synthetic data: (1) if an LLM cannot speak the language, how do we use it to generate data? (2) can synthetic data be as good as manually collected data, especially for low-resource languages?

Oxford Generative AI Summit (@oxgenai)'s Twitter Profile Photo

We're delighted to have Jonas, Founder and Director of @equianoAI, as a speaker at the Oxford Generative AI Summit! #OxGen24

Read more: oxgensummit.org/speakers2024/j…

Get your tickets now at oxgensummit.org.

Institute of Politics (@harvardiop)'s Twitter Profile Photo

Audrey Tang (⿻ Audrey Tang 唐鳳), Megan Smith (Megan Smith -Archive), Danielle Allen (Danielle Allen), & Mathias Risse joined us in the Forum to explore how technology is being used to transform political institutions, civil society, & political culture to support democracy. 📹 ken.sc/forum0926-live

Plurality.Institute (@pluralityinst)'s Twitter Profile Photo

In their new paper, Mozilla argues that AI development shouldn't be driven solely by private companies. They introduce a framework for Public AI, which prioritizes public goods and an inclusive approach to AI development. Read the full paper: assets.mofoprod.net/network/docume…

Clement Neo (@_clementneo)'s Twitter Profile Photo

🧠🖼️ New paper on interpreting VLMs! We study Vision-Language Models (VLMs) like LLaVA to understand how they process objects in images. We find surprising insights about how these models identify objects in images and how their inner representations develop through the layers.

Jonas (@jonas_kg)'s Twitter Profile Photo

Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online arxiv.org/abs/2408.07892