Big Data Finance (@bigdatafinance) 's Twitter Profile
Big Data Finance

@bigdatafinance

The original and most rigorous exploration of Big Data in Finance! bdf.bigdatafinance.org

ID: 1265715001

Link: https://www.bigdatafinance.org | Joined: 13-03-2013 23:30:35

910 Tweets

1.1K Followers

2.2K Following

Itai Yanai (@itaiyanai) 's Twitter Profile Photo

The difference between doing a project and presenting it. An observation can lead to many avenues of explorations before focus turns to a specific discovery. Presenting it, in a talk / paper, follows inversely, with broad perspectives coming before & after the specific discovery.

Sharif Shameem (@sharifshameem) 's Twitter Profile Photo

Here are a few samples from the latest Lexica model. Will be live for everyone to play with in a few days. To beta test it, just reply here with a prompt.

Itai Yanai (@itaiyanai) 's Twitter Profile Photo

no claim + no evidence = noise
no claim + evidence = resource
claim + no evidence = conjecture
claim + evidence = scientific advance

Itai Yanai (@itaiyanai) 's Twitter Profile Photo

If feedback on your work is not important to you, then go over your allotted time when presenting it to an audience. And with no time left for questions, leave them relieved that the monolog is finally over.

Mert R. Sabuncu 🤖🩻⚕️ (@mertrory) 's Twitter Profile Photo

We propose a diffusion model that we pre-train with self-supervision, with a hyperparameter value set to 0.3, to produce state-of-the-art results on ImageNet:

Irene Aldridge (@irenealdridge) 's Twitter Profile Photo

Happy to share our brand-new paper on using #AI in predicting #futures returns, co-authored with a very creative out-of-the-box thinker, Cornell University Financial Engineering Manhattan (orie.cornell.edu/orie/cfem) student Dan Robinson: papers.ssrn.com/sol3/papers.cf…

Aaron Levie (@levie) 's Twitter Profile Photo

We’re entering an era of AI-first software, which has the potential to create a supercycle of opportunity and disruption in software not seen since mobile and cloud. This will be fun.

Itai Yanai (@itaiyanai) 's Twitter Profile Photo

A central problem in science today is that scientists are encouraged by the system to become just managers; meaning the vast majority of the scientific process is handled solely by grad students, postdocs, & research scientists without much input from the principal investigators.

Itai Yanai (@itaiyanai) 's Twitter Profile Photo

Here are my 12 guidelines for data exploration and analysis with the right attitude for discovery: 1. You never really finish analyzing a dataset. You just decide to stop and move on at some point, leaving some things undiscovered. 🧵

Alex Hormozi (@alexhormozi) 's Twitter Profile Photo

This is a frightening visual for me. The first dot is the amount of data GPT-3 was trained on. The second is what GPT-4 is trained on. They are already doing demos. It can write a 60,000-word book from a single prompt. The only question I've had about AI…

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

🔥 New (1h56m) video lecture: "Let's build GPT: from scratch, in code, spelled out." youtube.com/watch?v=kCc8Fm… We build and train a Transformer following the "Attention Is All You Need" paper in the language modeling setting and end up with the core of nanoGPT.

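The core building block the lecture assembles is masked (causal) self-attention. As a rough orientation, not nanoGPT's actual code, a single attention head can be sketched in plain Python like this (all function and variable names here are illustrative):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def causal_self_attention(x, wq, wk, wv):
    """One masked self-attention head.

    x          : list of T input vectors, each of length d
    wq, wk, wv : d x d projection matrices (lists of rows)
    Returns a list of T output vectors of length d.
    """
    def matvec(w, v):
        return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

    q = [matvec(wq, t) for t in x]  # queries
    k = [matvec(wk, t) for t in x]  # keys
    v = [matvec(wv, t) for t in x]  # values
    d = len(x[0])
    out = []
    for i in range(len(x)):
        # Causal mask: position i attends only to positions 0..i.
        scores = [sum(a * b for a, b in zip(q[i], k[j])) / math.sqrt(d)
                  for j in range(i + 1)]
        weights = softmax(scores)
        out.append([sum(weights[j] * v[j][c] for j in range(i + 1))
                    for c in range(d)])
    return out
```

With identity projections, the first position can only attend to itself, so its output equals its own value vector; later positions mix earlier ones. The full model in the lecture stacks many such heads with feed-forward layers, residual connections, and layer norm.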
Jeff Dean (@jeffdean) 's Twitter Profile Photo

Excited to share the first of a series of Google AI blog posts summarizing our research work from 2022. This covers language & multimodal models, computer vision, and generative models. We'll have ~7 posts covering other areas over next few weeks! ai.googleblog.com/2023/01/google…
