graphmanic (@graphmanic)'s Twitter Profile

Life is short. Use Python.

ID: 1802285907766054912

Joined: 16-06-2024 10:25:17

3.3K Tweets

28 Followers

50 Following

Chi Wang (@chi_wang_)

🚨 New AG2 Talk Alert! 🚨 
Join us on Aug 28, 9 AM PST for 
"Maris: A Security Controlled Development Paradigm for Multi-Agent Collaboration Systems"
by Jian Cui from UIUC & Berkeley AgentX competition winner! 🛡️ #AI #Cybersecurity

RSVP now: discord.com/events/1153072…
Daniel Jeffries (@dan_jeffries1)

How about: don't listen to anyone who says anything like this, because they actually have no clue how it will play out, and just because they do something-something AI does not mean they have any experience accurately predicting the future of civilization and the arc of tech.

Andriy Burkov (@burkov)

Thanks to ideas like this, we can clearly see that the Founder of Google's Generative AI Team can be replaced with an LLM. I think GPT-3.5 should be enough.

Neo4j (@neo4j)

✔️ AI agents that are accurate, explainable, and ready for real-world use. Google’s MCP Toolbox now supports #Neo4j, making it easier to build GraphRAG-powered agents that query structured data with precision. In this live session, which is worth revisiting, Kurtis Van Gent

Gary Marcus (@garymarcus)

Gary Marcus has been proven wrong repeatedly, you say? like his prediction that GPT-5 would be underwhelming? or that it would be late? or that 2025 agents would be underwhelming and unreliable? or that GPT-5 would still hallucinate? or still make stupid errors? or that

François Chollet (@fchollet)

The proprietary frontier models of today are ephemeral artifacts. Essentially very expensive sandcastles. Destined to be washed away by the rising tide of open source replication (first) and algorithmic disruption (later).

Gary Marcus (@garymarcus)

Holy shit, even David Sacks - who built White House AI policy based on strong beliefs in scaling - has come around to saying what I had been saying for years. Scaling as we knew it is done.

Andrej Karpathy (@karpathy)

Continuing the journey of optimal LLM-assisted coding experience. In particular, I find that instead of narrowing in on a perfect one thing my usage is increasingly diversifying across a few workflows that I "stitch up" the pros/cons of: Personally the bread & butter (~75%?) of

Neo4j (@neo4j)

Pattern matching: the most straightforward and effective technique for solving real-world data problems.😮 

Some use cases where you can apply pattern matching to your data:

👉Personalize Product Recommendations 
👉Optimize Supply Chains
👉Detect Fraud (take a look at the fraud
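
For a concrete sense of what the fraud case can look like in code, here is a minimal sketch using the official neo4j Python driver (5.x): match two distinct accounts that share the same identifier node, a classic fraud-ring signal. The graph model, labels, and credentials are illustrative assumptions, not Neo4j's own example.

```python
# Toy fraud-ring pattern match with the official neo4j driver (pip install neo4j).
# Assumed model: (:Account)-[:USES]->(identifier node such as :Phone or :Email).
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"  # assumed local instance
AUTH = ("neo4j", "password")    # assumed credentials

# Two distinct accounts linked to the same identifier node: shared phones,
# emails, or devices across accounts are a common fraud-ring signal.
FRAUD_RING_QUERY = """
MATCH (a1:Account)-[:USES]->(ident)<-[:USES]-(a2:Account)
WHERE a1 <> a2
RETURN a1.accountId AS account1, a2.accountId AS account2,
       labels(ident) AS sharedIdType, ident.value AS sharedValue
LIMIT 25
"""

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    records, _, _ = driver.execute_query(FRAUD_RING_QUERY, database_="neo4j")
    for r in records:
        print(r["account1"], r["account2"], r["sharedIdType"], r["sharedValue"])
```

The same match-a-shape idea carries over to the recommendation and supply-chain cases; only the node labels and relationship types change.
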
PyCharm, a JetBrains IDE (@pycharm)

Want to train a GPT model with your own data and deploy it fast? 🚀
With #HuggingFace Transformers in PyCharm, you can:

✔️ Browse and add models in your IDE
✔️ Fine-tune models with custom datasets
✔️ Deploy models via FastAPI 

See the step-by-step guide by Cheuk Ting Ho here:
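
The linked guide has the full walkthrough; as a rough, condensed sketch of the loop it describes (fine-tune a small GPT model with Transformers, then serve it with FastAPI), something like the following works, where the base model, data file, and hyperparameters are placeholder assumptions rather than the guide's actual choices:

```python
# Condensed sketch: fine-tune GPT-2 on your own text, then expose it over HTTP.
# pip install transformers datasets fastapi uvicorn
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments, pipeline)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Assumed: one plain-text file holding your custom training data.
ds = load_dataset("text", data_files={"train": "my_data.txt"})
tokenized = ds["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False -> plain causal language modeling (labels are shifted inputs)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt2-finetuned")
tokenizer.save_pretrained("gpt2-finetuned")

# --- deployment: minimal FastAPI app (run with `uvicorn this_file:app`) ---
from fastapi import FastAPI

app = FastAPI()
generator = pipeline("text-generation", model="gpt2-finetuned")

@app.post("/generate")
def generate(prompt: str):
    out = generator(prompt, max_new_tokens=50)
    return {"completion": out[0]["generated_text"]}
```
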
elvis (@omarsar0)

Fine-tuning LLM Agents without Fine-tuning LLMs

Catchy title and very cool memory technique to improve deep research agents.

Great for continuous, real-time learning without gradient updates.

Here are my notes:
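
The tweet doesn't spell out the paper's mechanism, so the following is only a toy sketch of the general idea it names: an episodic case memory the agent writes to after each task and retrieves from at inference time, so behavior improves with no gradient updates. The bag-of-words similarity and all names here are illustrative assumptions, not the paper's method.

```python
# Toy "learn by remembering" loop: retrieval stands in for fine-tuning.
import math
from collections import Counter

class CaseMemory:
    """Stores past (task, lesson) pairs; retrieval replaces weight updates."""
    def __init__(self):
        self.cases: list[tuple[str, str]] = []

    def add(self, task: str, lesson: str) -> None:
        # Written after each episode -> continuous, real-time learning.
        self.cases.append((task, lesson))

    def retrieve(self, task: str, k: int = 3) -> list[str]:
        # Crude bag-of-words cosine similarity, purely illustrative.
        q = Counter(task.lower().split())
        def sim(text: str) -> float:
            d = Counter(text.lower().split())
            dot = sum(q[w] * d[w] for w in q)
            norm = (math.sqrt(sum(v * v for v in q.values())) *
                    math.sqrt(sum(v * v for v in d.values()))) or 1.0
            return dot / norm
        ranked = sorted(self.cases, key=lambda c: sim(c[0]), reverse=True)
        return [lesson for _, lesson in ranked[:k]]

memory = CaseMemory()
memory.add("find 2024 EU AI Act deadlines", "check official EUR-Lex, not news posts")
memory.add("compare GPU cloud prices", "vendor pages go stale; verify the dates")

# At inference, retrieved lessons are prepended to the agent's prompt,
# steering behavior without touching model weights.
print("\n".join(memory.retrieve("find US AI executive order dates")))
```
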
elvis (@omarsar0)

Scaling Test-Time Inference with Parallel Graph-Retrieval-Augmented Reasoning Chains

Graph-based retrieval is useful in lots of applications with complex data.

This paper is a good example of the benefits:
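
The tweet gives only the title, so here is a generic sketch of the pattern the title points at: several retrieval-augmented reasoning chains run in parallel at test time, with a simple vote over their answers. retrieve_subgraph() and run_chain() are hypothetical stand-ins for a real graph store and LLM call, not the paper's API.

```python
# Generic parallel graph-RAG sketch: more chains = more test-time compute,
# no retraining. All functions below are illustrative placeholders.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def retrieve_subgraph(question: str, seed: int) -> str:
    # Hypothetical: each chain expands a different neighborhood of the graph.
    return f"facts for '{question}' from graph region {seed}"

def run_chain(question: str, seed: int) -> str:
    context = retrieve_subgraph(question, seed)
    # Hypothetical LLM call; here each chain just echoes its context.
    return f"answer derived from [{context}]"

def parallel_graph_rag(question: str, n_chains: int = 5) -> str:
    with ThreadPoolExecutor(max_workers=n_chains) as pool:
        answers = list(pool.map(lambda s: run_chain(question, s),
                                range(n_chains)))
    # Simple aggregation: majority vote over the chains' final answers.
    return Counter(answers).most_common(1)[0][0]

print(parallel_graph_rag("Which supplier connects both recalled batches?"))
```
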
Neo4j (@neo4j)

Ready to level up your RAG apps? 🚀 Dive into Tomaz Bratanic's tutorial to see how knowledge graphs with Neo4j and LangChain can make a difference. Explore more: bit.ly/41lMexf #Neo4j #RAG
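
For a taste of the pattern Tomaz Bratanic's tutorial covers, here is a minimal sketch of LangChain's Cypher QA chain over Neo4j: the LLM reads the graph schema, generates Cypher, runs it, and phrases the result as an answer. Connection details are placeholders, and LangChain's package layout moves between releases, so treat the imports as approximate and defer to the tutorial.

```python
# Minimal Neo4j + LangChain GraphRAG sketch (imports may differ by version).
from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
from langchain_openai import ChatOpenAI

graph = Neo4jGraph(
    url="bolt://localhost:7687",  # assumed local instance
    username="neo4j",
    password="password",
)

chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0),
    graph=graph,
    verbose=True,
    allow_dangerous_requests=True,  # explicit opt-in for LLM-generated Cypher
)

# The graph grounds the answer in explicit structure, a complement
# to vector-only retrieval.
print(chain.invoke({"query": "Which customers bought products that were recalled?"}))
```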

Andrew Ng (@andrewyng)

Build better RAG by letting a team of agents extract and connect your reference materials into a knowledge graph. Our new short course, “Agentic Knowledge Graph Construction,” taught by @Neo4j Innovation Lead Andreas Kollegger, shows you how. Knowledge graphs are an important way to
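
The course isn't summarized in the tweet, so this is only a toy sketch of the general idea: an extractor step pulls (subject, relation, object) triples out of reference text, and a builder step merges them into Neo44j so entities mentioned across documents connect into one graph. extract_triples() is a hypothetical placeholder for the LLM-backed agents the course presumably uses.

```python
# Toy agentic KG construction: extract triples, MERGE them into Neo4j.
from neo4j import GraphDatabase

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    # Hypothetical: a real version would prompt an LLM to emit triples
    # found in `text`. Hard-coded here so the sketch runs standalone.
    return [("Neo4j", "SUPPORTS", "GraphRAG"),
            ("LangChain", "INTEGRATES_WITH", "Neo4j")]

# Cypher can't parameterize relationship types, so the type is stored as a
# property on a generic RELATED relationship in this simplified model.
MERGE_TRIPLE = """
MERGE (s:Entity {name: $subj})
MERGE (o:Entity {name: $obj})
MERGE (s)-[r:RELATED {type: $rel}]->(o)
"""

def build_graph(docs: list[str]) -> None:
    with GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "password")) as driver:
        for doc in docs:
            for subj, rel, obj in extract_triples(doc):
                # MERGE is idempotent: re-running on overlapping documents
                # connects entities instead of duplicating them.
                driver.execute_query(MERGE_TRIPLE, subj=subj, rel=rel,
                                     obj=obj, database_="neo4j")

build_graph(["doc one ...", "doc two ..."])
```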