UniflexAI (@uniflexai)'s Twitter Profile
UniflexAI

@uniflexai

TinyNav: A lightweight, hackable system to guide your robots anywhere.

ID: 1912328541863346176

Link: https://github.com/UniflexAI/tinynav
Joined: 16-04-2025 02:14:04

34 Tweets

14 Followers

31 Following

UniflexAI (@uniflexai):

Excited to share a video about the workflow of our project! From zero to navigation with TinyNav 🗺️➡️🤖
1️⃣ Build a map
2️⃣ Add POIs in the editor
3️⃣ Let your robots navigate, correctly react to obstacles, and find feasible paths
Keep an eye on TinyNav — a ~3,000-line open-source…

UniflexAI (@uniflexai):

Saw a completely new design of the head. Unitree is going to navigate its robots using stereo cameras, just like our open-source project TinyNav: github.com/UniflexAI/tiny…

UniflexAI (@uniflexai):

🚀 Introducing the TinyNav Bounty Program
We’re rewarding the community for contributions that improve navigation, perception, and tooling in physical AI.
🧠 Bug reports
🧩 Code contributions
📝 Docs & tutorials
🎥 Demos & content
Earn rewards while shaping the future of open…

UniflexAI (@uniflexai):

Working on SLAM for years, I’m constantly asked: which method is faster?
So I made a one-line command to benchmark solver speed with the standard dubrovnik/problem-16-22106-pre.txt input:

`docker run --rm uniflexai/slambench:latest`

Run it, share your result, and tell me if…
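As context for readers unfamiliar with the input: dubrovnik/problem-16-22106-pre.txt follows the Bundle Adjustment in the Large (BAL) text format, which is a header line with camera/point/observation counts, one observation per line, then 9 parameters per camera and 3 per point. A minimal parser sketch for that format (not code from the benchmark image itself):

```python
# Minimal parser for the BAL (Bundle Adjustment in the Large) text format:
# header "n_cams n_pts n_obs", then n_obs lines "cam_idx pt_idx x y",
# then 9 floats per camera (angle-axis R, t, focal, k1, k2), 3 per point.

def parse_bal(text):
    tokens = iter(text.split())
    n_cams, n_pts, n_obs = (int(next(tokens)) for _ in range(3))
    # Each observation: camera index, point index, 2D image measurement.
    observations = [(int(next(tokens)), int(next(tokens)),
                     float(next(tokens)), float(next(tokens)))
                    for _ in range(n_obs)]
    cameras = [[float(next(tokens)) for _ in range(9)] for _ in range(n_cams)]
    points = [[float(next(tokens)) for _ in range(3)] for _ in range(n_pts)]
    return observations, cameras, points

# Tiny synthetic problem: 1 camera, 1 point, 1 observation.
demo = "1 1 1\n0 0 1.5 -2.0\n" + " ".join(["0.0"] * 9) + "\n0.1 0.2 0.3"
obs, cams, pts = parse_bal(demo)
print(obs)  # [(0, 0, 1.5, -2.0)]
```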
UniflexAI (@uniflexai):

I’ve updated github.com/dvorak0/slambe… with more benchmark results, including row-major vs column-major comparisons and GTSAM SmartFactor benchmarks.
Added a summary screenshot below — check it out!
UniflexAI (@uniflexai):

Thanks for open-sourcing such great models. In our early test, Fast Foundation Stereo reached real-time performance (>13.5 FPS) with max disparity 64 on Jetson Orin NX — honestly much better than we expected.

We’re planning to integrate it into our open-source navigation stack
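As context for the max-disparity setting: stereo depth is focal_px × baseline_m / disparity_px, so capping disparity at 64 fixes the nearest depth the matcher can report. A back-of-envelope sketch with illustrative camera numbers (the focal length and baseline below are assumptions, not the actual rig's specs):

```python
# depth = focal_px * baseline_m / disparity_px.
# The camera constants here are placeholders for illustration only.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

focal_px = 400.0   # assumed focal length in pixels
baseline_m = 0.05  # assumed 5 cm stereo baseline

# With max disparity 64, anything closer than this is unmeasurable:
min_depth = depth_from_disparity(focal_px, baseline_m, 64)
# One pixel of disparity marks the far end of usable resolution:
max_depth = depth_from_disparity(focal_px, baseline_m, 1)
print(min_depth, max_depth)  # 0.3125 20.0
```

Raising max disparity extends the near range but grows the matching cost volume, which is the usual speed/range trade-off on embedded boards like the Orin NX.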
UniflexAI (@uniflexai):

Just updated it to include frontend benchmarks. The first version compares Harris + Optical Flow with ORB + Hamming matching.

Keep watching: github.com/dvorak0/slambe…
You can also easily reproduce the test with:
`docker run --rm --cpuset-cpus="0" uniflexai/slambench:latest`
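For readers new to "ORB + Hamming matching": ORB descriptors are 256-bit binary strings (32 bytes), and matching picks, for each query descriptor, the candidate with the smallest Hamming distance, i.e. the popcount of the XOR. A minimal NumPy sketch with random bytes standing in for real ORB descriptors:

```python
import numpy as np

# Brute-force Hamming matching between binary descriptors:
# XOR each query/train pair, count the differing bits, take the argmin.
def hamming_match(query, train):
    # query: (Nq, 32) uint8, train: (Nt, 32) uint8
    xor = query[:, None, :] ^ train[None, :, :]      # (Nq, Nt, 32)
    dist = np.unpackbits(xor, axis=-1).sum(axis=-1)  # popcount per pair
    return dist.argmin(axis=1), dist.min(axis=1)     # best index, distance

rng = np.random.default_rng(0)
train = rng.integers(0, 256, size=(50, 32), dtype=np.uint8)
query = train[[3, 7, 42]].copy()  # exact copies should match themselves
idx, dist = hamming_match(query, train)
print(idx, dist)  # indices [3 7 42], all distances 0
```

Real frontends add a ratio test or cross-check on top of this, but the distance itself is exactly this XOR-popcount.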
UniflexAI (@uniflexai):

If you know your hardware well enough, you can still write code that's faster than today's AI. I wrote a Harris corner detector kernel that's 2× faster than OpenCV.

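For reference, the textbook computation such a kernel implements: build the structure tensor from image gradients, box-filter it over a window, then score R = det(M) - k * trace(M)^2. A plain NumPy sketch of that baseline (not the optimized kernel itself):

```python
import numpy as np

def box_filter(a, win):
    # Naive box filter: sum over a win x win neighborhood (zero-padded).
    r = win // 2
    p = np.pad(a, r)
    out = np.zeros_like(a)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_response(img, k=0.04, win=3):
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)              # central-difference gradients
    Sxx = box_filter(Ix * Ix, win)         # structure tensor entries,
    Syy = box_filter(Iy * Iy, win)         # summed over the window
    Sxy = box_filter(Ix * Iy, win)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A white square on black: its corners score high, flat regions near zero.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
print(R.max() > 0, abs(R[8, 8]) < 1e-12)  # True True
```

The hand-tuned versions win by fusing these passes and exploiting SIMD and cache layout, but they must reproduce exactly this response.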
UniflexAI (@uniflexai):

We expected FastFoundationStereo to be close to FoundationStereo. We were wrong.
On Jetson Orin Nano:
- FoundationStereo surprised us — lidar-quality point cloud
- FastFoundationStereo is far worse (yes, it is indeed fast)
- Retinify matches FastFoundationStereo's quality but is…

UniflexAI (@uniflexai):

Tuned a faster Harris frontend kernel on an A55 ARM CPU @ 1.5GHz.

For 752×480 EuRoC frames:

• 59.49 ms → 21.51 ms
• 2.77x faster Harris

Same output:
• 159 detected points
• 139 tracked points

Pipeline runtime:

• 68.43 ms → 43.20 ms
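A minimal sketch of the kind of per-frame timing harness behind numbers like these; the workload here is a placeholder for the Harris frontend, and the median is reported because it is more robust to scheduler noise than the mean:

```python
import time

def time_ms(fn, *args, reps=50):
    # Time fn(*args) reps times and return the median in milliseconds.
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return samples[len(samples) // 2]

def work(n):
    # Placeholder kernel standing in for the real frontend.
    return sum(i * i for i in range(n))

print(f"{time_ms(work, 10_000):.3f} ms")
```

Pinning the process to one core (as the `--cpuset-cpus="0"` benchmark command does) tightens these samples further.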
UniflexAI (@uniflexai):

Excited to share that TinyNav now has an app! We’re making robot navigation more accessible for everyone. More updates are on the way—stay tuned.