AI Engineer · Optometrist · Builder of Real-World Systems
I didn’t come into this from computer science in the traditional way. I started early — building mobile applications in high school, long before I formally stepped into AI. That foundation in software engineering, especially in mobile development, shaped how I think about building systems: practical, user-focused, and designed to actually be used.
I later studied optometry 👨🏾‍⚕️ and vision science: perception, and how people interact with the world. Somewhere along the way, I started asking a different question:
what if we could build systems that don’t just see… but actually help people navigate reality?
So I kept building, just in a different direction.
My work sits at the intersection of AI, healthcare, and accessibility — not as ideas, but as systems that run in the real world.
One of the most defining projects I’ve built is Safe Step, my final-year thesis. It’s an AI-powered navigation system for visually impaired users, built around a real-time computer vision pipeline using YOLOv8. But the model was never the point. The real challenge was translating perception into action: turning noisy visual data into simple, usable guidance. I designed it as a decision-support system, not just a detection model, and evaluated it with human participants. Under simulated blindness, users went from 0% task completion to 100% with the system.
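To make that concrete, here is a minimal, hypothetical sketch of what a detection-to-guidance decision layer can look like. This is not the Safe Step code: the `guidance_for` helper, the thresholds, and the use of box area as a proximity proxy are all illustrative assumptions layered on the public ultralytics YOLOv8 API.

```python
# Hypothetical sketch of a detection-to-guidance decision layer, not the
# actual Safe Step implementation. Assumes the ultralytics YOLOv8 API and
# a frame from any OpenCV-style capture source.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained model as a stand-in

def guidance_for(frame, danger_area=0.15):
    """Map raw detections to a single spoken-style instruction."""
    result = model(frame, verbose=False)[0]
    instruction = "path clear"
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxyn[0].tolist()   # normalized corners
        area = (x2 - x1) * (y2 - y1)             # crude proximity proxy
        center = (x1 + x2) / 2
        if area < danger_area:                   # far away: ignore
            continue
        # Obstacle is close: steer away from its side of the frame.
        instruction = "step right" if center < 0.5 else "step left"
        if 0.35 < center < 0.65 and area > 2 * danger_area:
            return "stop"                        # dead ahead and very close
    return instruction
```

What the sketch tries to capture is that the model output is only the input to the system; the guidance logic is where the real design decisions live.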
I’ve also built EyeDxAi, an AI-assisted diagnostic tool that combines image understanding with structured reasoning to suggest possible eye conditions and next steps. That project pushed me to think about how models can support real decision-making, not just make predictions.
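As a hedged illustration of that idea, the sketch below layers simple structured rules on top of an image model’s condition scores. The condition names, thresholds, and triage text are all invented for the example; they are not EyeDxAi’s actual rules or clinical guidance.

```python
# Illustrative sketch only: EyeDxAi's internals aren't shown here. Assumes
# an image model that returns condition probabilities, then layers simple
# structured rules on top to turn scores into suggested next steps.
TRIAGE_RULES = {
    "diabetic_retinopathy": "refer to ophthalmology within 1 week",
    "glaucoma_suspect":     "schedule IOP and visual field testing",
    "cataract":             "routine referral; monitor visual acuity",
}

def suggest_next_steps(probabilities, threshold=0.6):
    """Turn model scores into ranked, human-readable suggestions."""
    findings = [
        (name, p) for name, p in probabilities.items() if p >= threshold
    ]
    if not findings:
        return ["no condition above threshold; recommend routine follow-up"]
    findings.sort(key=lambda item: item[1], reverse=True)
    return [
        f"{name} ({p:.0%}): {TRIAGE_RULES.get(name, 'clinician review')}"
        for name, p in findings
    ]

print(suggest_next_steps({"glaucoma_suspect": 0.72, "cataract": 0.41}))
```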
A major turning point for me was joining Envision Technologies.
Working on systems used by over 100,000 blind and low-vision users changed how I think about building software. It wasn’t about prototypes anymore. It was about reliability, usability, and designing systems that people depend on.
It was the first time I saw something I worked on exist outside of me — in the hands of real users, in real environments, solving real problems.
That changed everything.
I primarily work with Python, Dart, and JavaScript, building systems that span from machine learning pipelines to fully deployed mobile applications. My development process centers on frameworks like Flutter for cross-platform mobile systems, and TensorFlow and YOLO for real-time computer vision tasks. I rely on tools such as Git for version control, REST APIs for system integration, and Figma to think through user experience and interface design before implementation.
More broadly, my work focuses on computer vision and assistive technology, particularly in building mobile systems that go beyond prediction and function as decision-support tools. I’m especially interested in designing human-centered AI systems — systems that not only interpret data, but translate it into clear, usable actions for people in real-world environments.
I’m interested in systems that cover the whole chain (a minimal sketch follows the lists below):
- perception → understanding → decision → action
Especially in:
- assistive navigation
- real-time vision systems
- healthcare decision support
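Here is that loop as a minimal Python sketch. Every stage function is a hypothetical placeholder rather than code from any of the projects above; the point is the shape of the system, not the contents of any one stage.

```python
# A minimal sketch of the perception → understanding → decision → action
# loop. The stage functions are hypothetical placeholders.
def perceive(frame):        # e.g. run a detector on the camera frame
    return {"obstacles": []}

def understand(percepts):   # e.g. estimate distance, track over time
    return {"nearest": None}

def decide(world):          # e.g. pick one safe, simple instruction
    return "continue" if world["nearest"] is None else "stop"

def act(decision):          # e.g. speak or vibrate, never just log
    print(decision)

def run(frames):
    for frame in frames:
        act(decide(understand(perceive(frame))))
```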
Outside of my core work, I enjoy following Formula One 🏎️ and occasionally exploring telemetry data to build small models and better understand performance dynamics. It’s a space where my interest in real-time systems, data, and decision-making naturally extends beyond healthcare into another high-performance domain.
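For a sense of what "small models" means here, this is the kind of quick, hedged experiment I have in mind. The file name and column names are assumptions about a hypothetical telemetry export, not a real dataset.

```python
# A hedged example of a small telemetry model: fit lap time against tyre
# age to estimate degradation. "stint_telemetry.csv" and its columns
# ("lap_time_s", "tyre_age_laps") are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

laps = pd.read_csv("stint_telemetry.csv")        # hypothetical export
X = laps[["tyre_age_laps"]]
y = laps["lap_time_s"]

model = LinearRegression().fit(X, y)
print(f"tyre degradation ≈ {model.coef_[0]:.3f} s per lap of tyre age")
```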
I’m not interested in just building models.
I’m interested in what happens after the model runs.