Computational neuroscientist. Systems engineer. Builder. Researcher.

I design and build end-to-end systems that span distributed hardware, real-time software, and machine learning models to understand how intelligent agents adapt when conditions change. Trained as a neuroscientist at UCLA and shaped by years in manufacturing and engineering, I’m interested in building robust, flexible, and safe intelligent systems, in both academic and industry settings.

Ryan Grgurich

About

Where I Came From

I didn’t come to engineering and research through a straight academic path.

I grew up around precision manufacturing. My dad was a machinist who later moved into quality assurance, and as a kid I spent weekends in machine shops—checking parts, watching setups, learning what it meant for a system to work reliably in the real world. By high school, I was running parts on a lathe in the garage for extra money. There’s a little bit of oil and coolant in my blood.

That early exposure shaped how I think about technical work. In manufacturing, systems fail loudly if they’re unclear, fragile, or poorly designed. Documentation matters. Interfaces matter. Other people need to be able to use what you build.

Building Real Systems

I eventually ran operations at a precision machine shop for six years, working across CNC programming, process optimization, scheduling, and customer relationships. We made parts for aerospace, automotive, and industrial clients. The work left little room for hand-waving.

When paper workflows became the bottleneck, I taught myself software development and built a custom ERP/MRP system from scratch—handling purchasing, inspection, job routing, and traceability. The system was adopted across the shop and later attracted interest as a commercial product.

That experience reinforced a lesson manufacturing had already taught me: if a system only works for its creator, it isn’t finished.

Science, Computation, and Distributed Platforms

During my time in the shop, I’d been drawn toward science and questions about intelligence and decision making. I went back to school, worked my way through life science, physics, mathematics, and computation courses, transferred into the Computational & Systems Biology program at UCLA, earned my B.S., and went on to earn my PhD in Computational and Behavioral Neuroscience.

During my PhD, that manufacturing-informed engineering mindset came fully into focus. To study how brains make flexible decisions when environments change, I designed and built a distributed, closed-loop research platform entirely from scratch. The platform centered on a fully automated maze experiment used to study adaptive navigational decision making. The system integrated custom hardware, real-time control software, distributed compute nodes, and machine learning–based analysis. I built it as a reliable, extensible platform other researchers could use independently, not as a fragile prototype.

Alongside the experimental system, I built computational models and reinforcement learning environments to test how adaptive strategies emerge under different informational constraints, connecting biological decision making to artificial agents.

How I Work

A few principles I keep coming back to, whether I'm writing software, building hardware, or designing experiments:

Start with the question, not the tool.

The systems I’ve built exist because I needed specific capabilities, not because I wanted to use a particular technology. The distributed research platform grew out of concrete experimental questions about adaptive behavior. The computational models and reinforcement learning simulations were built to test hypotheses under controlled constraints. Tools are in service of understanding—when that relationship flips, you end up building things that are impressive but don’t actually answer anything.

Other people need to use what you build.

This is the lesson manufacturing drilled into me early. The machine shop only worked if processes were clear, documented, and usable by more than one person. That same principle has guided my work ever since. The research platforms I build are designed so others can run complex experiments independently. If a system only works for its creator, it isn’t finished.

Don't just fix the problem — prevent the next one.

In a machine shop, you don’t just remake a bad part; you figure out why the process produced it and put controls in place so it doesn’t happen again. That root cause mindset carries over directly to software and experimental systems. Reliable progress comes from understanding failure modes, making assumptions explicit, and designing systems that fail visibly when something is wrong rather than silently degrading.

Calibrate your tools to the job.

I use AI-assisted development as a regular part of my workflow and consider it a genuine productivity multiplier. But its value depends on the context and the stakes. A quick exploratory analysis requires different oversight than a distributed control system running live experiments with real data on the line. I’ve shipped and maintained production systems long before these tools existed, and that experience is what allows me to evaluate, validate, and architect around AI-generated output rather than treating it as a black box.

What I'm Looking For

I’m interested in applying this way of working to the design of intelligent systems that need to operate robustly, adapt to change, and interact safely with the real world, in both academic and industry settings.

Based in LA, open to remote.

Projects

Corner Maze — Distributed Behavioral Control Platform

A fully automated, closed-loop rodent navigation rig built from scratch — custom hardware, distributed control software, and a GUI that lets researchers run experiments without writing code. Still in active use at UCLA.

Research context

How does the brain make flexible decisions when the world shifts beneath it? My dissertation studied this through spatial navigation — a domain where you can precisely control what information an animal has access to and measure how it adapts when conditions change. The Corner Maze platform is the tool I built to run those experiments. No commercial system could do what I needed, so I designed one from scratch.

What it is

An automated behavioral neuroscience platform for studying spatial navigation and decision-making. The system runs experiments without human intervention — doors open and close, visual cues appear on monitors, rewards are delivered, and every event is logged with millisecond precision.

The hardware

I modeled the physical maze in Fusion 360 and produced technical drawings with GD&T tolerances for vendor fabrication. I designed all the custom circuitry controlling 12 linear actuators (doors) and 4 stepper-driven syringe pumps (reward delivery). The maze has 4 wall-mounted monitors for visual cue presentation and an overhead IR camera for real-time markerless position tracking.

The software architecture

The system is a multi-node distributed application spanning 25 devices across six node types:

  • Master control node (Ubuntu workstation): ~8,200 lines of Python running a multi-threaded PyQt application. Handles session control, device orchestration, real-time video display, SQLite metadata storage, and synchronized event logging.
  • Camera node (Raspberry Pi): Multi-process Python application capturing 480x480 frames, running OpenCV-based markerless tracking (background subtraction, morphological filtering, contour detection, zone classification), and streaming frames over ZeroMQ.
  • Stimulus display nodes (2x Raspberry Pi): Pygame fullscreen applications receiving cue commands over ZeroMQ. Each Pi drives two monitors for four total stimulus displays.
  • Actuator controllers (16x Arduino): MODBUS RTU slaves controlling linear actuators with acceleration/deceleration ramping and EEPROM-based cycle counting for maintenance tracking.
  • Syringe pump controllers (4x Arduino): MODBUS RTU slaves driving stepper motors with step-forward-then-retreat motion to prevent liquid seepage between deliveries.
  • Light controller (1x Arduino): MODBUS RTU slave for PWM-controlled room and IR illumination.
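The camera node's tracking stage can be sketched in plain NumPy (standing in for the OpenCV calls; the threshold value, zone names, and the centroid-instead-of-contour step are simplifications for illustration, not the platform's actual parameters):

```python
import numpy as np

def track_position(frame, background, thresh=30):
    """Markerless tracking sketch: background subtraction, threshold,
    then the centroid of foreground pixels (a stand-in for the
    morphological filtering + contour detection steps)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > thresh
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no animal detected in this frame
    return float(xs.mean()), float(ys.mean())

def classify_zone(pos, zones):
    """Map an (x, y) centroid to a named rectangular maze zone."""
    x, y = pos
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return "none"
```

In the real pipeline this runs per frame on the Raspberry Pi before the result is published over ZeroMQ.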

Network topology: MODBUS RTU over RS-485 for all Arduino communication; ZeroMQ TCP for Raspberry Pi nodes (REQ/REP for control, PUB/SUB for video streaming).
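Every MODBUS RTU frame on that RS-485 bus ends with a CRC-16 checksum. A minimal Python sketch of the frame check (the example request bytes are illustrative, not a command from the platform's actual register map):

```python
def crc16_modbus(frame: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Illustrative read-holding-registers request to slave 0x01
# (function 0x03, start register 0x0000, register count 0x0002):
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
crc = crc16_modbus(pdu)
# The CRC is appended low byte first on the wire.
wire_frame = pdu + bytes([crc & 0xFF, crc >> 8])
```

In practice a mature library handles this on both ends, which was exactly the point of choosing MODBUS over CAN.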

Key design decisions

Protocol selection — MODBUS vs CAN bus: I evaluated both for the Arduino communication bus. MODBUS won because mature libraries existed for both Arduino and Python, and TTL-to-RS485 converters were significantly cheaper than CAN bus equivalents. Practical engineering trade-offs, not theoretical preference.

ZeroMQ for video and display control: Started with raw TCP for video streaming, hit reliability issues, discovered ZeroMQ and switched. Would have used ZeroMQ for the Arduinos too, but no good Arduino library existed and I didn't have time to write one.

GUI designed for multi-user operation: The PyQt interface wasn't built for me — it was built so other researchers could run experiments independently. The Action Vector Table is essentially a domain-specific session programming API: users parameterize entire experimental protocols (trial phases, zone triggers, cue configurations, reward delivery, performance criteria) through the interface without modifying code. Research assistants were trained to run sessions independently using step-by-step documentation I created.
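To give a feel for the idea, a session spec in that spirit might look like the following. This is a hypothetical sketch: the field names, phase structure, and trigger vocabulary are assumptions for illustration, not the platform's actual schema.

```python
# Hypothetical session table: each row parameterizes one trial phase.
session = [
    {"phase": "start",  "trigger_zone": "home",   "cue": None,
     "open_doors": [1, 2], "reward_ul": 0,  "advance_on": "zone_entry"},
    {"phase": "choice", "trigger_zone": "center", "cue": "vertical_grating",
     "open_doors": [3, 4], "reward_ul": 0,  "advance_on": "zone_entry"},
    {"phase": "reward", "trigger_zone": "goal_A", "cue": None,
     "open_doors": [],     "reward_ul": 20, "advance_on": "timeout"},
]

def next_phase(session, current_idx, event):
    """Advance the phase pointer when the row's trigger condition fires;
    wrap around to start the next trial after the final phase."""
    row = session[current_idx]
    if event == row["advance_on"]:
        return (current_idx + 1) % len(session)
    return current_idx
```

The point of a table like this is that a researcher edits rows in a GUI, not Python, and the engine interprets them at runtime.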

Outcomes

The platform has been in continuous use for years. Other graduate students designed and ran their own experiments on it. One built her own experimental protocols using the session API and general system architecture. I ran 100+ animals through various experimental protocols, refining trial logic and quality control criteria across iterative pilots.

Corner Maze RL Simulation & Dual-Stream CNN Encoder

A virtual replica of the Corner Maze built as a reinforcement learning environment, with PPO agents trained to navigate it using synthetic visual input processed through a custom dual-stream CNN encoder.

Research context

If the flexible decision-making I observed in real animals depends on specific types of sensory information, could the same strategies emerge in a reinforcement learning agent given similar constraints? That's the question this project was built to answer. I created a virtual replica of the physical Corner Maze as a Gymnasium environment and trained PPO agents to navigate it — not as an exercise in RL engineering, but as a computational model of the biological behavior I was studying.

The simulation

Built on Gymnasium/MiniGrid with a configurable session framework that dynamically mirrors real experimental protocols. The environment generates analysis-ready trajectory and event data in the same format as the real behavioral data, enabling direct model-animal comparison.
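Stripped to the Gymnasium reset/step contract, the shape of such an environment looks roughly like this (a dependency-free sketch with illustrative states, actions, and rewards, not the actual MiniGrid-based implementation):

```python
class CornerMazeSketch:
    """Minimal Gymnasium-style environment sketch (reset/step API only).
    Four corner states and a +/-1 move action are illustrative, not the
    real task structure."""

    def __init__(self, goal=3, max_steps=20):
        self.goal, self.max_steps = goal, max_steps

    def reset(self, seed=None):
        self.pos, self.t = 0, 0
        return self.pos, {}                  # observation, info

    def step(self, action):                  # action: -1 = left, +1 = right
        self.pos = (self.pos + action) % 4   # move around four corners
        self.t += 1
        terminated = self.pos == self.goal   # reached the goal corner
        truncated = self.t >= self.max_steps
        reward = 1.0 if terminated else 0.0
        return self.pos, reward, terminated, truncated, {}
```

Matching this five-tuple step signature is what lets standard RL libraries train against the environment unchanged.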

Synthetic visual input pipeline

I modeled the maze environment in Fusion 360 and rendered dual left/right perspective views to approximate rodent visual input. Dataset engineering in PyTorch included preprocessing, deduplication of near-identical images across adjacent positions, and structured data storage for classification training.

CNN encoder design

I prototyped multiple single-stream CNN architectures, evaluating each with UMAP projections and cosine-similarity correlograms to assess how cleanly the learned embeddings separated spatial positions and orientations. The final dual-stream design emerged from this systematic evaluation — two input streams (left eye, right eye) feeding into a shared embedding space. The approach was principled model selection, not architectural innovation.

RL agents

Trained PPO agents using Stable-Baselines3 with state-dependent action masking to enforce physical constraints (can't walk through closed doors) and task constraints (must follow trial phase rules). Benchmarked trained agent policies against real animal trajectory data using standardized performance metrics.
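The masking idea can be sketched independently of the RL library (in practice masked PPO lives in sb3-contrib's MaskablePPO; the door indices and phase rules below are illustrative):

```python
import numpy as np

def action_mask(doors_open, phase_allowed, n_actions=4):
    """State-dependent mask: an action is valid only if its door is
    physically open AND the current trial phase permits it."""
    mask = np.zeros(n_actions, dtype=bool)
    for a in range(n_actions):
        mask[a] = (a in doors_open) and (a in phase_allowed)
    return mask

def masked_argmax(logits, mask):
    """Greedy choice over valid actions only (a stand-in for the masked
    policy head): invalid actions get -inf before the argmax."""
    masked = np.where(mask, logits, -np.inf)
    return int(np.argmax(masked))
```

Masking at the policy level, rather than penalizing invalid moves with negative reward, keeps the agent's exploration inside the same constraint set the real animal faces.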

An Uncertainty Principle for Neural Coding

How does the brain encode two things at once? We showed that neural populations embed position and velocity through separate information channels — firing rates and co-firing rates — subject to a fundamental trade-off analogous to the uncertainty principle in physics.

The question

A neuron's firing rate can encode where an animal is — but the brain also needs to know how fast it's moving and in what direction. How does a single population of neurons encode both position and velocity at the same time? And is there a cost to carrying both signals?

What we found

Neural populations carry two conjugate codes simultaneously. Individual firing rates (what we call the sigma channel) encode position — head direction, location on a track, spatial phase. But the timing relationships between neurons (the sigma-chi channel — co-firing rates across cell pairs) encode velocity. Increasing the precision of one channel degrades the other, analogous to the position-momentum uncertainty principle in physics.

This isn't a loose metaphor. The math formalizes the trade-off: the same spiking activity that gives you a clean position readout necessarily limits how much velocity information you can extract from the population's temporal structure, and vice versa.

My contributions

This project started as my capstone thesis in UCLA's Computational & Systems Biology major — a genuinely underappreciated program that used to be the cybernetics department. I built the computational groundwork:

  • Sigma-chi decoder: I implemented the core decoder framework under my advisor's guidance — the linear readout that separates position information (in firing rates) from velocity information (in co-firing rates) using exponential integration kernels and pseudoinverse regression.
  • Head direction to velocity decoding: I built the simulations showing that a ring of head direction cells encodes angular position in their firing rates and angular velocity in their co-firing patterns. Populations of 12-32 simulated neurons with von Mises tuning, Poisson spike generation, and temporal optimization across ±250 ms latency windows.
  • Grid cells to speed cells: I showed that the same principle applies to a different circuit — grid cells encode spatial position in their firing rates, and sigma-chi units computed from their co-firing rates behave as speed cells. This wasn't assumed from the head direction result; it had to be demonstrated with different tuning geometry and real behavioral speed data.

My advisor Tad Blair extended the framework to theta-modulated phase coding, where the conjugate relationship inverts — position moves into the co-firing channel and velocity into firing rates. That work completed the paper's argument that the uncertainty principle is a general property of neural population codes, not specific to one cell type.

Methods

All simulations in MATLAB. The pipeline runs from behavioral data (real head direction and position recordings from rats) through spike generation (von Mises-tuned Poisson and regular-interval generators), exponential decay kernels that convert spike trains to firing rates and co-firing rates, and linear decoders trained via pseudoinverse regression. Evaluation used circular distance metrics, latency-accuracy trade-off curves, and hold-out validation.
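A compact NumPy sketch of the pipeline's core moves (the kernel time constant, the pair-product definition of co-firing, and the data shapes are illustrative simplifications; the actual analyses were done in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_filter(spikes, tau=10.0):
    """Causal exponential decay kernel: converts binary spike trains
    (time x neurons) into smoothed firing rates -- the sigma channel."""
    rates = np.zeros_like(spikes, dtype=float)
    rates[0] = spikes[0]
    decay = np.exp(-1.0 / tau)
    for t in range(1, spikes.shape[0]):
        rates[t] = decay * rates[t - 1] + spikes[t]
    return rates

def sigma_chi(rates):
    """Sigma-chi channel sketch: one co-firing signal per cell pair,
    here simply the product of the pair's smoothed rates."""
    n = rates.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return np.stack([rates[:, i] * rates[:, j] for i, j in pairs], axis=1)

def fit_decoder(features, target):
    """Linear readout via pseudoinverse regression, as in the paper's
    decoders: weights = pinv(features) @ target."""
    return np.linalg.pinv(features) @ target
```

The key structural point survives the simplification: position is read out from the rate columns, velocity from the pairwise co-firing columns, with the same linear machinery.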

Why it matters

This work formalized something that had been intuited but never proven: that neural populations face a fundamental information-theoretic constraint when encoding multiple variables simultaneously. It connects to active questions in computational neuroscience about efficient coding, population geometry, and how the brain represents continuous variables — and to questions in AI about how artificial networks can encode multiple factors in shared representations without interference.

Publication

Grgurich, R. & Blair, H.T. (2020). An uncertainty principle for neural coding: Conjugate representations of position and velocity are mapped onto firing rates and co-firing rates of neural spike trains. Hippocampus, 30(4), 396-421. DOI: 10.1002/hipo.23197

IntervalsWellnessSync — iOS + watchOS Health Data App

A dual-platform Apple app that syncs HealthKit wellness data to the Intervals.icu training platform, featuring a custom overnight HRV capture pipeline on Apple Watch. Built with AI-assisted development. Currently in TestFlight.

What it is

An iOS and watchOS app that bridges Apple HealthKit with Intervals.icu, a training analysis platform popular with endurance athletes. It syncs 32 wellness metrics daily and includes a custom overnight heart rate variability (HRV) capture system that runs on Apple Watch during sleep.

Why I built it

I'm an avid road cyclist and I use Intervals.icu to track training load and recovery. The platform has a wellness feature, but there was no good way to automatically populate it with Apple Health data. I saw a gap, and I built the tool.

How I built it — AI-assisted development

This project is the clearest demonstration of how I use AI coding tools in practice. I don't write Swift — I directed the entire project using AI-assisted development, making all architectural and design decisions while the AI handled implementation in a language I hadn't written in before.

This isn't "I prompted an agent and shipped whatever it produced." I designed the service-oriented architecture, specified the OAuth 2.0 flow, defined the HRV pipeline stages based on sports science literature, identified the data model patterns, and debugged platform-level issues that required understanding what the code was actually doing. The AI wrote the Swift; I engineered the product.

Architecture

  • Service-oriented design with clear separation between SwiftUI views, singleton services, and value-type models
  • OAuth 2.0 authorization with a Cloudflare Worker handling server-side token exchange (client secret stays off-device, credentials stored in iOS Keychain)
  • SwiftData with App Group shared containers for cross-platform persistence between iOS and watchOS
  • Background sync via BGProcessingTask with smart sleep detection — the system checks HealthKit for recent sleep samples and reschedules if the user is still asleep

The HRV pipeline

The overnight HRV system uses HKWorkoutSession and HKHeartbeatSeriesBuilder to record raw beat-to-beat RR intervals during sleep. A duty-cycle manager alternates between 5-minute capture windows and rest periods tuned by sleep stage, balancing data quality against battery life.

Raw RR intervals go through a three-stage artifact correction pipeline: physiological range filtering (300-2000 ms), successive-difference ectopic beat removal using the Plews method, and IQR-based outlier rejection. Epoch-level quality gating uses established thresholds from sports science literature. The final nightly metric is median Ln(rMSSD) across valid epochs — a standard metric in athletic recovery monitoring.
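The correction stages can be sketched in plain Python (the app itself is Swift; the 25% successive-difference threshold and exact stage ordering here are assumptions for illustration, not the app's tuned parameters):

```python
import math

def clean_rr(rr_ms):
    """Artifact correction sketch for beat-to-beat RR intervals (ms):
    1) physiological range filter (300-2000 ms),
    2) successive-difference ectopic removal (Plews-style; the 25%
       relative threshold is an assumed value),
    3) IQR-based outlier rejection."""
    rr = [x for x in rr_ms if 300 <= x <= 2000]
    kept = []
    for x in rr:
        if not kept or abs(x - kept[-1]) / kept[-1] <= 0.25:
            kept.append(x)
    if len(kept) >= 4:
        s = sorted(kept)
        q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
        iqr = q3 - q1
        kept = [x for x in kept if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]
    return kept

def ln_rmssd(rr_ms):
    """Ln(rMSSD): log of the root mean square of successive differences."""
    diffs = [(b - a) ** 2 for a, b in zip(rr_ms, rr_ms[1:])]
    return math.log(math.sqrt(sum(diffs) / len(diffs)))
```

Epoch-level quality gating then decides which 5-minute windows contribute to the nightly median.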

Platform debugging war stories

  • Discovered an undocumented HealthKit constraint: writing heartbeat series data requires read permission for .heartRateVariabilitySDNN, and missing it causes an uncatchable Objective-C exception with no documentation trail.
  • Found that Cloudflare Worker's Response.redirect() silently rejects custom URL schemes — had to build manual response construction to complete the OAuth callback.
  • Replaced an unconstrained SwiftData query that was loading all records into memory with targeted fetch descriptors, eliminating memory issues during historical backfill.

Publications

Peer-Reviewed

An uncertainty principle for neural coding: Conjugate representations of position and velocity are mapped onto firing rates and co-firing rates of neural spike trains

R. Grgurich & H.T. Blair — Hippocampus, 2020, 30(4), 396-421 — DOI: 10.1002/hipo.23197

We showed that position and velocity information are simultaneously encoded in neural populations through two separate channels — individual firing rates carry position, while correlated firing between neuron pairs carries velocity. Increasing the precision of one channel reduces accuracy in the other, analogous to the uncertainty principle in physics. I built the spike-train simulation and decoding frameworks (Poisson generators, exponential integration kernels, population vector readout) and ran all computational analyses.

Preprints / Forthcoming

Path Integration Promotes Flexible Decision Making During Navigation

R. Grgurich, S. Wang, J. Pimenta, K. Delafraz, H.T. Blair — forthcoming April 2026; preprint on bioRxiv

First-author paper from my dissertation. We showed that rats can form flexible spatial representations and rapidly adapt to changes (reversal learning, novel routes) when they have reliable self-motion cues, even without stable external landmarks. When the relationship between self-motion and landmarks is disrupted, flexible learning collapses. I built the maze platform, designed the experiments, collected all behavioral data, and performed the analyses.

Habit and the hippocampus: Model-based representations without outcome-sensitive control in spatial navigation

S. Wang, R. Grgurich, S. Dong, H.T. Blair — submitted March 2026; preprint on bioRxiv

Collaborative paper investigating the relationship between hippocampal representations and habitual versus goal-directed navigation strategies. I contributed the behavioral platform and experimental data collection.

Contact

Want to talk? I'm looking for roles where understanding how intelligent systems work — biological or artificial — drives what gets built.