
Research

My research interests lie in data-driven methods for the modeling and analysis of physical systems, typically arising from problems in neuroscience or the life sciences. From a theoretical perspective, I am particularly interested in connecting the objects, classes, and structures revealed by data-analytic methods with those classically studied in dynamical systems. Combining data-driven methods with mathematical techniques from dynamical systems can provide a more comprehensive and accurate understanding of complex systems and phenomena: it can improve the accuracy of mathematical models and identify patterns and trends that might not be apparent from the data alone. From an applied perspective, I am interested in how these methods can help us understand cognition, perception, and learning.


Working at the interface of data and dynamics has an added benefit: the research produces (with some effort) pretty pictures that help tell the story of a research problem.

Data-driven dynamics of neural data

Interactions among several core mechanisms in the central and peripheral nervous systems give rise to complicated, nonlinear intrinsic dynamics. A key goal in computational neuroscience is to connect these intrinsic dynamics with theoretical principles as they relate to cognition and behavior.


Non-invasive (e.g., fMRI, EEG) and invasive (e.g., ECoG) recording techniques provide a window into the underlying neural activity; however, these methods do not measure the entire suite of mechanisms contributing to complicated neural dynamics. Instead, they measure outputs of the underlying system: for fMRI, blood oxygen levels; for EEG and ECoG, electrical activity in the brain. The underlying dynamics are potentially obscured by these constrained measurements. To unravel the intrinsic dynamics of neural data, I study and develop methods designed to connect neural measurements with their underlying principles.


During my Ph.D., we studied invasive neural recordings from human epileptic patients listening to ambiguous five-minute auditory stimuli. Throughout each stimulus, subjects reported alternations in their perception of the unchanging stimulus by pressing a button on a screen.


Using data-driven geometric and reduction methods, we identified features intrinsic to the neural data that encoded both the stimulus structure and the subjects' internal percept. The internal percept manifested as a low-dimensional manifold with almost-invariant regions corresponding to the reported percepts. These manifolds were stereotyped across multiple subjects. Our findings provide supporting neural evidence for the attractor-based competition principles often used in computational models of perception.
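
As a loose illustration of this flavor of analysis (not the exact pipeline from the study), the sketch below computes a diffusion-map embedding of a multichannel time series. The data array `X`, the Gaussian kernel, and the median-distance bandwidth are all illustrative placeholders.

```python
# Minimal diffusion-map sketch: embed a multichannel time series into a
# low-dimensional space where slow, almost-invariant states may separate.
# Illustrative only: `X` is a random placeholder for real neural features,
# and the median-bandwidth kernel is one common (not canonical) choice.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))   # placeholder: 500 timepoints, 64 channels

# Pairwise distances -> Gaussian affinity with a median-distance bandwidth.
D = squareform(pdist(X))
K = np.exp(-D**2 / np.median(D) ** 2)

# Row-normalize to obtain a Markov transition matrix over timepoints.
P = K / K.sum(axis=1, keepdims=True)

# Leading non-trivial eigenvectors give the diffusion coordinates.
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
embedding = evecs.real[:, order[1:4]]  # 3-D embedding; order[0] is trivial
print(embedding.shape)                 # (500, 3)
```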


The methods we proposed generalize across recording modalities and apply whenever low-dimensional dynamics are thought to characterize an underlying neural (or insert your domain here) system. We are currently investigating ways to extend these methods to non-standard time-series data like spike trains.

Identifying neuron activity from voltage data (spike sorting)

Neuroscience researchers use extracellular recordings to measure the activity of large neural populations. If the recordings are sampled fast enough (>20,000 Hz), the spiking (action potential) patterns of neurons appear in the measured voltage. Researchers are interested in the spike patterns of individual neurons over the duration of the recording, so a key component of a comprehensive analysis is identifying individual neurons and their spike times from the extracellular voltage. Since multiple neurons are likely to contribute to a given recording, the different spikes identified in a voltage trace must be assigned to different neurons. This process is known as spike sorting.


There is no standardized approach to spike sorting. Some approaches rely on labor-intensive manual sorting, using properties like spike amplitude or spike width to group spikes into clusters; each cluster then represents an individual neuron, and the spikes within it are attributed to that neuron. More automated approaches apply advanced clustering algorithms to low-dimensional projections of the spike waveforms to similarly identify clusters (i.e., neurons).
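
A minimal sketch of the automated route, assuming a single-channel recording with synthetic data in place of real voltage: detect threshold crossings, window out waveforms, project with PCA, and cluster with k-means. The threshold rule, window lengths, and cluster count are illustrative choices, not a prescribed pipeline.

```python
# Minimal spike-sorting sketch: detect threshold crossings, window out
# waveforms, reduce with PCA, and cluster; clusters stand in for neurons.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

fs = 30_000                               # sampling rate (Hz)
rng = np.random.default_rng(1)
voltage = rng.standard_normal(fs * 10)    # placeholder: 10 s of noise

# Inject template "spikes" into the noise so the sketch has events to sort.
template = -8.0 * np.exp(-np.arange(48) / 6.0)
for i in rng.choice(np.arange(100, voltage.size - 100), size=200, replace=False):
    voltage[i:i + 48] += template

# Detect candidate spikes with a robust (median-based) noise threshold.
thresh = -4 * np.median(np.abs(voltage)) / 0.6745
crossings = np.flatnonzero((voltage[1:] < thresh) & (voltage[:-1] >= thresh))
crossings = crossings[(crossings > 16) & (crossings < voltage.size - 32)]

# Window out a short waveform (~1.6 ms at 30 kHz) around each candidate.
waveforms = np.stack([voltage[i - 16:i + 32] for i in crossings])

# Cluster low-dimensional projections; each cluster ~ one putative neuron.
features = PCA(n_components=3).fit_transform(waveforms)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
spike_times = crossings / fs              # spike times in seconds
```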


Since spike measurements require fast sampling, long recordings (days to weeks to months) over many recording sites (tens to hundreds to thousands) create massive files (e.g., ~35 GB for a 75-minute recording over 256 sites). We are investigating an approach that saves only short snippets of voltage near candidate spikes along with the downsampled local field potential (LFP), which measures generalized population activity. Can the LFP be incorporated between spikes to improve existing computational approaches?
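
A minimal sketch of the storage idea, under illustrative assumptions (single site, synthetic voltage, placeholder threshold and rates): keep short snippets around candidate spikes plus a decimated LFP, then compare the footprint against the full trace.

```python
# Sketch of the proposed storage scheme: keep only short voltage snippets
# around candidate spikes, plus a heavily downsampled LFP for the rest.
# Parameters and arrays are illustrative, not from an actual pipeline.
import numpy as np
from scipy.signal import decimate

fs = 30_000
rng = np.random.default_rng(2)
voltage = rng.standard_normal(fs * 60)    # placeholder: 1 min, one site

# Candidate spikes via a robust threshold (as in the sorting sketch above).
thresh = -3.5 * np.median(np.abs(voltage)) / 0.6745
candidates = np.flatnonzero((voltage[1:] < thresh) & (voltage[:-1] >= thresh))

# Save ~2 ms snippets around each candidate...
half = 30
snippets = np.stack([voltage[i - half:i + half]
                     for i in candidates if half <= i < voltage.size - half])

# ...and decimate the full trace to an LFP-rate signal (30 kHz -> 1 kHz),
# staging the factor of 30 as 5 * 6 per scipy's recommendation.
lfp = voltage
for q in (5, 6):
    lfp = decimate(lfp, q)

full_bytes = voltage.nbytes
kept_bytes = snippets.nbytes + lfp.nbytes
print(f"compression: {full_bytes / kept_bytes:.0f}x")
```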

Physics-based machine learning

Many computational models are derived from first principles underlying a given system. Within these models, simplifying assumptions or nonlinear functions are often tuned to fit observed behavior, introducing a freedom of choice for the modeler. When these choices are based on heuristics rather than first principles, they can instead be supplemented with machine learning components that shift the choices from the user to data-driven methodologies.


We studied numerical solutions of the 1D inviscid Euler equations in shock problems. When solved in a Lagrangian frame, the computed solutions admit spurious, non-physical oscillations near shocks. To suppress these artifacts, a traditional approach augments the discretized Euler equations with an artificial term designed to improve the numerical solution. Since this term is artificial, and not derived from the physics of the problem, our approach was to replace it with a learnable function, an artificial neural network (ANN), trained to suppress oscillations near shocks.


Since the ANN was embedded in a numerical scheme, we used differentiable programming to compute the gradients necessary for network training. And since there is no 'true' artificial term to be learned, the ANN was trained according to its impact on the computed numerical solutions.
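
A minimal sketch of this training setup in JAX, under simplifying assumptions: the scheme below is a 1D Burgers-type finite-difference update (a stand-in for the Lagrangian Euler discretization, which is not reproduced here), with a small pointwise network as the learnable artificial term and gradients taken through the unrolled solver. The network architecture, reference solution, and all parameters are illustrative.

```python
# Minimal differentiable-programming sketch: a tiny learnable "artificial
# term" embedded in an unrolled finite-difference scheme, trained by
# differentiating a solution-level loss back through the solver.
import jax
import jax.numpy as jnp

nx, nt = 128, 50
dx, dt = 1.0 / nx, 2e-4

def ann(params, u):
    # One-hidden-layer network acting pointwise on the local gradient of u.
    w1, b1, w2, b2 = params
    du = (jnp.roll(u, -1) - jnp.roll(u, 1)) / (2 * dx)
    h = jnp.tanh(du[:, None] * w1 + b1)       # (nx, hidden)
    return (h @ w2 + b2).squeeze()            # (nx,)

def step(params, u):
    # Upwind update for u_t + (u^2/2)_x = 0, plus the learnable term.
    flux = 0.5 * u**2
    return u + dt * (-(flux - jnp.roll(flux, 1)) / dx + ann(params, u))

def solve(params, u0):
    # Unroll the scheme so gradients can flow through every time step.
    def body(u, _):
        u_next = step(params, u)
        return u_next, u_next
    _, traj = jax.lax.scan(body, u0, None, length=nt)
    return traj

def loss(params, u0, ref):
    # No 'true' artificial term exists, so the network is judged only by
    # its effect on the computed solution relative to a reference.
    return jnp.mean((solve(params, u0) - ref) ** 2)

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
params = [0.1 * jax.random.normal(key1, (1, 16)), jnp.zeros(16),
          0.1 * jax.random.normal(key2, (16, 1)), jnp.zeros(1)]

x = jnp.linspace(0.0, 1.0, nx)
u0 = jnp.where(x < 0.5, 1.0, 0.0)  # step initial condition (a shock)
# Placeholder reference (ANN zeroed out); in practice this would be a
# trusted high-resolution or exact solution.
ref = solve(jax.tree_util.tree_map(jnp.zeros_like, params), u0)

grads = jax.grad(loss)(params, u0, ref)  # gradients through the whole solver
```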


This framework generalizes to other domains (beyond fluid dynamics), and we believe it will be useful when the governing equations for a system are largely known but can be improved with machine learning components tuned to available data.

Fast-slow dynamics in neural systems

Dynamical systems with multiple timescales can be modeled as singularly perturbed ODEs. Multiscale dynamics often arise in neuroscience; an action potential provides a simple example: the neuron's spike initiates and terminates on a fast timescale, and the neuron recovers on a slow timescale. Geometrically, this separation gives rise to slow manifolds. The slow dynamics occur approximately along attracting portions of the manifold until the trajectory escapes, or jumps, upon reaching a repelling portion.


Under certain parameter regimes, fast-slow systems admit canard solutions. These solutions remain close to repelling portions of the slow manifold (an intuitively unexpected behavior!) for a non-trivial length of time before escaping. Such solutions provide mechanisms for the mixed-mode oscillations observed in spiking neurons.


We consider a periodically forced FitzHugh-Nagumo (FHN) model of a spiking neuron. The forcing is a smooth sinusoidal input to the voltage, with parameters controlling its amplitude and frequency. For low-frequency forcing (on the order of the slow timescale), we observe spiking behaviors that organize into Arnold-tongue-like structures. Within these patterns, we observe interactions of saddle-node and folded-node canards with a delayed Hopf bifurcation, giving rise to regimes with distinct spiking patterns.
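
A minimal simulation sketch of a forced FHN system in fast-slow form; the timescale separation, forcing amplitude, and forcing frequency below are illustrative placeholders, not the regimes studied:

```python
# Sketch: periodically forced FitzHugh-Nagumo model in fast-slow form,
# integrated with a stiff-capable solver. Parameter values (eps, forcing
# amplitude a_f, forcing frequency w_f) are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01            # timescale separation: small eps => strongly fast-slow
a_f, w_f = 0.1, 0.05  # low-frequency sinusoidal forcing of the voltage

def fhn(t, y):
    v, w = y
    dv = (v - v**3 / 3 - w + a_f * np.sin(w_f * t)) / eps  # fast voltage
    dw = v + 0.7 - 0.8 * w                                  # slow recovery
    return [dv, dw]

sol = solve_ivp(fhn, (0.0, 500.0), [-1.0, -0.5], method="LSODA",
                dense_output=True, rtol=1e-8, atol=1e-10)
t = np.linspace(0.0, 500.0, 20_000)
v, w = sol.sol(t)
# Sweeping (a_f, w_f) and classifying the spiking pattern at each point
# would trace out the tongue-like regions described above.
```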


The analytic and numerical techniques for studying fast-slow systems include geometric singular perturbation theory, blow-up methods, averaging methods, and continuation. While the aforementioned FHN model is one example of a fast-slow system, the techniques used in its study are general and applicable in theoretical settings where multiple timescales are hypothesized to underlie observed behaviors.

Figure: regions of distinct spiking behaviors as a function of input amplitude and input frequency.

You can find a list of publications and presentations by following the link below. For a more detailed description of my research projects, see my research statement.
