Open Positions: UMass Dartmouth undergraduate and graduate students interested in research opportunities should contact Scott.

The research group currently consists of one PhD student, two master's students, and two undergraduate students. I hope to describe some of their interesting projects and developments here soon!

Gravitational wave data science

Parameterized gravitational wave models, from the Einstein equation to closed-form post-Newtonian approximations, carry an evaluation cost. These costs constitute a major bottleneck for many important applications, which can require thousands or millions of model evaluations. For example, binary black hole sources are parameterized by 8 degrees of freedom: each orbiting black hole has 1 mass and 3 spin degrees of freedom. In turn, each simulation of the Einstein equation often requires weeks on large supercomputing clusters. Yet repeated model evaluations are required by parameter estimation (i.e. inverse) problems. The high simulation cost currently prevents parameter estimation from being carried out with gravitational waveforms computed directly from the Einstein equation. For upcoming gravitational wave experiments, parameter estimation studies are expected to take months for low-fidelity models, while being altogether impractical for more faithful ones.

Markov chain Monte Carlo (MCMC) algorithms are commonly used to sample the posterior probability distribution function. When dealing with high-dimensional problems, however, mapping out the posterior can become prohibitively expensive. Additionally, the posterior is multimodal and the gravitational wave signal is buried in detector noise, rendering otherwise useful computational tricks ineffective. For example, millions of degrees of freedom characterize a typical advanced-detector dataset, and comparably long MCMC chains are necessary to compute parameter means and variances. In total, one should expect model evaluations to generate around one petabyte of data per analysis. Such data science challenges constitute a major computational bottleneck of the experiment and are among the most pressing questions in gravitational wave parameter estimation.
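The Metropolis-Hastings flavor of MCMC referred to above can be sketched in a few lines. The toy one-dimensional Gaussian "posterior", step size, chain length, and burn-in below are all illustrative stand-ins, not anything from an actual gravitational wave analysis:

```python
import numpy as np

def log_posterior(theta, mu=2.0, sigma=0.5):
    """Log of an (unnormalized) toy Gaussian posterior density."""
    return -0.5 * ((theta - mu) / sigma) ** 2

def metropolis_hastings(log_post, theta0, n_steps, step_size, rng):
    """Random-walk Metropolis sampler returning the full chain."""
    chain = np.empty(n_steps)
    theta, logp = theta0, log_post(theta0)
    for i in range(n_steps):
        proposal = theta + step_size * rng.standard_normal()
        logp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        chain[i] = theta
    return chain

rng = np.random.default_rng(0)
chain = metropolis_hastings(log_posterior, 0.0, 20000, 0.8, rng)
burned = chain[5000:]  # discard burn-in
# burned.mean() and burned.std() approximate the posterior mean and width
```

Even on this trivial target the sampler needs thousands of evaluations of the posterior; replacing each of those with a multi-week Einstein equation solve is what makes the full problem intractable without fast models.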

To overcome these challenges I have been involved in a long-term research program to develop and apply surrogate and reduced order modeling tools, both for the rapid evaluation of gravitational waveforms and for rapid parameter estimation using "compressed" likelihoods. These results rely on identifying a low-dimensional representation of a given waveform family, which in turn forms the building blocks for fast, scalable algorithms. For more information about this research please visit these websites on reduced order modeling for gravitational waves, reduced order quadratures for accelerated Bayesian inference and surrogate models.
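The low-dimensional representation idea can be illustrated with a toy waveform family. The damped sinusoids, tolerances, and variable names below are illustrative only; actual surrogate construction (e.g. greedy reduced basis algorithms plus empirical interpolation) is considerably more involved:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 500)        # time samples
freqs = np.linspace(1.0, 2.0, 100)     # toy "parameter space" of the family

# Training set: one waveform per row.
training = np.array([np.exp(-0.1 * t) * np.cos(f * t) for f in freqs])

# The SVD reveals the effective dimensionality of the family.
U, s, Vt = np.linalg.svd(training, full_matrices=False)

# Keep enough basis vectors to capture almost all of the energy.
cum = np.cumsum(s**2) / np.sum(s**2)
n_basis = int(np.searchsorted(cum, 1.0 - 1e-12)) + 1
basis = Vt[:n_basis]                   # orthonormal rows spanning the family

# Projection error for an out-of-sample waveform (frequency not in training set).
h = np.exp(-0.1 * t) * np.cos(1.234 * t)
h_proj = basis.T @ (basis @ h)
err = np.linalg.norm(h - h_proj) / np.linalg.norm(h)
# n_basis is far smaller than 500, yet err is tiny: the family is
# effectively low-dimensional, which is what surrogate methods exploit.
```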

Computational relativity with discontinuous Galerkin methods

To handle a diverse class of astrophysical sources in an efficient, robust and accurate manner, the development of advanced and flexible methods is paramount. Discontinuous Galerkin (DG) methods have the potential to offer unique benefits for a wide class of problems. We have developed DG methods for the first-order and second-order BSSN system (a numerically well-suited formulation of the Einstein equation) which achieve spectral accuracy (i.e. exponential convergence of numerical errors) and long-time stability. These implementations should provide a starting point for treating both fluid and spacetime variables with the same high-order accurate DG method.
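Spectral accuracy means that, for smooth solutions, numerical errors decay exponentially as the polynomial degree grows. A minimal, generic illustration of this behavior (polynomial interpolation of a smooth function at Chebyshev-Lobatto nodes, not our DG/BSSN solver) might look like:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebfit, chebval

def max_interp_error(n, f):
    """Max error of the degree-n Chebyshev interpolant of f on [-1, 1]."""
    nodes = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Lobatto points
    coeffs = chebfit(nodes, f(nodes), n)           # degree-n interpolant
    fine = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(chebval(fine, coeffs) - f(fine)))

f = lambda x: np.exp(np.sin(np.pi * x))            # smooth (analytic) function
errors = [max_interp_error(n, f) for n in (4, 8, 16, 32)]
# errors drop by orders of magnitude each time the degree doubles,
# in contrast to the fixed algebraic rates of low-order finite differencing
```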

In the context of extreme mass ratio (binary black hole) inspiral systems, the solutions are forced by distributional (i.e. Dirac delta function) source terms. We proposed and implemented a DG scheme that exactly represents such distributional solutions through a modification of the relevant numerical flux terms, and demonstrated that the method maintains spectral accuracy even at the location of the Dirac delta distribution. Accurate numerical modeling is crucial for incorporating important physical effects neglected by linearization, such as the gravitational self-force.
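The reason a weak (Galerkin) formulation can accommodate a Dirac delta exactly is that testing δ(x − x0) against a polynomial basis function simply evaluates that basis function at x0. A sketch on a single reference element, with illustrative degree and source location (this is not the modified-flux scheme itself):

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

N = 8     # polynomial degree on the reference element [-1, 1]
x0 = 0.3  # location of the delta source (illustrative)

def phat(j, x):
    """Orthonormal Legendre polynomial of degree j on [-1, 1]."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt((2 * j + 1) / 2) * legval(x, c)

# Exact source vector: S_j = integral of phat_j(x) * delta(x - x0) = phat_j(x0).
S = np.array([phat(j, x0) for j in range(N + 1)])

# Check: pairing the projected delta with any polynomial f of degree <= N
# reproduces f(x0) exactly (up to roundoff).
nodes, weights = leggauss(N + 1)                  # Gauss-Legendre quadrature
f = lambda x: 3 * x**3 - x + 0.5                  # arbitrary test polynomial
fhat = np.array([np.sum(weights * phat(j, nodes) * f(nodes))
                 for j in range(N + 1)])
# S @ fhat equals f(x0): the delta's moments are captured with no smoothing
```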

Additional details can be found in my dissertation "Applications of Discontinuous Galerkin Methods to Computational General Relativity".

Asymptotic waveform evaluation

Time-domain wave simulation on a finite computational domain introduces an outer boundary beyond which the solution is unknown. Whether for acoustic, electromagnetic, or gravitational waves, one often seeks to identify the asymptotic signal radiated to the far field using only knowledge of the solution on this truncated, spatially finite computational domain. To solve this challenge in the context of black hole perturbation theory, we analytically identify and numerically construct a kernel which, when convolved with data recorded at a fixed radial value, yields the asymptotic signal. We applied the construction to the linearized Einstein equation and computed far-field (scri) waveforms from arbitrarily short computational domains with better than 10 digits of accuracy. Further details on this method, as well as the publicly available numerical kernels, can be found on the kernel webpage.
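Schematically, the final step is a discrete convolution of the recorded boundary data with the kernel. The exponential "kernel" and toy signal below are placeholders, not the actual black hole perturbation kernels from the webpage:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)

# Toy signal recorded at a fixed finite radius (placeholder).
h_boundary = np.sin(2.0 * t) * np.exp(-0.05 * t)

# Toy causal kernel, with dt folded in to approximate the convolution integral.
K = np.exp(-t) * dt

# Discrete convolution (K * h)(t); truncating 'full' mode to the original
# time grid keeps only the causal part of the output.
h_farfield = np.convolve(K, h_boundary)[: t.size]
```

In the actual method the kernel is precomputed once per physical configuration, after which extracting the asymptotic signal from any boundary time series is essentially this cheap convolution.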

Errors due to incorrect initial data specification

Physically appropriate initial data is often unknown. This is the case with binary black hole simulations, and in the linearized regime trivial initial data is typically supplied. It is commonly argued that this choice leads to a "burst" of junk radiation which eventually propagates off the computational domain, after which only the influence of the source term is important. While perhaps intuitive, this explanation is incomplete. Using an accurate time-domain DG solver, we were able to better understand systematic modeling errors driven by incorrect initial data. Some of the main observations from these works include: i) convergence to the analytic solution no longer follows the rates suggested by standard numerical analysis (Ref), ii) unphysical, static, constraint-violating Jost solutions may appear in some formulations (Ref), and iii) solutions which violate Huygens' principle are notably problematic for far-field wave computations, where the late-time behavior is characterized by more slowly decaying Price tails (Ref).

To avoid these issues we proposed a simple, smooth “switching on” of the source terms, together with the diagnostics necessary to claim that a physically correct solution has been achieved. When including additional physical effects or performing high-accuracy comparisons between techniques, improved modeling will increasingly require the identification and reduction of all error sources, especially systematic numerical ones of the type listed above.
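One standard way to realize such a smooth switch is the C-infinity ramp built from exp(−1/x); whether this matches the exact profile used in the papers is an assumption on my part, and the window length T below is illustrative:

```python
import numpy as np

def bump(x):
    """exp(-1/x) for x > 0, extended smoothly by 0 for x <= 0."""
    # The clamp avoids overflow when evaluating the masked-out branch.
    return np.where(x > 0.0, np.exp(-1.0 / np.maximum(x, 1e-300)), 0.0)

def switch(tau):
    """C-infinity ramp: 0 for tau <= 0, 1 for tau >= 1, monotone between."""
    return bump(tau) / (bump(tau) + bump(1.0 - tau))

# Multiplying the source term by switch(t / T) turns it on gradually over a
# window of length T instead of abruptly at t = 0.
t = np.linspace(-1.0, 3.0, 401)
T = 2.0
s = switch(t / T)
# s is identically 0 before t = 0 and identically 1 after t = T,
# with all derivatives continuous, so no spurious burst is injected
```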