At the largest scales, galaxies cluster together in nodes connected by filaments, like a cosmic web. At those nodes reside galaxy clusters—the largest gravitationally bound structures in the universe. Their distribution—both in mass and in space—depends on the contents of our universe: its cosmology. By estimating cluster mass, we can then measure cosmological parameters, like the fraction of energy in our universe that is matter.
While nearby clusters can be weighed using velocity dispersion or gravitational lensing, we can only use proxies to estimate masses of distant clusters. Optical richness counts galaxies in a cluster: the count of red, quiescent galaxies scales with virialized cluster mass while the count of blue, star-forming galaxies scales with cluster accretion rate. Traditional richness estimators use hard cuts in color to exclude background galaxies along with the blue population, focusing only on the brightest quiescent members of the cluster for this mass proxy.
I use Red Dragon—a redshift-evolving Gaussian mixture model—to jointly characterize both red and blue galaxy populations. Red Dragon’s probabilistic modeling of the populations reduces systematic uncertainties in optical mass proxies, thereby improving cluster-based cosmological measurements. As Red Dragon is fully parameterized, it can be interpreted, revealing insights into cluster growth, virialization, and galaxy evolution in the cosmic web. This study of galaxy populations bridges astrophysics and cosmology, deepening our understanding of structure formation in the universe.
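The core idea — replacing hard color cuts with probabilistic membership in red and blue populations — can be illustrated with a toy two-component Gaussian mixture. This is a hypothetical sketch on synthetic colors at a single redshift, not Red Dragon itself (which additionally evolves its mixture parameters with redshift):

```python
# Hedged sketch: a two-component Gaussian mixture separating red-sequence
# and blue-cloud galaxies in color space. All numbers below (colors, counts,
# scatters) are invented for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic g-r colors: a tight red sequence and a broader blue cloud.
red = rng.normal(loc=1.5, scale=0.05, size=300)
blue = rng.normal(loc=0.9, scale=0.20, size=200)
colors = np.concatenate([red, blue]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(colors)

# Membership probabilities replace a hard color cut: each galaxy
# contributes fractionally to the red and blue richness estimates.
probs = gmm.predict_proba(colors)
red_idx = int(np.argmax(gmm.means_))        # component with the redder mean
red_richness = probs[:, red_idx].sum()      # soft count of red galaxies
blue_richness = probs[:, 1 - red_idx].sum() # soft count of blue galaxies
print(f"soft red richness:  {red_richness:.1f}")
print(f"soft blue richness: {blue_richness:.1f}")
```

The soft counts degrade gracefully where the two populations overlap in color, which is where a hard cut misassigns galaxies and inflates the scatter in the mass proxy.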
Selected Publications
Audible sound is commonly employed for acoustic excitation to assess and monitor structural health, as well as to replicate the acoustic environmental conditions that a structure might experience in use. Achieving the required amplitude and specified spectral shape is essential to meet industry standards. This study implements a sound focusing method called time reversal (TR) to achieve higher amplitude levels than simply broadcasting noise. The paper seeks to understand the spatial dependence of focusing long-duration noise signals using TR to increase the spatial extent of the focus. Both one- and two-dimensional measurements are performed and analyzed using TR with noise, alongside traditional noise broadcasting without TR. The variables explored include the density of foci for a given length/area, the density of foci for varying length with a fixed number of foci, and the frequency content and bandwidth of the noise. A use case scenario is presented that utilizes a single-point focus with an upper frequency limit to maintain the desired spectral shape while achieving higher focusing amplitudes.
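The central mechanism of time reversal — broadcasting the time-reversed impulse response so that all propagation paths arrive in phase at the measurement point — can be sketched numerically. The reverberant channel below is synthetic; the rig, transducers, and noise signals of the study are not modeled:

```python
# Minimal sketch of the time-reversal (TR) focusing idea: playing the
# time-reversed impulse response back through the same channel yields the
# channel's autocorrelation at the focus, which peaks at zero lag.
import numpy as np

rng = np.random.default_rng(1)
n = 2048
# Synthetic reverberant impulse response: decaying random reflections.
h = rng.standard_normal(n) * np.exp(-np.arange(n) / 300.0)

# TR broadcast: play h reversed in time through the same channel.
tr_signal = h[::-1]
received = np.convolve(tr_signal, h)     # what the focus point hears
peak = int(np.argmax(np.abs(received)))

# The autocorrelation is maximal at zero lag (index n-1 here), i.e. all
# multipath arrivals stack coherently at one instant.
print(peak == n - 1)                      # True: energy focuses in time
focus_gain = np.abs(received[peak]) / np.abs(received).std()
print(f"focal peak is {focus_gain:.0f} sigma above the background")
```

The same coherent stacking is what lets TR exceed the amplitude of a plain noise broadcast; the paper's contribution concerns how that gain behaves over extended spatial regions and noise bandwidths.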
A general method for designing proteins with high conformational specificity is desirable for a variety of applications, including enzyme design and drug target redesign. To assess the ability of algorithms to design for conformational specificity, we introduce MotifDiv, a benchmark dataset of 200 conformational specificity design challenges. We also introduce CSDesign, an algorithm for designing proteins with high preference for a target conformation over an alternate conformation. On the MotifDiv benchmark, CSDesign designs protein sequences that are predicted to prefer the target conformation. We apply this method in vitro to redesign human MAP kinase ERK2, an enzyme with active and inactive conformations. Of two designs for the active conformation, one increased activity sufficiently for the enzyme to remain active in the absence of activating phosphorylations, a property not present in the wild-type protein.
This paper presents the first study comparing the spectra of a lab-scale afterburning rig, operating at a relevant total temperature ratio of 6 typical of Full-Scale (FS) afterburning jets, against Tam's similarity model. The spectral characteristics of FS afterburning jets were successfully reproduced at the lab scale. Far-field acoustic data at 63 diameters from the nozzle exit were used to fit the similarity spectra, with priority placed on achieving the best fit for the overall shape of the measured spectra while ensuring smooth growth or decay of the peak frequencies. The transition region, delineated by a narrow range of microphone locations from 90° to 107.5°, required a combination of fine-scale similarity spectra (FSS) and large-scale similarity spectra (LSS) to better model both the peaks and roll-offs of the measured spectra. Only LSS was needed to model the spectra near the region of maximum overall sound pressure level radiation, whereas sideline angles needed only FSS. The similarity model was unable to accurately predict the double peaks observed at select angles. Additionally, a mismatch in the high-frequency slope between the similarity model and the measured spectra became apparent outside the region of peak radiation.
A central problem in data science is to use potentially noisy samples of an unknown function to predict function values for unseen inputs. In classical statistics, the predictive error is understood as a trade-off between the bias and the variance that balances model simplicity with its ability to fit complex functions. However, overparametrized models exhibit counterintuitive behaviors, such as “double descent,” in which models of increasing complexity exhibit decreasing generalization error. Other models may exhibit more complicated patterns of predictive error with multiple peaks and valleys. Neither double descent nor multiple descent phenomena are well explained by the bias-variance decomposition. We introduce a decomposition that we call the generalized aliasing decomposition (GAD) to explain the relationship between predictive performance and model complexity. The GAD decomposes the predictive error into three parts: (1) model insufficiency, which dominates when the number of parameters is much smaller than the number of data points, (2) data insufficiency, which dominates when the number of parameters is much greater than the number of data points, and (3) generalized aliasing, which dominates between these two extremes. We demonstrate the applicability of the GAD to diverse applications, including random feature models from machine learning, Fourier transforms from signal processing, solution methods for differential equations, and prediction of formation enthalpy in materials discovery. Because key components of the generalized aliasing decomposition can be explicitly calculated from the relationship between model class and samples without seeing any data labels, it can answer questions related to experimental design and model selection before collecting data or performing experiments. We further demonstrate this approach on several examples and discuss implications for predictive modeling and data science.
We report on roughly 16 yr of photometric monitoring of the trans-Neptunian binary system (120347) Salacia–Actaea, which provides significant evidence that Salacia and Actaea are tidally locked to the mutual orbital period in a fully synchronous configuration. The orbit of Actaea is updated, followed by a Lomb–Scargle periodogram analysis of the ground-based photometry, which reveals a synodic period similar to the orbital period and a peak-to-peak lightcurve amplitude of Δm = 0.0900 ± 0.0036 mag (1σ uncertainty). Incorporating archival Hubble Space Telescope photometry that resolves each component, we argue that the periodicity in the unresolved data is driven by a longitudinally varying surface morphology on Salacia, and we derive a sidereal rotation period that is within 1σ of the mutual orbital period. A rudimentary tidal evolution model is invoked that suggests synchronization occurred within 1.1 Gyr after Actaea was captured/formed.
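The kind of Lomb–Scargle period search applied to the unresolved ground-based photometry can be sketched on synthetic data: recover a known rotation period from irregularly sampled, noisy magnitudes. The period, amplitude, and sampling below are made-up inputs, not the Salacia–Actaea measurements:

```python
# Hedged sketch: Lomb-Scargle periodogram on an irregularly sampled,
# single-peaked synthetic lightcurve. All values are illustrative.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
period_true = 6.09                              # days (hypothetical)
t = np.sort(rng.uniform(0, 500, 400))           # irregular observation times
mags = (0.045 * np.sin(2 * np.pi * t / period_true)
        + 0.01 * rng.standard_normal(t.size))   # signal + photometric noise

# Evaluate the periodogram on a grid of trial periods.
periods = np.linspace(2.0, 20.0, 5000)
omega = 2 * np.pi / periods                     # angular frequencies
power = lombscargle(t, mags - mags.mean(), omega)  # expects zero-mean data
period_best = periods[np.argmax(power)]
print(f"recovered period: {period_best:.2f} d")
```

Because the sampling is irregular, the periodogram suppresses the aliasing that would plague an evenly sampled FFT, which is why this estimator is standard for sparse, multi-year lightcurves like the one analyzed here.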
In response to the 2020 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) Feasibility Study was launched as an international collaboration hosted by CERN. This report describes the FCC integrated programme, which consists of two stages: an electron-positron collider (FCC-ee) in the first phase, serving as a high-luminosity Higgs, top, and electroweak factory; followed by a proton-proton collider (FCC-hh) at the energy frontier in the second phase. The FCC-ee is designed to operate at four key centre-of-mass energies: the Z pole, the WW pair production threshold, the ZH production peak, and the top/anti-top production threshold—each delivering the highest possible luminosities to four experiments. Over 15 years of operation, FCC-ee will produce more than 6 trillion Z bosons, 200 million WW pairs, nearly 3 million Higgs bosons, and 2 million top anti-top pairs. Precise energy calibration at the Z pole and WW threshold will be achieved through frequent resonant depolarisation of pilot bunches. The sequence of operation modes between the Z, WW, and ZH substages remains flexible. The FCC-hh will operate at a centre-of-mass energy of approximately 85 TeV—nearly an order of magnitude higher than the LHC—and is designed to deliver 5 to 10 times the integrated luminosity of the upcoming High-Luminosity LHC. Its mass reach for direct discovery extends to several tens of TeV. In addition to proton-proton collisions, the FCC-hh is capable of supporting ion-ion, ion-proton, and lepton-hadron collision modes. This second volume of the Feasibility Study Report presents the complete design of the FCC-ee collider, its operation and staging strategy, the full-energy booster and injector complex, required accelerator technologies, safety concepts, and technical infrastructure. It also includes the design of the FCC-hh hadron collider, development of high-field magnets, hadron injector options, and key technical systems for FCC-hh.