Abstract

Materials discovery is bottlenecked by limited computational resources. One common method of computing a material's properties, density functional theory (DFT), requires that the electronic band structure of the material be integrated. This integration is very difficult to perform for metals because of the discontinuities introduced by the Fermi level. Currently the most commonly used integration technique is a simple Riemann sum. This method requires very dense sampling because the integral converges slowly. Significant effort has gone into finding integration techniques that converge more quickly. New integration techniques that rely on non-uniform sampling have been proposed, but they cannot take full advantage of conventional symmetry reduction techniques. To ensure optimal symmetry reduction, calculations should be performed in the symmetrically irreducible Brillouin zone (IBZ). We present an algorithm for finding the IBZ of an arbitrary lattice.
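
The payoff of symmetry reduction can be sketched with a toy example. The 2D square lattice, the 4x4 mesh, and the explicit operation list below are illustrative stand-ins (the thesis algorithm handles arbitrary lattices); this is a minimal sketch of folding a uniform k-mesh into symmetry-unique points with weights:

```python
# Sketch: symmetry reduction of a uniform k-point mesh, the idea behind
# integrating over the irreducible Brillouin zone (IBZ). A 2D square
# lattice with its 8 point-group operations stands in for a real crystal;
# the mesh size and operation list are illustrative choices only.

def square_point_group():
    """The 8 symmetry operations of a square as 2x2 integer matrices."""
    return [((1, 0), (0, 1)), ((0, -1), (1, 0)),      # rotations 0, 90
            ((-1, 0), (0, -1)), ((0, 1), (-1, 0)),    # rotations 180, 270
            ((1, 0), (0, -1)), ((-1, 0), (0, 1)),     # axis mirrors
            ((0, 1), (1, 0)), ((0, -1), (-1, 0))]     # diagonal mirrors

def reduce_mesh(n):
    """Fold an n x n fractional k-mesh into symmetry-unique points.

    Returns {representative: weight}; weights sum to n*n, so a weighted
    sum over the reduced set reproduces the full-mesh Riemann sum."""
    weights = {}
    for i in range(n):
        for j in range(n):
            # orbit of (i, j) under the point group, modulo the mesh
            orbit = set()
            for (r1, r2) in square_point_group():
                ii = (r1[0] * i + r1[1] * j) % n
                jj = (r2[0] * i + r2[1] * j) % n
                orbit.add((ii, jj))
            rep = min(orbit)  # canonical representative of the orbit
            weights[rep] = weights.get(rep, 0) + 1
    return weights

w = reduce_mesh(4)
print(len(w), sum(w.values()))  # 6 unique points carry all 16 mesh points
```

Each eigenproblem is solved only at the 6 representatives instead of all 16 mesh points, while the weights keep the integral unchanged; for real 3D lattices the savings factor approaches the order of the point group.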

Abstract

One important part of density-functional theory (DFT) calculations is the numerical integral of the electronic band structure. Unfortunately, this critical step of a DFT simulation is the most computationally expensive, because each $k$-point (sampling point) requires solving a large eigenproblem. For metals, almost all of the error in the band energy integral comes from misrepresenting the Fermi surface, so the most important part of any integration technique is approximating the Fermi surface correctly. Current DFT codes approximate the bands by sampling them on a uniform mesh and using each sampling point to perform a zeroth-order interpolation, approximating the area around each sampling point as a constant function. The integration of the approximated bands is therefore reduced to simple Riemann sums. This zeroth-order interpolation represents the bands very poorly, making an accurate approximation of the Fermi surface impossible. I present an integration scheme consisting of the quadratic interpolation of the electronic bands using Bézier triangles. The Fermi energy can then be continuously varied in order to best represent the Fermi surface, and thereby achieve the same accuracy with fewer $k$-points. I also explore further improvement by using an adaptive mesh refinement technique in those integration regions which contain the Fermi surface. Preliminary results suggest that 1 meV accuracy can be achieved using ~$10\times$ fewer $k$-points.
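
As a rough 1D analogue of the quadratic-interpolation idea (a model band $e(k)=\cos k$ and ordinary quadratic polynomials stand in for real bands and the Bézier-triangle patches of the thesis; both are illustrative assumptions), piecewise-quadratic interpolation represents a band far better than the zeroth-order approximation at the same sampling density:

```python
# Sketch: why zeroth-order interpolation represents a band poorly.
# Compares the worst-case interpolation error of piecewise-constant vs
# piecewise-quadratic approximations of a smooth 1D model band.
import math

def band(k):
    return math.cos(k)  # model band; a stand-in for a real band

def zeroth(k0, h, k):
    # piecewise constant: the sample at the cell midpoint
    return band(k0 + 0.5 * h)

def quadratic(k0, h, k):
    # parabola through samples at the cell's left edge, midpoint, right edge
    e0, e1, e2 = band(k0), band(k0 + 0.5 * h), band(k0 + h)
    t = (k - k0) / h  # map the cell to [0, 1]
    # Lagrange form on nodes t = 0, 1/2, 1
    return (e0 * 2 * (t - 0.5) * (t - 1)
            - e1 * 4 * t * (t - 1)
            + e2 * 2 * t * (t - 0.5))

def max_error(n, interp):
    """Max |interpolant - band| on a fine probe grid, n cells on [0, pi]."""
    h = math.pi / n
    worst = 0.0
    for cell in range(n):
        k0 = cell * h
        for t in [p / 50.0 for p in range(51)]:
            k = k0 + t * h
            worst = max(worst, abs(interp(k0, h, k) - band(k)))
    return worst

print(max_error(8, zeroth), max_error(8, quadratic))
```

With only 8 cells the quadratic error is orders of magnitude below the zeroth-order error, which is what allows the Fermi surface crossing to be located accurately from far fewer samples.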

Parker Hamilton (Senior Thesis, April 2020,
Advisor: Gus Hart
)

Abstract

Superalloys are a vital material in our technological infrastructure because of their high operating temperatures. Ni-based superalloys are often used in turbines for engines and energy production because they exhibit an FCC $\gamma$ phase with a $\gamma^{\,\prime}$ phase precipitate that reinforces the lattice structure and maintains high mechanical strength at elevated operating temperatures. Co-based superalloys do not exhibit this same phase, but they are highly corrosion resistant and, as a result, generally have a longer operating lifespan than Ni-based superalloys. Experimental work has shown ternary Co-based superalloys with a metastable $\gamma$-$\gamma^{\,\prime}$ phase, but it separated into other phases during heating. Density functional theory (DFT) allows the energy of an atomic configuration to be calculated from first principles, but DFT calculations are inherently performed at 0 K. We use a method called nested sampling, along with a machine-learned interatomic potential, to derive the high-temperature phase behavior of a ternary Co-Al-W alloy. The nested sampling method overcomes the common sampling problem of becoming stuck in local minima. The machine-learned interatomic potential, a moment tensor potential, is trained on DFT calculations, leveraging the accuracy of first-principles calculations. The nested sampling method uses this interatomic potential to sample the energy of configurations in the Co-Al-W system and thereby approximate the thermodynamic partition function. The heat capacity can be derived from the partition function and used to find phase transition temperatures. These transition temperatures, computed across many compositions, can then be used to build a full phase diagram.
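
The partition-function-to-heat-capacity chain can be illustrated on a toy spectrum (a two-level system with an arbitrary gap, in units where $k_B = 1$; the thesis instead estimates the spectrum of Co-Al-W configurations by nested sampling):

```python
# Sketch: locating a "transition" temperature as a peak in the heat
# capacity derived from the canonical partition function. The two-level
# spectrum and its gap are invented for illustration.
import math

def heat_capacity(energies, t):
    """C = (<E^2> - <E>^2) / T^2 from the canonical partition function."""
    beta = 1.0 / t
    z = sum(math.exp(-beta * e) for e in energies)
    e_avg = sum(e * math.exp(-beta * e) for e in energies) / z
    e2_avg = sum(e * e * math.exp(-beta * e) for e in energies) / z
    return (e2_avg - e_avg ** 2) / t ** 2

energies = [0.0, 1.0]           # two-level toy system, gap = 1
ts = [0.05 * n for n in range(1, 101)]
cs = [heat_capacity(energies, t) for t in ts]
t_peak = ts[cs.index(max(cs))]  # a peak in C(T) marks the transition
print(round(t_peak, 2))
```

For this spectrum the peak is the well-known Schottky anomaly near $T \approx 0.42\,\Delta$; in the thesis the same peak-finding step, repeated over compositions, yields the phase-diagram boundaries.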

Abstract

Grain boundaries (GBs), the interfaces between individual crystals in metals, influence many of the physical properties observed in metals, such as corrosion, electrical conductivity, and strength. I aim to map the metastable states, states that are stable but not at the lowest energy, of specific GB subsets in which the macroscopic parameters are kept constant. Using machine learning, I cluster these GB subsets to identify candidates for the unique metastable states. Applying this technique to 1797 Σ5-(012) symmetric twist GBs, I found that the optimal number of clusters depended on the representation of the GB that was used. While these clusters cannot be proven to correspond to the metastable states, analyzing them with Principal Component Analysis and by energy gives confidence that they do. With knowledge of these metastable states, material design and GB engineering, the deliberate manipulation of GBs to improve properties, can be improved.
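
A hedged sketch of the clustering step: plain k-means (Lloyd's algorithm) on made-up two-dimensional "descriptors". The real work clusters high-dimensional representations of 1797 GBs, and as noted above the choice of representation matters; everything below is illustrative.

```python
# Sketch: clustering grain-boundary descriptor vectors to look for
# metastable groupings. The 2-D points, fixed starting centroids, and
# k = 2 are invented for illustration.

def kmeans(points, centroids, iters=20):
    """Plain Lloyd's algorithm; returns (labels, centroids)."""
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(len(centroids)),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centroids[c])))
                  for pt in points]
        # move each centroid to the mean of its members
        for c in range(len(centroids)):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels, centroids

# two well-separated fake "metastable states" in descriptor space
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),       # state A
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]       # state B
labels, cents = kmeans(points, centroids=[[0.0, 0.0], [1.0, 1.0]])
print(labels)
```

In practice the open question is choosing k, which is exactly the "optimal number of clusters" issue the abstract raises; criteria such as silhouette scores are commonly used for that choice.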

Wiley Spencer Morgan (PhD Dissertation, April 2019,
Advisor: Gus Hart
)

Abstract

Predicting new materials by virtually screening large numbers of hypothetical materials on supercomputers has enabled materials discovery at an accelerated pace. However, the innumerable possible hypothetical materials necessitate the development of faster computational methods for screening materials, reducing the time to discovery. In this thesis, I aim to understand and apply two computational methods for materials prediction. The first is a computational high-throughput study of superalloys. Superalloys are materials which exhibit high-temperature strength. A combinatorial high-throughput search across 2224 ternary alloy systems revealed 102 potential superalloys, of which 37 are brand new, all of which we patented. The second is a machine-learning (ML) approach aimed at understanding the consistency among five different state-of-the-art machine-learning models in predicting the formation enthalpy of 10 different binary alloys. The study revealed that although the five ML models approach the problem differently, their predictions are consistent with one another, and all are capable of predicting multiple materials simultaneously. My contribution to both projects included conceiving the idea, performing calculations, interpreting the results, and writing significant portions of the two journal articles published on each project. Follow-up work for both computational approaches, their impact, and the future outlook of materials prediction are also presented.

Abstract

Steel is an incredibly valuable, versatile material. Unfortunately, high-strength steels are vulnerable to hydrogen embrittlement, the degradation of a crystalline-structured material when too much hydrogen is absorbed. When enough hydrogen builds up, it can lead to early and unexpected failure of the material, which is both costly and dangerous. Recent decades have seen a surge of efforts to solve this problem, but a general, viable solution has yet to be found. In this paper, we continue developing a new method that uses machine learning techniques in conjunction with atomic environment representations to predict global properties based on local atomic positions. Steel is composed mostly of the base element iron, and the defects in the iron crystal structure are where hydrogen prefers to adsorb. By developing a technique that allows us to understand the global properties in these areas, future research can predict where the hydrogen will adsorb, so that we can find another element that will non-deleteriously adsorb to those same sites, blocking the hydrogen and preventing hydrogen embrittlement. This methodology can further be applied to any crystalline material, allowing engineers to understand the basic building blocks of what gives a material its properties. Its application will help improve the versatility of materials manufacturing, allowing manufacturers to precisely design a material with whatever properties a customer desires, enhance the properties of existing materials, and stabilize materials that so far exist only in theory.

Jake Hansen (Capstone, July 2017,
Advisor: Gus Hart
)

Abstract

In computational materials science, identifying new stable phases is a primary strategy for developing new materials. Most nickel-based superalloys exhibit the so-called $\gamma^{\,\prime}$ phase, which allows precipitate hardening to occur. This hardening is what gives superalloys their good mechanical strength at high temperatures. We have developed a framework that automatically generates convex hulls for ternary intermetallic systems. This framework allows us to examine candidate ternary metallic alloys against existing materials science data, effectively letting us search part of materials space for new superalloys. Using this framework, we examined 2224 systems and identified 37 potential superalloys that have not been reported in the experimental literature. These superalloys are shown to have better properties than candidates proposed in the experimental literature. High-performance computing has the potential to revolutionize the way materials science is done, hastening the discovery of new materials. New materials such as the 37 new superalloys discussed here have the potential to revolutionize the aviation and power generation industries by enabling the creation of more efficient engines.
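
A minimal sketch of the stability test behind a convex hull: here a binary (one-dimensional composition) hull with invented enthalpy data, whereas the framework described above builds ternary (two-dimensional composition) hulls.

```python
# Sketch: a phase is predicted stable if it lies on the lower convex hull
# of formation enthalpy vs composition. Binary simplification of the
# ternary hulls in the text; the (concentration, enthalpy) data are
# invented for illustration.

def lower_hull(points):
    """Lower convex hull of (x, h) points via Andrew's monotone chain."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # pop points that would make the chain turn upward (non-convex)
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# candidate phases: (concentration of B in A-B, formation enthalpy, eV/atom)
phases = [(0.0, 0.0), (0.25, -0.10), (0.33, -0.05), (0.5, -0.30),
          (0.75, -0.08), (1.0, 0.0)]
stable = lower_hull(phases)
print(stable)  # only phases on the hull are predicted stable
```

Phases above the hull, such as the 0.25 and 0.75 compositions here, decompose into mixtures of the hull phases; the automated framework repeats this test across thousands of candidate structures per system.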

Abstract

For centuries, scientists have dreamed of creating materials by design. Rather than discovery by accident, bespoke materials could be tailored to fulfill specific technological needs. Quantum theory and computational methods are essentially equal to the task, and computational power is the new bottleneck. Machine learning has the potential to solve that problem by approximating material behavior at multiple length scales. A full end-to-end solution must allow us to approximate the quantum mechanics, microstructure and engineering tasks well enough to be predictive in the real world. In this dissertation, I present algorithms and methodology to address some of these problems at various length scales. In the realm of enumeration, systems with many degrees of freedom such as high-entropy alloys may contain prohibitively many unique possibilities so that enumerating all of them would exhaust available compute memory. One possible way to address this problem is to know in advance how many possibilities there are so that the user can reduce their search space by restricting the occupation of certain lattice sites. Although tools to calculate this number were available, none performed well for very large systems and none could easily be integrated into low-level languages for use in existing scientific codes. I present an algorithm to solve these problems. Testing the robustness of machine-learned models is an essential component in any materials discovery or optimization application. While it is customary to perform a small number of system-specific tests to validate an approach, this may be insufficient in many cases. In particular, for Cluster Expansion models, the expansion may not converge quickly enough to be useful and reliable. Although the method has been used for decades, a rigorous investigation across many systems to determine when CE "breaks" was still lacking. 
This dissertation includes this investigation along with heuristics that use only a small training database to predict whether a model is worth pursuing in detail. To be useful, computational materials discovery must lead to experimental validation. However, experiments are difficult due to sample purity, environmental effects, and a host of other considerations. In many cases, it is difficult to connect theory to experiment because computation is deterministic. By combining advanced group theory with machine learning, we created a new tool that bridges the gap between experiment and theory so that experimental and computed phase diagrams can be harmonized. Grain boundaries in real materials control many important material properties such as corrosion, thermal conductivity, and creep. Because of their high dimensionality, learning the underlying physics needed to optimize grain boundaries is extremely complex. By leveraging a mathematically rigorous representation for local atomic environments, machine learning becomes a powerful tool for approximating grain boundary properties. But it also goes beyond predicting properties by highlighting those atomic environments that are most important in influencing the boundary properties. This provides an immense dimensionality reduction that empowers grain boundary scientists to know where to look for deeper physical insights.

Matt Burbidge (Senior Thesis, April 2016,
Advisor: Gus Hart
)

Abstract

The 1998 Nobel Prize in Chemistry was shared by Walter Kohn, for his development of density functional theory (DFT), and John Pople, for his development of computational methods in quantum chemistry. DFT has since been developed into a powerful tool for quantum-mechanical calculations. Typical DFT calculations require a numerical integral over the occupied electron states in the material. Even though this integral is a small piece of the overall calculation, it is a primary source of error. Through the use of a simple toy problem, we explain the fundamentals of the integration problem. We introduce some of the attempts at resolving it and explore their effectiveness in current DFT codes, as well as our own attempts. The resolution of this integration problem for metals would save millions of CPU hours for a typical computational materials scientist.

Abstract

High-throughput alloy simulations can greatly increase the rate at which we discover and synthesize new materials by giving narrower focus and clearer direction to physical materials experimentation. In working towards a comprehensive database of potential alloys and their predicted characteristics, we are seeking ways to increase the computational efficiency of our simulations. One main opportunity for improvement is in calculating the energy contribution from electron bands. Determining this energy contribution requires numerically integrating over the occupied regions of the electron bands. For metals in particular, dense sampling of the electron bands is required to achieve sufficient accuracy in the integral (due to the lack of smoothness in the partially filled electron bands of metals). Each sample point requires solving a large eigenvalue problem, leading to longer computation time for denser sampling. This thesis describes attempts to interpolate the electron bands using trigonometric star functions and splines to achieve necessary accuracy with sparser sampling. The findings I present here show that the interpolation methods we have employed do not represent the bands well enough to be used to reduce sampling of the electron bands.

Abstract

A new algorithm for the enumeration of derivative superstructures of a crystal is presented. The algorithm will help increase the efficiency of computational material design methods such as cluster expansion by increasing the size and diversity of the types of systems that can be modeled. Modeling potential alloys requires the exploration of all possible configurations of atoms. Additionally, modeling the thermal properties of materials requires knowledge of the possible ways of displacing the atoms. One solution to finding all symmetrically unique configurations and displacements is to generate the complete list of possible configurations and remove those that are symmetrically equivalent. This approach, however, suffers from the combinatoric explosion that happens when the supercell size is large, when there are more than two atom types, or when atomic displacements are included in the system. The combinatoric explosion is a problem because the large number of possible arrangements makes finding the relatively small number of unique arrangements for these systems impractical. The algorithm presented here is an extension of an existing algorithm [Hart & Forcade (2008a); Hart & Forcade (2009a); Hart, Nelson & Forcade (2012a)] to include the extra configurational degree of freedom from the inclusion of displacement directions. The algorithm makes use of another recently developed algorithm for the Pólya counting theorem [Pólya (1937); Pólya & Read (1987); Rosenbrock, Morgan, Hart, Curtarolo & Forcade (2015)] to inform the user of the total number of unique arrangements before performing the enumeration and to ensure that the list of unique arrangements will fit in system memory. The algorithm also uses group theory to eliminate large classes of arrangements rather than eliminating arrangements one by one.
The three major topics of this paper are presented in the following order: first, the Pólya algorithm; second, the new algorithm for eliminating duplicate structures; and third, the algorithm's extension to include displacement directions. With these tools, it is possible to avoid the combinatoric explosion and enumerate previously inaccessible systems, including those that contain displaced atoms.
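
The counting step can be illustrated with Burnside's lemma, of which Pólya's theorem is a refinement: a ring of sites under rotation stands in for a supercell under its full symmetry group, a deliberately simplified analogue of the published algorithm.

```python
# Sketch: counting symmetry-distinct arrangements WITHOUT enumerating
# them, which is what the Polya step buys before any enumeration starts.
# By Burnside's lemma, the number of orbits equals the average number of
# colorings fixed by each group element; a rotation by r sites fixes
# exactly k**gcd(r, n) colorings of an n-site ring.
from math import gcd

def distinct_ring_colorings(n, k):
    """Number of k-colorings of an n-site ring, up to rotation."""
    return sum(k ** gcd(r, n) for r in range(n)) // n

print(distinct_ring_colorings(4, 2))  # 6 distinct binary rings of 4 sites
print(distinct_ring_colorings(6, 2))  # 14
```

The count is obtained in O(n) work even when the naive list has $k^n$ entries, so a user can check in advance whether the full enumeration will fit in memory.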

Derek Ostrom (Senior Thesis, April 2016,
Advisor: Gus Hart
)

Abstract

The Wang-Landau algorithm is a relatively new Monte Carlo method whose applications are still being explored. One such application, discussed here, is in cluster expansion calculations. Current Monte Carlo algorithms used in cluster expansions are slow to converge, and the Wang-Landau algorithm looks like a faster alternative. We tested the algorithm on simple Ising models and toy 2D binary alloys, and then tested it for the first time on a real metal alloy system, AgPd, in UNCLE, the cluster expansion code. I compared the simulation time and the specific heat curves produced by the Wang-Landau algorithm with those produced by the Metropolis algorithm on these three models. I found that the specific heat results matched well for the toy cases and for the AgPd case. There was also a significant increase in the speed of the simulation for some runs. I present these results along with possible problems for the algorithm's application in UNCLE and future work to be done.
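
A minimal Wang-Landau sketch on a toy model with a known answer: $n$ coins with "energy" equal to the number of heads, so the exact density of states is $g(E) = \binom{n}{E}$. The flatness threshold, modification-factor schedule, and iteration caps below are arbitrary illustrative choices, far simpler than an UNCLE cluster-expansion run.

```python
# Sketch: Wang-Landau estimation of a density of states g(E). The walk
# accepts moves with probability min(1, g(E)/g(E')) and inflates ln g at
# the visited energy until the visit histogram is flat, then halves the
# modification factor and repeats.
import math
import random

def wang_landau(n=10, ln_f_final=1e-3, seed=1):
    rng = random.Random(seed)
    state = [0] * n                      # all tails: energy 0
    e = 0                                # current energy = number of heads
    ln_g = [0.0] * (n + 1)               # running estimate of ln g(E)
    ln_f = 1.0                           # ln of the modification factor
    while ln_f > ln_f_final:
        hist = [0] * (n + 1)
        for step in range(1, 100001):    # cap the stage length
            i = rng.randrange(n)         # propose flipping one coin
            de = 1 - 2 * state[i]
            # accept with probability min(1, g(E)/g(E'))
            if math.log(rng.random()) < ln_g[e] - ln_g[e + de]:
                state[i] ^= 1
                e += de
            ln_g[e] += ln_f
            hist[e] += 1
            if min(hist) > 0.8 * step / (n + 1):
                break                    # histogram flat: end this stage
        ln_f *= 0.5
    return ln_g

ln_g = wang_landau()
rel = [x - ln_g[0] for x in ln_g]        # compare with exact ln C(10, E)
print([round(x, 1) for x in rel])
```

Once $g(E)$ is known, the specific heat at any temperature follows from a single reweighting of $g(E)e^{-E/k_BT}$, which is the appeal over running Metropolis separately at each temperature.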

Abstract

The hardness of platinum and palladium alloys can be significantly improved by precipitate hardening. One application of this is in Pt/Pd jewelry alloys, where only small amounts of the alloying agent may be added (less than 5 wt.%). For these alloys, one needs to identify platinum- and palladium-rich ordered phases that will form precipitates in nearly pure alloys. Using first-principles calculations, we identified 22 systems where a platinum- or palladium-rich phase (prototype Pt8Ti) is stable but has not yet been observed. In the case of Pt-Mo, we constructed a cluster expansion and predicted the order-disorder transition temperature. Using our results as a guide, further experimental work may well turn up additional elements that will be useful for precipitate hardening in Pt-rich and Pd-rich alloys.

Abstract

The steady march of new technology depends crucially on our ability to discover and design new, advanced materials. Partly due to increases in computing power, computational methods now play an increasing role in this discovery process. Advances in this area speed the discovery and development of advanced materials by guiding experimental work down fruitful paths. Density functional theory (DFT) has proven to be a highly accurate tool for computing material properties. However, due to its computational cost and complexity, DFT is unsuited to performing exhaustive searches over many candidate materials or to extracting thermodynamic information. Performing these types of searches requires constructing a fast yet accurate model. One model commonly used in materials science is the cluster expansion, which can compute the energy, or another relevant physical property, of millions of derivative superstructures quickly and accurately. This model has been used in materials research for many years with great success. Currently, the construction of a cluster expansion model presents several noteworthy challenges. While these challenges have obviously not prevented the method from being useful, addressing them will result in a large payoff in speed and accuracy. Two of the most glaring challenges encountered when constructing a cluster expansion model are: (i) determining which of the infinite number of clusters to include in the expansion, and (ii) deciding which atomic configurations to use for training data. Compressive sensing, a recently developed technique from the signal processing community, is uniquely suited to address both of these challenges. Compressive sensing (CS) allows essentially all possible basis (cluster) functions to be included in the analysis and offers a specific recipe for choosing atomic configurations to be used as training data.
We show that cluster expansion models constructed using CS predict more accurately than current state-of-the-art methods, require little user intervention during the construction process, and are orders of magnitude faster than current methods. A Bayesian implementation of CS is found to be even faster than the typical constrained optimization approach, is free of any user-optimized parameters, and naturally produces error bars on the predictions made. The speed and hands-off nature of Bayesian compressive sensing (BCS) make it a valuable tool for automatically constructing models for many different materials. Combining BCS with high-throughput sets of binary alloy data, we automatically construct CE models for all binary alloy systems. This work represents a major stride in materials science and advanced materials development.
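
One way to see why CS produces sparse cluster expansions: in the special case of an orthonormal basis, the $\ell_1$-regularized least-squares fit has a closed-form solution, soft-thresholding, which sets small coefficients exactly to zero. The coefficient values and threshold below are invented for illustration.

```python
# Sketch: the sparsity mechanism behind compressive sensing. In an
# orthonormal basis, argmin_x 0.5*(x - y)^2 + lam*|x| is the
# soft-thresholding operator applied elementwise.

def soft_threshold(y, lam):
    """Shrink each value toward zero by lam; values within lam become 0."""
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in y]

# noisy expansion coefficients: two real features buried in small noise
coeffs = [2.0, -0.03, 0.05, -1.5, 0.01, 0.02]
sparse = soft_threshold(coeffs, lam=0.1)
print(sparse)  # only the two large coefficients survive
```

In the general (non-orthonormal) case CS solves the same $\ell_1$ problem iteratively, but the outcome is analogous: of the "essentially all possible" cluster functions admitted to the fit, only a physically small subset retains nonzero coefficients.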

Abstract

In this innovative and original technique, water flux across a lipid bilayer membrane is measured as a function of changing membrane capacitance. The capacitance of a lipid membrane can be directly measured with a high-impedance amplifier as the current resulting from an applied dV/dt. The characteristic capacitance of a membrane is proportional to its surface area. Since the membrane is fixed on a sealed pipette, a change in capacitance (surface area) implies a change in volume (e.g., bulging). The solution in the pipette will expand if heated, and thermal expansion of the solution will cause the membrane to bulge. Upon bulging to approximately 50-75% of its original (non-bulged) capacitance, a chemical gradient is added to the solution bath. The addition of a gradient induces osmosis, causing water to flow from the less concentrated pipette solution into the more concentrated solution bath. As water flows, the membrane resumes its normal, flat state, and the change in capacitance can be recorded as a function of time. From these data we can extract the flux of water across the membrane.

Abstract

Stable structures were determined in three binary metallic systems---palladium/copper, palladium/magnesium, and palladium/niobium---using computational programs based on Schrödinger's equation and basic thermodynamics. These programs determined the formation enthalpies of each combinatorially possible structure, ignoring the effects of temperature. This investigation only considered structures with up to 12 atoms. Monte Carlo methods were used to determine the approximate phase transition temperature---the temperature at which it becomes thermodynamically stable for the atoms to order rather than be randomly placed---for several of the anticipated stable structures. Structures with high transition temperatures are more likely to be experimentally feasible. Information about the predicted stable structures and their respective transition temperatures will guide the work of experimentalists who develop these alloys.

Erin Gilmartin (Senior Thesis, April 2010,
Advisor: Gus Hart
)

Abstract

The utility of first-principles methods in the study and prediction of binary alloys is showcased by three detailed studies. In particular, the T = 0 K cluster expansion methodology in conjunction with finite temperature statistical modeling by a Monte Carlo method is used to study two systems of practical interest, Mg-Li (magnesium-lithium) and Rh-W (rhodium-tungsten). Also, an empirically-informed, high-throughput approach to crystal structure prediction is shown by a study of the Pt8Ti (the Pietrokowsky phase) phase and a broad and detailed analysis of binary Mg-X phases in 39 systems (X=Ag, Al, Au, Ca, Cd, Cu, Fe, Ga, Ge, Hf, Hg, In, Ir, K, La, Li, Pb, Pd, Pt, Mo, Na, Nb, Os, Rb, Re, Rh, Ru, Sc, Si, Sn, Sr, Ta, Tc, Ti, V, W, Y, Zn, Zr). These results are presented in the form of three publications (the first two are in print, and the third is nearing submission) co-authored with Gus Hart and Stefano Curtarolo.

Abstract

Many structures have an underlying motif such as an fcc, bcc, or hcp parent lattice with different chemical orderings on the lattice. Among seemingly infinite possibilities for these orderings, why does nature choose only the few that it does? Purpose: to predict new simple cubic and perovskite structures which can be observed experimentally. Method: using a combinatorial approach, generate all unique binary simple cubic structures with 2 to 8 atoms in the unit cell; calculate the likelihood that each of these structures can be observed in nature, and plot the likelihood as a function of the structure's concentration. From this list we are able to predict new structures. Results: through this method we have generated a list ordered by each structure's likelihood. We know the likelihood is meaningful because observed structures tend to have a higher calculated likelihood than non-observed structures at a given concentration. Using this information we now have predictions for new simple cubic structures, an approach we can also apply to new perovskites.

Abstract

Recent developments in the field of biophysics, both in findings and methods, have consequences that extend not only into physics in general but may also find application in a rigorous mathematical analysis of financial markets. Specifically, we apply the interpretative power of Detrended Fluctuation Analysis (DFA) to an exchange-traded index fund (ETF) mirroring the S&P 500. Not only do we verify the observation of positive long-range correlations, but we also characterize the effects of bin size on the DFA output. As a final application, we briefly examine the possibility of using the results of a localized DFA to assess a measure of corporate health.
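
A minimal sketch of DFA itself (the signal length, bin sizes, and random seed are illustrative choices; applied to uncorrelated noise the scaling exponent should come out near 0.5, whereas long-range-correlated series like the ETF data analyzed above give larger values):

```python
# Sketch: detrended fluctuation analysis. Integrate the mean-subtracted
# signal, remove a linear trend from each bin of size n, and measure the
# RMS residual F(n); the slope of log F vs log n is the DFA exponent.
import math
import random

def dfa_fluctuation(x, n):
    """RMS fluctuation F(n) around a per-bin linear trend."""
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:
        s += v - mean
        profile.append(s)                 # integrated (cumulative) series
    sq, count = 0.0, 0
    for start in range(0, len(profile) - n + 1, n):
        seg = profile[start:start + n]
        # closed-form least-squares line fit over t = 0..n-1
        t_mean = (n - 1) / 2.0
        y_mean = sum(seg) / n
        num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg))
        den = sum((t - t_mean) ** 2 for t in range(n))
        slope = num / den
        for t, y in enumerate(seg):
            r = y - (y_mean + slope * (t - t_mean))
            sq += r * r
            count += 1
    return math.sqrt(sq / count)

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(4096)]   # uncorrelated toy "returns"
ns = [8, 16, 32, 64, 128]
fs = [dfa_fluctuation(x, n) for n in ns]
# scaling exponent alpha from the log-log slope (least squares)
lx = [math.log(n) for n in ns]
ly = [math.log(f) for f in fs]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
alpha = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
print(round(alpha, 2))
```

The bin sizes `ns` are where the bin-size effects mentioned above enter: too-small bins are dominated by the fit itself, and too-large bins leave too few segments to average.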

Abstract

The behavior of coupled chaotic systems is not well known. We study the behavior of two coupled logistic maps using three couplings: a master-slave coupling, a symmetric coupling, and a variable coupling. We develop methods to study the correlations by examining bifurcation diagrams, scatter plots, and cobweb plots. With weak couplings, correlations are seen. We determine that with strong couplings the two maps completely synchronize.

Abstract

We have used molecular dynamics simulations to investigate the rotational diffusion and hydration of Laurdan (2-dimethylamino-6-lauroylnaphthalene) in liquid and gel dipalmitoylphosphatidylcholine bilayers at temperatures above and below the phase transition. Laurdan is a fluorescent dye commonly used in biophysical experiments to detect ordered regions in lipid bilayers through changes in the polarization and wavelength of emitted light. We demonstrate correlation between the autocorrelation of the Laurdan rotation and experimental observations of the decay in anisotropy of the emitted light, and between hydration and the shift in fluorescence wavelength.

Abstract

Chaotic systems are frequently encountered in nature. Their continued discovery imparts relevance to the effort being made to understand the behavior of chaotic dynamics. The wide occurrence of chaos in nature shows that the chaos found in many simple mathematical models is not a trite ancillary mathematical effect but adumbrates a profound natural phenomenon. The logistic map (LM) is frequently cited as one such simple model capable of exhibiting chaotic behavior and is used widely as a pedagogical tool. It provides a proverbial stepping stone toward expanding our understanding of the path dynamical systems take toward chaos. Also, the 2-D LM serves as a convenient tool for studying the synchronization behavior encountered frequently in coupled chaotic systems. Complete synchronization can be shown to occur in the coupled logistic map for certain values of the coupling constant. Furthermore, intermittent synchronization precedes the onset of complete synchronization. Analytic techniques can be used to precisely determine the onset of complete synchronization. A general analytic method, however, for predicting regions of synchronization remains to be found.
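
The complete-synchronization claim is easy to reproduce numerically. For the symmetric coupling sketched below (the parameter values and initial conditions are illustrative), the difference obeys $d_{n+1} = (1-2\epsilon)\,(f(x_n)-f(y_n))$, so $\epsilon = 1/2$ synchronizes in a single step while a weak coupling leaves the chaotic orbits apart:

```python
# Sketch: complete synchronization of two symmetrically coupled logistic
# maps. r = 4 puts the uncoupled map in its chaotic regime.

def f(x, r=4.0):
    return r * x * (1.0 - x)        # logistic map

def iterate(eps, x, y, steps=1000):
    """Symmetric coupling: each map is nudged toward the other's image."""
    for _ in range(steps):
        x, y = ((1 - eps) * f(x) + eps * f(y),
                (1 - eps) * f(y) + eps * f(x))
    return x, y

x, y = iterate(eps=0.5, x=0.3, y=0.7)
print(abs(x - y))                   # strong coupling: synchronized
x, y = iterate(eps=0.01, x=0.3, y=0.7)
print(abs(x - y))                   # weak coupling: orbits stay apart
```

The $(1-2\epsilon)$ factor also explains the analytic onset: complete synchronization becomes transversely stable once $|1-2\epsilon|$ times the map's stretching rate drops below one.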

Abstract

Phase separation is the process by which a homogeneous mixture separates into two (or more) different phases. This process is found in systems such as binary alloys, where it affects properties such as hardness. We study domain growth tendencies of a phase-separating binary alloy using a two-dimensional, spin-exchange Ising model. The model obeys the asymptotic domain growth law (of the form $R \propto t^{1/3}$, where $R$ is the size of the domain and $t$ is time) when the model is allowed to respond to compressive forces. However, when compressibility and different atomic sizes are introduced into the model, the domain growth law is violated and we observe qualitatively different domain growth patterns. The exponent in the growth law is found to be less than 0.27 for the simulations with different atomic sizes. This violation indicates a gap in the current theory of domain growth and suggests that fundamental properties of precipitate hardening processes in alloys are still not understood.
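
A minimal sketch of the underlying dynamics: one Kawasaki (spin-exchange) Monte Carlo sweep of a plain 2-D Ising alloy, without the compressibility and atomic-size effects the study adds on top. Exchanging neighboring unlike spins conserves composition, which is what makes this dynamics a model of phase separation; the lattice size, temperature, and seed are illustrative.

```python
# Sketch: Kawasaki (spin-exchange) dynamics on a 2-D Ising lattice with
# periodic boundaries. Swaps of unlike neighbors are accepted with the
# Metropolis probability, so the numbers of +1 and -1 sites never change.
import math
import random

def neighbors(i, j, n):
    return [((i + 1) % n, j), ((i - 1) % n, j),
            (i, (j + 1) % n), (i, (j - 1) % n)]

def local_energy(spins, i, j, n):
    return -sum(spins[i][j] * spins[a][b] for a, b in neighbors(i, j, n))

def kawasaki_sweep(spins, t, rng):
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        a, b = rng.choice(neighbors(i, j, n))
        if spins[i][j] == spins[a][b]:
            continue                     # exchanging equal spins is a no-op
        before = local_energy(spins, i, j, n) + local_energy(spins, a, b, n)
        spins[i][j], spins[a][b] = spins[a][b], spins[i][j]
        after = local_energy(spins, i, j, n) + local_energy(spins, a, b, n)
        de = after - before
        if de > 0 and rng.random() >= math.exp(-de / t):
            spins[i][j], spins[a][b] = spins[a][b], spins[i][j]  # reject

rng = random.Random(42)
n = 16
spins = [[1 if rng.random() < 0.5 else -1 for _ in range(n)] for _ in range(n)]
m0 = sum(sum(row) for row in spins)      # total "composition"
for _ in range(50):
    kawasaki_sweep(spins, t=1.0, rng=rng)
print(m0 == sum(sum(row) for row in spins))  # composition is conserved
```

Measuring a domain size $R$ from snapshots of such runs at increasing $t$ is how the growth exponent in $R \propto t^{1/3}$ is extracted; the study's modified model changes that exponent.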