Research advances noise cancelling for quantum computers

A team from Dartmouth College and MIT has designed and conducted the first lab test to successfully detect and characterize a class of complex, “non-Gaussian” noise processes that are routinely encountered in superconducting quantum computing systems.

The characterization of non-Gaussian noise in superconducting quantum bits is a critical step toward making these systems more precise.

The joint study, published in Nature Communications, could help accelerate the realization of quantum computing systems. The experiment was based on earlier theoretical research conducted at Dartmouth and published in Physical Review Letters in 2016.

“This is the first concrete step toward trying to characterize more complicated types of noise processes than commonly assumed in the quantum domain,” said Lorenza Viola, a professor of physics at Dartmouth who led the 2016 study as well as the theory component of the present work. “As qubit coherence properties are being constantly improved, it is important to detect non-Gaussian noise in order to build the most precise quantum systems possible.”

Quantum computers differ from traditional computers by going beyond the binary “on-off” sequencing favored by classical physics. Quantum computers rely on quantum bits — also known as qubits — that are built out of atomic and subatomic particles.

Essentially, qubits can be placed in a combination of both “on” and “off” positions at the same time. They can also be “entangled,” meaning that the properties of one qubit can influence another over a distance.

Superconducting qubit systems are considered one of the leading contenders in the race to build scalable, high-performing quantum computers. But, like other qubit platforms, they are highly sensitive to their environment and can be affected by both external noise and internal noise.

External noise in quantum computing systems could come from control electronics or stray magnetic fields. Internal noise could come from other uncontrolled quantum systems such as material impurities. The ability to reduce noise is a major focus in the development of quantum computers.

“The big barrier preventing us from having large-scale quantum computers now is this noise issue,” said Leigh Norris, a postdoctoral associate at Dartmouth who co-authored the study. “This research moves us toward understanding the noise, which is a step toward cancelling it, and hopefully having a reliable quantum computer one day.”

Unwanted noise is often described in terms of simple “Gaussian” models, in which the probability distribution of the random fluctuations of noise creates a familiar, bell-shaped Gaussian curve. Non-Gaussian noise is harder to describe and detect because it falls outside the range of validity of these assumptions and because there may simply be less of it.

Whenever the statistical properties of noise are Gaussian, a small amount of information can be used to characterize the noise — namely, the correlations at only two distinct times, or equivalently, in terms of a frequency-domain description, the so-called “noise spectrum.”
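
As a purely illustrative sketch (not the study’s measurement protocol), the short Python snippet below generates correlated Gaussian noise and computes the two equivalent descriptions mentioned above: the two-time correlation function and its frequency-domain counterpart, the noise spectrum.

```python
import numpy as np

# Illustrative only: for stationary Gaussian noise, the two-time correlation
# C(tau) = <b(t) b(t + tau)> and its Fourier counterpart, the noise spectrum
# S(f), carry the same (and, for Gaussian noise, complete) information.
rng = np.random.default_rng(0)
n, dt = 4096, 1.0

# correlated Gaussian noise, made by filtering white noise
kernel = np.exp(-np.arange(64) / 8.0)
noise = np.convolve(rng.normal(size=n), kernel, mode="same")
noise -= noise.mean()

# two-time correlation at the first few lags
corr = np.array([np.mean(noise[: n - k] * noise[k:]) for k in range(32)])

# noise spectrum estimated via the periodogram
spectrum = np.abs(np.fft.rfft(noise)) ** 2 * dt / n
freqs = np.fft.rfftfreq(n, d=dt)

print("C(0), C(1), C(2):", np.round(corr[:3], 2))
print("S at the lowest frequencies:", np.round(spectrum[:3], 2))
```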

Thanks to their high sensitivity to the surrounding environment, qubits can be used as sensors of their own noise. Building on this idea, researchers have made progress in developing techniques for identifying and reducing Gaussian noise in quantum systems, similar to how noise-cancelling headphones work.

Although non-Gaussian noise is less common than Gaussian noise, identifying and cancelling it is an equally important challenge in the optimal design of quantum systems.

Non-Gaussian noise is distinguished by more complicated patterns of correlations that involve multiple points in time. As a result, much more information about the noise is required in order for it to be identified.

In the study, researchers were able to approximate characteristics of non-Gaussian noise using information about correlations at three different times, corresponding to what is known as the “bispectrum” in the frequency domain.
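
As a rough numerical illustration of that idea (not the protocol used in the experiment), the sketch below estimates a single point of a bispectrum directly from segmented data: for Gaussian noise the average is close to zero up to sampling fluctuations, while a deliberately skewed, non-Gaussian signal gives a much larger value. The segment length and frequency indices are arbitrary choices.

```python
import numpy as np

# Illustrative direct bispectrum estimate: average X(f1) X(f2) X*(f1 + f2)
# over many data segments. Gaussian noise -> roughly zero; a skewed
# (non-Gaussian) signal -> clearly non-zero.
rng = np.random.default_rng(1)
n_seg, seg_len = 2000, 128

def bispectrum_point(x, seg_len, f1=3, f2=5):
    segs = x[: (len(x) // seg_len) * seg_len].reshape(-1, seg_len)
    X = np.fft.fft(segs - segs.mean(axis=1, keepdims=True), axis=1)
    return np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]))

gaussian = rng.normal(size=n_seg * seg_len)
non_gaussian = gaussian ** 2 - 1.0  # skewed, hence non-Gaussian

print("Gaussian     |B(3, 5)| ~", round(abs(bispectrum_point(gaussian, seg_len))))
print("non-Gaussian |B(3, 5)| ~", round(abs(bispectrum_point(non_gaussian, seg_len))))
```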

“This is the first time that a detailed, frequency-resolved characterization of non-Gaussian noise has been carried out in a lab with qubits. This result significantly expands the toolbox that we have available for doing accurate noise characterization and therefore crafting better and more stable qubits in quantum computers,” said Viola.

A quantum computer that cannot sense non-Gaussian noise could easily confuse the quantum signal it is supposed to process with the unwanted noise in the system. Protocols for achieving non-Gaussian noise spectroscopy did not exist until the Dartmouth study in 2016.

While the MIT experiment to validate the protocol won’t immediately make large-scale quantum computers practically viable, it is a major step toward making them more precise.

“This research started on the white board. We didn’t know if someone was going to be able to put it into practice, but despite significant conceptual and experimental challenges, the MIT team did it,” said Felix Beaudoin, a former postdoctoral researcher in Viola’s group at Dartmouth who played an instrumental role in bridging theory and experiment in the study.

“It’s been an absolute joy to collaborate with Lorenza Viola and her fantastic theory team at Dartmouth,” said William Oliver, a professor of physics at MIT. “We’ve been working together for years now on several projects and, as quantum computing transitions from scientific curiosity to technical reality, I anticipate the need for more such interdisciplinary and interinstitutional collaboration.”

According to the research team, there are still years of additional work required in order to perfect the detection and cancellation of noise in quantum systems. In particular, future research will move from a single-sensor system to a two-sensor system, enabling the characterization of noise correlations across different qubits.

African American bachelor’s degrees see growth but lag in physical sciences, engineering

African Americans are seeing growth in many engineering and physical sciences fields, but they are not progressing at the same rate as the overall student population.

A report from the American Institute of Physics (AIP) Statistical Research Center (SRC) examined the number of bachelor’s degrees earned from 2005 to 2015 and separated out the numbers for African Americans from the rest of the students. The data was gathered by the National Center for Education Statistics from postsecondary institutions in the United States.

The SRC found that the number of degrees earned by African Americans in physical sciences fields grew by 36% over the 10-year period, less than the 55% growth among all students during the same time.

In four of the seven physical sciences fields, the number of degrees earned by African Americans grew faster (in percentage terms) than the overall rate, but those fields awarded the fewest degrees. The other fields, which had larger numbers of graduates, grew more slowly than the overall rate.

In engineering, the number of bachelor’s degrees earned by African Americans increased by 19%, less than half of the overall growth in the field of 44%.

Only two engineering fields (civil engineering and materials engineering) saw the number of African American graduates grow faster than the number of graduates overall in those fields. The other seven disciplines showed slow or negative growth.

SRC senior survey scientist Laura Merner said it was heartening to see growth for African Americans overall in the science and engineering fields, but it is not fast enough.

“We’re hopeful that this report could help intervention programs to be more successful to improve representation,” Merner said. “Clearly, more research is needed to find out why African Americans are underrepresented in these fields, and there is still work that needs to be done.”

The number of African Americans earning bachelor’s degrees in the physical sciences and engineering grew over the 10-year period, from just under 6,000 degrees in 2005 to more than 7,000 in 2015. In the physical sciences, more African Americans earned degrees in 2015 than in 2005; in engineering, the number of African American men earning degrees increased while the number of African American women earning degrees decreased.

Story Source:

Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

Mathematicians develop new statistical indicator

Most of us know this phenomenon only too well: as soon as it is hot outside, you get an appetite for a cooling ice cream. But would you have thought that mathematics could be involved?

Let us explain: the rising temperatures and the rising ice cream consumption are two statistical variables in linear dependence; they are correlated.

In statistics, correlations are important for predicting the future behaviour of variables. Such scientific forecasts are frequently requested by the media, be it for football or election results.

To measure linear dependence, scientists use the so-called correlation coefficient, which was first introduced by the British natural scientist Sir Francis Galton (1822-1911) in the 1870s. Shortly afterwards, the mathematician Karl Pearson provided a formal mathematical justification for the correlation coefficient. Therefore, mathematicians also speak of the “Pearson product-moment correlation” or the “Pearson correlation.”
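
To make this concrete, here is a small Python illustration of the Pearson correlation coefficient applied to the ice-cream example above; the temperature and sales figures are invented purely for demonstration.

```python
import numpy as np

# Hypothetical numbers for the article's ice-cream example: the Pearson
# coefficient r = cov(X, Y) / (std(X) * std(Y)) measures linear dependence
# and ranges from -1 to +1.
temperature = np.array([18, 21, 24, 27, 30, 33])             # degrees Celsius (made up)
ice_cream_sales = np.array([210, 260, 320, 390, 440, 500])   # servings sold (made up)

r = np.corrcoef(temperature, ice_cream_sales)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # close to +1: strong linear dependence
```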

If, however, the dependence between the variables is non-linear, the correlation coefficient is no longer a suitable measure for their dependence.

René Schilling, Professor of Probability at TU Dresden, emphasises: “Up to now, it has taken a great deal of computational effort to detect dependencies between more than two high-dimensional variables, in particular when complicated non-linear relationships are involved. We have now found an efficient and practical solution to this problem.”

Dr. Björn Böttcher, Prof. Martin Keller-Ressel and Prof. René Schilling from TU Dresden’s Institute of Mathematical Stochastics have developed a dependence measure called “distance multivariance.” The definition of this new measure and the underlying mathematical theory were published in the leading international journal Annals of Statistics under the title “Distance Multivariance: New Dependence Measures for Random Vectors.”

Martin Keller-Ressel explains: “To calculate the dependence measure, not only the values of the observed variables themselves, but also their mutual distances are recorded and from these distance matrices, the distance multivariance is calculated. This intermediate step allows for the detection of complex dependencies, which the usual correlation coefficient would simply ignore. Our method can be applied to questions in bioinformatics, where big data sets need to be analysed.”
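
To convey the distance-matrix idea, the sketch below implements distance covariance, a well-known two-variable relative of the new measure; it is not the authors’ full multivariance estimator, which is available in their R package ‘multivariance’. Note how it flags a purely non-linear dependence that the Pearson coefficient misses.

```python
import numpy as np

def double_center(d):
    # subtract row and column means of the distance matrix, add back the grand mean
    return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_covariance(x, y):
    # pairwise distances of each variable, then the average product of the
    # double-centered distance matrices (Szekely-style distance covariance)
    dx = np.abs(x[:, None] - x[None, :])
    dy = np.abs(y[:, None] - y[None, :])
    a, b = double_center(dx), double_center(dy)
    return np.sqrt(np.clip(np.mean(a * b), 0.0, None))

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = x ** 2 + 0.1 * rng.normal(size=500)   # non-linear dependence on x

print("Pearson r          :", round(np.corrcoef(x, y)[0, 1], 3))    # near 0
print("distance covariance:", round(distance_covariance(x, y), 3))  # clearly > 0
```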

In a follow-up study, it was shown that the classical correlation coefficient and other known dependence measures can be recovered as limiting cases of distance multivariance.

Björn Böttcher concludes by pointing out: “We provide all necessary functions in the package ‘multivariance’ for the free statistics software ‘R’, so that all interested parties can test the application of the new dependence measure.”

Story Source:

Materials provided by Technische Universität Dresden. Note: Content may be edited for style and length.

Top tools for pinpointing genetic drivers of disease

Published in Nature Communications, the study is the largest of its kind and was led by Walter and Eliza Hall Institute computational biologists Professor Tony Papenfuss, Dr Daniel Cameron and Mr Leon Di Stefano.

The new study reveals the world’s top genomic rearrangement detection tools, providing summaries of their performance and recommendations for use. Dr Cameron said the study could ultimately help clinicians determine the best treatments for their patients.

“Basically, you have to understand what is going wrong before you can work out how to fix the problem. In the context of cancer for instance, an understanding of the genetic mutations driving tumour growth could help oncologists determine the most appropriate treatment for their patients,” he said.

To determine the best genomic rearrangement detection methods, the researchers comprehensively tested 12 of the most widely used tools to see which ones could accurately identify the differences between a patient’s genetic information and the standard human reference genome. The findings revealed that a tool called GRIDSS, developed by Professor Papenfuss and Dr Cameron, was one of the best performing options, most accurately able to detect DNA rearrangements.

Dr Cameron said the study would not have been possible without the Institute’s high-performance computing resource.

“Over the course of two years, we tested 12 of the most popular genomic rearrangement detection tools, generating more than 50 terabytes of data, to determine which tools perform well, and when they perform badly. Without these computing resources, we estimate the study would have taken us more than ten years,” he said.

The Institute’s Theme Leader for Computational Biology, Professor Papenfuss, said computational methods were required, more than ever before, to make sense of the vast and complex datasets being generated by research.

“Computational studies like this one keep the field up to date with best practice approaches for data analysis. This particular study provides a comprehensive resource for users of genomic rearrangement detection methods, as well as developers in the field. It will also help to direct the next iteration of genomic rearrangement tool development at the Institute,” he said.

As new experimental techniques and DNA sequencing machines become available, the very nature of the data they generate is changing. Professor Papenfuss said that older analysis tools, while heavily cited and widely used, could lead to erroneous interpretations if used on data produced by the latest DNA sequencing machines. “This is why it is so important for researchers to find the right match between the analysis tool and dataset at hand,” he said.

Story Source:

Materials provided by Walter and Eliza Hall Institute. Note: Content may be edited for style and length.

Optimal models of thermodynamic properties

Argonne team combines cutting-edge modeling with 300-year-old statistical analysis technique to enhance material properties.

At some point in your life, you’ve probably had somebody — a parent, a teacher, a mentor — tell you that “the more you practice, the better you become.” The expression is often attributed to Thomas Bayes, an 18th century British minister who was interested in winning at games and formalized this simple observation into a now-famous mathematical expression.

Used to examine behaviors, properties and other mechanisms that constitute a concept or phenomenon, Bayesian analysis employs an array of varied, but similar, data to statistically inform an optimal model of that concept or phenomenon.

“Simply put, Bayesian statistics is a way of starting with our best current understanding and then updating that with new data from experiments or simulations to come up with a better-informed understanding,” said Noah Paulson, a computational materials scientist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

The method met with some success over the 300 years since its inception, but it is an idea whose time has finally arrived.

In some fields, like cosmology, researchers have been successfully developing and sharing Bayesian techniques and codes for some time. In others, like materials science, implementation of Bayesian analysis methods is just beginning to pay dividends.

“Simply put, Bayesian statistics is a way of defining something we already understand and then updating that with new data from experiments or simulations to come up with a more accurate understanding.” — Noah Paulson, computational materials scientist, Argonne National Laboratory

Paulson and several Argonne colleagues are applying Bayesian methods to quantify uncertainties in the thermodynamic properties of materials. In other words, they want to determine how much confidence they can place in the data they collect about materials and the mathematical models used to represent those data.

While the statistical techniques are applicable to many fields, the researchers set out to create an optimal model of the thermodynamic properties of hafnium (Hf), a metal emerging as a key component in computer electronics. Results derived from this approach will be published in the September 2019 issue of the International Journal of Engineering Science.

“We found that we didn’t know all that we could about this material because there were so many datasets and so much conflicting information. So we performed this Bayesian analysis to propose a model that the community can embrace and use in research and application,” said Marius Stan, who leads intelligent materials design in Argonne’s Applied Materials division (AMD) and is a senior fellow at both the University of Chicago’s Consortium for Advanced Science and Engineering and the Northwestern-Argonne Institute for Science and Engineering.

To derive an optimal model of a material’s thermodynamic properties, researchers use some prior knowledge or data related to the subject matter as a starting point.

In this case, the team was looking to define the best models for the enthalpy (the amount of energy in a material) and the specific heat (the heat necessary to increase the temperature of the unit mass of the material by one degree Celsius) of hafnium. Represented as equations and mathematical expressions, the models have different parameters that control them. The goal is to find the optimal parameters.

“We had to start with a guess of what those parameters should be,” said Paulson of AMD’s Thermal and Structural Materials group. “Looking through the literature we found some ranges and values that made sense, so we used those for our prior distribution.”

One of the parameters the researchers explored is the temperature of a crystal’s highest normal mode of vibration. Referred to as the Einstein or Debye temperature, this parameter affects a material’s specific heat.
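
For context (and as a sketch only, not the calibrated hafnium model from the paper), the snippet below evaluates the textbook single-parameter Einstein model, in which this characteristic temperature controls how the specific heat approaches the classical limit of 3R; the Einstein temperature used here is a made-up value.

```python
import numpy as np

# Textbook Einstein model of specific heat (illustration only, not the
# paper's calibrated hafnium model): a single parameter, the Einstein
# temperature theta_E, controls the approach to the classical limit 3R.
R = 8.314462618  # gas constant, J/(mol K)

def einstein_cv(T, theta_E):
    x = theta_E / T
    return 3.0 * R * x ** 2 * np.exp(x) / (np.exp(x) - 1.0) ** 2

theta_E = 250.0  # kelvin; hypothetical value for illustration
for T in (100.0, 300.0, 1000.0):
    print(f"T = {T:6.0f} K  ->  C_v ~ {einstein_cv(T, theta_E):5.2f} J/(mol K)")
```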

The prior — or initial — guess is based on existing models, preliminary data or the intuition of experts in the field. Using calibration data from experiments or simulation, Bayesian statistics update that prior knowledge and determine the posterior — the updated understanding of the model. The Bayesian framework can then determine whether new data are in better or worse agreement with the model being tested.
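
The toy Python sketch below walks through exactly that prior-to-posterior update for a single unknown model parameter and synthetic “calibration” data; it illustrates the workflow only, not the paper’s actual models or datasets.

```python
import numpy as np

# Schematic Bayesian update: posterior ~ prior x likelihood, evaluated on a
# grid over one hypothetical model parameter theta (all numbers made up).
rng = np.random.default_rng(3)
true_theta, noise_sd = 2.5, 0.4
data = true_theta + noise_sd * rng.normal(size=20)     # synthetic calibration data

theta = np.linspace(0.0, 5.0, 1001)
prior = np.exp(-0.5 * ((theta - 2.0) / 1.0) ** 2)      # prior guess: 2.0 +/- 1.0
likelihood = np.prod(
    np.exp(-0.5 * ((data[:, None] - theta[None, :]) / noise_sd) ** 2), axis=0
)
posterior = prior * likelihood
posterior /= np.trapz(posterior, theta)                # normalize

mean = np.trapz(theta * posterior, theta)
sd = np.sqrt(np.trapz((theta - mean) ** 2 * posterior, theta))
print(f"posterior estimate: theta = {mean:.2f} +/- {sd:.2f}")
```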

“Like cosmology, materials science must find the optimal model and parameter values that best explain the data and then determine the uncertainties related to these parameters. There’s not much point in having a best-fit parameter value without an error bar,” said team member Elise Jennings, a computational scientist in statistics with the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, and an associate of the Kavli Institute for Cosmological Physics at the University of Chicago.

And that, she said, is the biggest challenge for materials science: a lack of error bars or uncertainties noted in available datasets. The hafnium research, for example, relied on datasets selected from previously published papers, but error ranges were either absent or excluded.

So, in addition to presenting models for the specific thermodynamic properties of hafnium, the article also explores techniques by which materials science and other fields of study can make allowances for datasets that don’t have uncertainties.

“For a scientist or an engineer, this is an important problem,” said Stan. “We’re presenting a better way of evaluating how valuable our information is. We want to know how much trust we can put in the models and the data. And this work reveals a methodology, a better way of evaluating that.”

A paper based on the study, “Bayesian strategies for uncertainty quantification of the thermodynamic properties of materials,” is available online (June 13) and will appear in the September 2019 edition of the International Journal of Engineering Science. Noah Paulson, Elise Jennings and Marius Stan collaborated on the research.

This study is supported through the CHiMaD Program, funded by the National Institute of Standards and Technology (NIST).

Understanding brain activity when you name what you see

You see an object, you think of its name and then you say it. This apparently simple activity engages a set of brain regions that must interact with each other to produce the behavior quickly and accurately. A report published in eNeuro shows that a reliable sequence of neural interactions occurs in the human brain, corresponding to the visual processing stage, the language stage when we think of the name, and finally the articulation stage when we say the name. The study reveals that the neural processing does not involve just a sequence of different brain regions, but instead engages a sequence of changing interactions between those brain regions.

“In this study, we worked with patients with epilepsy whose brain activity was being recorded with electrodes to find where their seizures started. While the electrodes were in place, we showed the patients pictures and asked them to name them while we recorded their brain activity,” said co-corresponding author Dr. Xaq Pitkow, assistant professor of neuroscience and McNair Scholar at Baylor College of Medicine and assistant professor of electrical and computer engineering at Rice University.

“We then analyzed the data we recorded and derived a new level of understanding of how the brain network comes up with the right word and enables us to say that word,” said Dr. Nitin Tandon, professor in the Vivian L. Smith Department of Neurosurgery at McGovern Medical School at The University of Texas Health Science Center at Houston.

The researchers’ findings support the view that when a person names a picture, the different behavioral stages — looking at the image, thinking of the name and saying it — consistently correspond to dynamic interactions within neural networks.

“Before our findings, the typical view was that separate brain areas would be activated in sequence,” Pitkow said. “But we used more complex statistical methods and fast measurement methods, and found more interesting brain dynamics.”

“This methodological advance provides a template by which to assess other complex neural processes, as well as to explain disorders of language production,” Tandon said.

Story Source:

Materials provided by Baylor College of Medicine. Note: Content may be edited for style and length.

Decoding Beethoven’s music style using data science

EPFL researchers are investigating Beethoven’s composition style and they are using statistical techniques to quantify and explore the patterns that characterize musical structures in the Western classical tradition. They confirm what is expected against the backdrop of music theory for the classical music era, but go beyond a music theoretical approach by statistically characterizing the musical language of Beethoven for the very first time. Their study is based on the set of compositions known as the Beethoven String Quartets and the results are published in PLOS ONE on June 6th, 2019.

“New state-of-the-art methods in statistics and data science make it possible for us to analyze music in ways that were out of reach for traditional musicology. The young field of Digital Musicology is currently advancing a whole new range of methods and perspectives,” says Martin Rohrmeier who leads EPFL’s Digital and Cognitive Musicology Lab (DCML) in the College of Humanities’ Digital Humanities Institute. “The aim of our lab is to understand how music works.”

The Beethoven String Quartets refer to 16 quartets encompassing 70 single movements that Beethoven composed throughout his lifetime. He completed his first String Quartet composition at the turn of the 19th century when he was almost 30 years old, and the last in 1826 shortly before his death. A string quartet is a musical ensemble of four musicians playing string instruments: two violins, the viola, and the cello.

From music analysis to big data

For the study, Rohrmeier and colleagues plowed through the scores of all 16 of Beethoven’s String Quartets in digital and annotated form. The most time-consuming part of the work was generating the dataset, based on tens of thousands of annotations by music theory experts.

“We essentially generated a large digital resource from Beethoven’s music scores to look for patterns,” says Fabian C. Moss, first author of the PLOS ONE study.

When played, the String Quartets represent over 8 hours of music. The scores themselves contain almost 30,000 chord annotations. A chord is a set of notes that sound at the same time, and a note corresponds to a pitch.

In music analysis, chords can be classified according to the role they play in the musical piece. Two well-known types of chords are called the dominant and the tonic, which have central roles for the build-up of tension and release and for establishing musical phrases. But there is a large number of types of chords, including many variants of the dominant and tonic chords. The Beethoven String Quartets contain over 1000 different types of these chords.

“Our approach exemplifies the growing research field of digital humanities, in which data science methods and digital technologies are used to advance our understanding of real-world sources, such as literary texts, music or paintings, under new digital perspectives,” explains co-author Markus Neuwirth.

Beethoven’s statistical signature

Beethoven’s creative choices are now apparent through the filter of statistical analysis, thanks to this new data set generated by the researchers.

The study finds that very few chords govern most of the music, a phenomenon that is also known in linguistics, where very few words dominate language corpora. As expected from music theory on music from the classical period, the study shows that the compositions are particularly dominated by the dominant and tonic chords and their many variants. Also, the most frequent transition from one chord to the next happens from the dominant to the tonic. The researchers also found that chords strongly select for their order and, thus, define the direction of musical time. But the statistical methodology reveals more. It characterizes Beethoven’s specific composition style for the String Quartets, through a distribution of all the chords he used, how often they occur, and how they commonly transition from one to the other. In other words, it captures Beethoven’s composition style with a statistical signature.
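
A toy example of the kind of counting that underlies such a signature: the snippet below tallies chord frequencies and chord-to-chord transitions for a short, invented Roman-numeral sequence, standing in for the roughly 30,000 expert annotations used in the study.

```python
from collections import Counter

# Hypothetical, hand-made chord sequence (Roman-numeral labels), used only
# to illustrate frequency and transition statistics.
chords = ["I", "IV", "V", "I", "vi", "ii", "V", "I", "V", "I", "IV", "V", "I"]

chord_counts = Counter(chords)
transitions = Counter(zip(chords, chords[1:]))

print("most common chords     :", chord_counts.most_common(3))
print("most common transitions:", transitions.most_common(3))
# In this toy sequence the dominant-to-tonic move ("V" -> "I") dominates,
# mirroring the most frequent transition reported for the quartets.
```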

“This is just the beginning,” explains Moss. “We are continuing our work by extending the datasets to cover a broad range of composers and historical periods, and invite other researchers to join our search for the statistical basis of the inner workings of music.”

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Hillary Sanctuary. Note: Content may be edited for style and length.

From viruses to social bots, researchers unearth the structure of attacked networks

The human body’s mechanisms are marvelous, yet they haven’t given up all their secrets. In order to truly conquer human disease, it is crucial to understand what happens at the most elementary level.

Essential functions of the cell are carried out by protein molecules, which interact with each other in varying complexity. When a virus enters the body, it disrupts their interactions and manipulates them for its own replication. This is the foundation of genetic diseases, and it is of great interest to understand how viruses operate.

Adversaries like viruses inspired Paul Bogdan, associate professor in the Ming Hsieh Department of Electrical and Computer Engineering, and recent Ph.D. graduate Yuankun Xue from USC’s Cyber Physical Systems Group to determine how exactly they interact with proteins in the human body. “We tried to reproduce this problem using a mathematical model,” said Bogdan. Their groundbreaking statistical machine learning research, “Reconstructing missing complex networks against adversarial interventions,” was published in the journal Nature Communications earlier this April.

Xue, who earned his Ph.D. in electrical and computer engineering last year with the 2018 Best Dissertation Award, said: “Understanding the invisible networks of critical proteins and genes is challenging, and extremely important to design new medicines or gene therapies against viruses and even diseases like cancer.”

The ‘protein interaction network’ models each protein as a ‘node.’ If two proteins interact, there is an ‘edge’ connecting them. Xue explained, “An attack by a virus is analogous to removing certain nodes and links in this network.” Consequently, the original network is no longer observable.
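
A toy picture of this setup, with hypothetical proteins rather than the study’s data: build a small interaction network, delete a couple of nodes to mimic an attack, and see how much of the network becomes invisible.

```python
import networkx as nx

# Hypothetical protein interaction network: proteins are nodes, interactions
# are edges. A "viral attack" is modeled as removing nodes, which also hides
# every edge attached to them.
G = nx.Graph()
G.add_edges_from([
    ("P1", "P2"), ("P1", "P3"), ("P2", "P4"),
    ("P3", "P4"), ("P4", "P5"), ("P5", "P6"),
])

attacked = G.copy()
for protein in ("P4", "P5"):        # nodes removed by the attack
    attacked.remove_node(protein)

print("original:", G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
print("observed:", attacked.number_of_nodes(), "nodes,", attacked.number_of_edges(), "edges")
# The reconstruction problem is to infer the hidden nodes and edges from
# what remains observable after the attack.
```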

“Some networks are highly dynamic. The speed at which they change may be extremely fast or slow,” Bogdan said. “We may not have sensors to get accurate measurements. Part of the network cannot be observed and hence becomes invisible.”

To trace the effect of a viral attack, Bogdan and Xue needed to reconstruct the original network by finding a reliable estimate of the invisible part, which was no easy task. Said Bogdan: “The challenge is that you don’t see the links, you don’t see the nodes, and you don’t know the behavior of the virus.” To solve this problem, Xue added, “The trick is to rely on a statistical machine learning framework to trace all possibilities and find the most probable estimate.”

In sharp contrast to prior research, the lab’s novel contribution is that they actively incorporate the influence and causality of the attack, or ‘adversarial intervention’, into their learning algorithm rather than treat it as a random sampling process. Bogdan explained, “Its real power lies in its generality — it can work with any type of attack and network model.”

Due to the generality of their proposed framework, their research has far-reaching applications to any network reconstruction problem involving adversarial attack, in diverse fields such as ecology, social science, neuroscience, and network security. Their paper has also demonstrated its capability to determine the influence of trolls and bots on social media users.

Bogdan plans to extend their work by experimenting with a range of attack models, more complex and varied datasets, and larger network sizes to understand their effect on the reconstructed network.

Story Source:

Materials provided by University of Southern California. Note: Content may be edited for style and length.

Phase transitions: The math behind the music

Next time you listen to a favorite tune or wonder at the beauty of a natural sound, you might also end up pondering the math behind the music.

You will, anyway, if you spend any time talking with Jesse Berezovsky, an associate professor of physics at Case Western Reserve University. The longtime science researcher and part-time viola player has become consumed with understanding and explaining the connective tissue between the two disciplines — more specifically, how the ordered structure of music emerges from the general chaos of sound.

“Why is music composed according to so many rules? Why do we organize sounds in this way to create music?” he asks in a short explainer video he recently made about his research. “To address that question, we can borrow methods from a related question: How do atoms in a random gas or liquid come together to form a particular crystal?”

Phase transitions in physics, music

The answer in physics — and music, Berezovsky argues — is called “phase transitions” and comes about because of a balance between order and disorder, or entropy, he said.

“We can look at a balance — or a competition — between dissonance and entropy of sound — and see that phase transitions can also occur from disordered sound to the ordered structures of music,” he said.
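
In the language of statistical mechanics, that competition can be summarized schematically as minimizing a free-energy-like quantity (a schematic form for illustration; the paper’s exact functional may differ):

F = ⟨D⟩ − T · S

where ⟨D⟩ is the average dissonance of a collection of pitches, S is its entropy, and T is a temperature-like parameter that sets how heavily disorder is weighted against dissonance. Lowering T tips the balance toward order, which is where the transition from unstructured sound to discrete sets of pitches comes in.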

Mixing math and music is not new. Mathematicians have long been fascinated with the structure of music. The American Mathematical Society, for example, devotes part of its web page (https://www.ams.org/publicoutreach/math-and-music) to exploring the idea (Pythagoras, anyone? “There is geometry in the humming of the strings, there is music in the spacing of the spheres.”)

But Berezovsky contends that much of the thinking, until now, has been a top-down approach, applying mathematical ideas to existing musical compositions as a way of understanding already existing music.

He contends he’s uncovering the “emergent structures of musical harmony” inherent in the art, just as order comes from disorder in the physical world. He believes that could mean a whole new way of looking at music of the past, present and future.

“I believe that this model could shed light on the very structures of harmony, particularly in Western music,” Berezovsky said. “But we can take it further: These ideas could provide a new lens for studying the entire system of tuning and harmony across cultures and across history — maybe even a road map for exploring new ideas in those areas.

“Or for any of us, maybe it’s just another way of just appreciating music — seeing the emergence of music the way we do the formation of snowflakes or gemstones.”

Emergent structures in music

Berezovsky said his theory is more than just an illustration of how we think about music. Instead, he says the mathematical structure is actually the fundamental underpinning of music itself, making the resultant octaves and other arrangements a foregone conclusion, not an arbitrary invention by humans.

His research, published May 17 in the journal Science Advances, “aims to explain why basic ordered patterns emerge in music, using the same statistical mechanics framework that describes emergent order across phase transitions in physical systems.”

In other words, the same universal principles that guide the arrangement of atoms when they organize into a crystal from a gas or liquid are also behind the fact that “phase transitions occur in this model from disordered sound to discrete sets of pitches, including the 12-fold octave division used in Western music.”

The theory also speaks to why we enjoy music — because it is caught in the tension between being too dissonant and too complex.

A single note played continuously would completely lack dissonance (low “energy”), but would be wholly uninteresting to the human ear, while an overly complex piece of music (high entropy) is generally not pleasing to the human ear. Most music — across time and cultures — exists in that tension between the two extremes, Berezovsky said.

Statistical model could predict future disease outbreaks

Several University of Georgia researchers teamed up to create a statistical method that may allow public health and infectious disease forecasters to better predict disease reemergence, especially for preventable childhood infections such as measles and pertussis.

As described in the journal PLOS Computational Biology, their five-year project resulted in a model that shows how subtle changes in the stream of reported cases of a disease may be predictive of both an approaching epidemic and of the final success of a disease eradication campaign.

“We hope that in the near future, we will be able to monitor and track warning signals for emerging diseases identified by this model,” said John Drake, Distinguished Research Professor of Ecology and director of the Center for the Ecology of Infectious Diseases, who researches the dynamics of biological epidemics. His current projects include studies of Ebola virus in West Africa and Middle East respiratory syndrome-related coronavirus in the Horn of Africa.

In recent years, the reemergence of measles, mumps, polio, whooping cough and other vaccine-preventable diseases has sparked a refocus on emergency preparedness.

“Research has been done in ecology and climate science about tipping points in climate change,” he said. “We realized this is mathematically similar to disease dynamics.”

Drake and colleagues focused on “critical slowing down,” or the loss of stability that occurs in a system as a tipping point is reached. This slowing down can result from pathogen evolution, changes in contact rates of infected individuals, and declines in vaccination. All these changes may affect the spread of a disease, but they often take place gradually and without much consequence until a tipping point is crossed.

Most data analysis methods are designed to characterize disease spread after the tipping point has already been crossed.

“We saw a need to improve the ways of measuring how well-controlled a disease is, which can be difficult to do in a very complex system, especially when we observe a small fraction of the true number of cases that occur,” said Eamon O’Dea, a postdoctoral researcher in Drake’s laboratory who focuses on disease ecology.

The research team found that their predictions were consistent with well-known findings of British epidemiologists Roy Anderson and Robert May, who compared the duration of epidemic cycles in measles, rubella, mumps, smallpox, chickenpox, scarlet fever, diphtheria and pertussis from the 1880s to 1980s. For instance, Anderson and May found that measles in England and Wales slowed down after extensive immunization in 1968. Similarly, the model shows that infectious diseases slow as an immunization threshold is approached. Slight variations in infection levels could be useful early warning signals for disease reemergence that results from a decline in vaccine uptake, they wrote.
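
To see what such an early-warning signal can look like in data, the sketch below simulates a case-report series whose stability slowly erodes toward a tipping point and tracks two standard slowing-down indicators, rolling variance and lag-1 autocorrelation; the dynamics and numbers are invented for illustration and are not the study’s model.

```python
import numpy as np

# Invented illustration of critical slowing down: as the system's stability
# erodes, fluctuations become larger and more persistent, so rolling variance
# and lag-1 autocorrelation creep upward before the tipping point.
rng = np.random.default_rng(4)
n = 400
x = np.zeros(n)
for t in range(1, n):
    stability = 1.0 - 0.9 * t / n            # slowly eroding toward zero
    x[t] = (1.0 - stability) * x[t - 1] + rng.normal()

def rolling(stat, series, window=50):
    return [stat(series[i - window:i]) for i in range(window, len(series) + 1)]

variance = rolling(np.var, x)
autocorr = rolling(lambda w: np.corrcoef(w[:-1], w[1:])[0, 1], x)

print("early  variance, lag-1 autocorr:", round(variance[0], 2), round(autocorr[0], 2))
print("late   variance, lag-1 autocorr:", round(variance[-1], 2), round(autocorr[-1], 2))
```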

“Our goal is to validate this on smaller scales so states and cities can potentially predict disease, which is practical in terms of how to make decisions about vaccines,” O’Dea said. “This could be particularly useful in countries where measles is still a high cause of mortality.”

To illustrate how the infectious disease model behaves, the team created a visualization that looks like a series of bowls with balls rolling in them. In the model, vaccine coverage affects the shallowness of the bowl and the speed of the ball rolling in it.

“Very often, the conceptual side of science is not emphasized as much as it should be, and we were pleased to find the right visuals to help others understand the science,” said Eric Marty, an ecology researcher who specializes in data visualization.

As part of Project AERO, which stands for Anticipating Emerging and Re-emerging Outbreaks, Drake and colleagues are creating interactive tools based on critical slowing down for researchers and policymakers to use in the field and guide decisions. For instance, the team is developing an interactive dashboard that will help non-scientists plot and analyze data to understand the current trends for a certain infectious disease. They’re presenting a prototype to fellow researchers now and anticipating a public release within the next year.

“If a computer model of a particular disease was sufficiently detailed and accurate, it would be possible to predict the course of an outbreak using simulation,” Marty said. “But if you don’t have a good model, as is often the case, then the statistics of critical slowing down might still give us early warning of an outbreak.”