Optimal models of thermodynamic properties

Argonne team combines cutting-edge modeling with 300-year-old statistical analysis technique to enhance material properties.

At some point in your life, you’ve probably had somebody — a parent, a teacher, a mentor — tell you that “the more you practice, the better you become.” The intuition behind that advice, that understanding improves as evidence accumulates, was formalized into a now-famous mathematical expression by Thomas Bayes, an 18th-century British minister with an interest in games of chance.

Used to examine behaviors, properties and other mechanisms that constitute a concept or phenomenon, Bayesian analysis employs an array of varied, but similar, data to statistically inform an optimal model of that concept or phenomenon.

“Simply put, Bayesian statistics is a way of starting with our best current understanding and then updating that with new data from experiments or simulations to come up with a better-informed understanding,” said Noah Paulson, a computational materials scientist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

The method has met with some success over the 300 years since its inception, but it is an idea whose time has finally arrived.

In some fields, like cosmology, researchers have been successfully developing and sharing Bayesian techniques and codes for some time. In others, like materials science, implementation of Bayesian analysis methods is just beginning to pay dividends.

“Simply put, Bayesian statistics is a way of defining something we already understand and then updating that with new data from experiments or simulations to come up with a more accurate understanding.” — Noah Paulson, computational materials scientist, Argonne National Laboratory

Paulson and several Argonne colleagues are applying Bayesian methods to quantify uncertainties in the thermodynamic properties of materials. In other words, they want to determine how much confidence they can place in the data they collect about materials and the mathematical models used to represent those data.

While the statistical techniques are applicable to many fields, the researchers set out to create an optimal model of the thermodynamic properties of hafnium (Hf), a metal emerging as a key component in computer electronics. Results derived from this approach will be published in the September 2019 issue of the International Journal of Engineering Science.

“We found that we didn’t know all that we could about this material because there were so many datasets and so much conflicting information. So we performed this Bayesian analysis to propose a model that the community can embrace and use in research and application,” said Marius Stan, who leads intelligent materials design in Argonne’s Applied Materials division (AMD) and is a senior fellow at both the University of Chicago’s Consortium for Advanced Science and Engineering and the Northwestern-Argonne Institute for Science and Engineering.

To derive an optimal model of a material’s thermodynamic properties, researchers use some prior knowledge or data related to the subject matter as a starting point.

In this case, the team was looking to define the best models for the enthalpy (the amount of energy in a material) and the specific heat (the heat necessary to increase the temperature of the unit mass of the material by one degree Celsius) of hafnium. Represented as equations and mathematical expressions, the models have different parameters that control them. The goal is to find the optimal parameters.
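For readers who want the connection made explicit: the two quantities the team models are linked by a standard thermodynamic identity, so parameters fitted to enthalpy data also constrain the specific heat. This relation is textbook thermodynamics, not something quoted from the paper.

```latex
% Specific heat at constant pressure is the temperature derivative of the enthalpy.
c_p(T) = \left( \frac{\partial H}{\partial T} \right)_p
```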

“We had to start with a guess of what those parameters should be,” said Paulson of AMD’s Thermal and Structural Materials group. “Looking through the literature we found some ranges and values that made sense, so we used those for our prior distribution.”

One of the parameters the researchers explored is the temperature of a crystal’s highest normal mode of vibration. Referred to as the Einstein or Debye temperature, this parameter affects a material’s specific heat.
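In the Einstein picture, for example, that single characteristic temperature fixes the shape of the specific-heat curve through the textbook expression below, shown here as background; the paper's actual models may combine several such terms.

```latex
% Einstein-model heat capacity of N atoms with characteristic temperature \theta_E.
C_V(T) = 3 N k_B \left( \frac{\theta_E}{T} \right)^{2}
         \frac{e^{\theta_E / T}}{\left( e^{\theta_E / T} - 1 \right)^{2}}
```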

The prior — or initial — guess is based on existing models, preliminary data or the intuition of experts in the field. Using calibration data from experiments or simulation, Bayesian statistics update that prior knowledge and determine the posterior — the updated understanding of the model. The Bayesian framework can then determine whether new data are in better or worse agreement with the model being tested.
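A minimal sketch of that prior-to-posterior update, written against the Einstein expression above, might look like the following. It is purely illustrative: the temperatures, measured values, noise level, and prior range are invented, and this is not the Argonne team's code.

```python
import numpy as np

def einstein_cv(temp, theta_e):
    """Einstein-model specific heat per atom, in units of k_B."""
    x = theta_e / temp
    return 3.0 * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

# Hypothetical calibration data: temperatures (K) and measured C_V (units of k_B).
t_data = np.array([300.0, 600.0, 900.0, 1200.0])
cv_data = np.array([2.75, 2.93, 2.97, 2.99])
sigma = 0.05  # assumed measurement uncertainty

# Prior: a plausible range for the Einstein temperature, uniform over a grid.
theta_grid = np.linspace(100.0, 400.0, 601)
log_prior = np.zeros_like(theta_grid)

# Likelihood of the data for each candidate parameter value (Gaussian noise model).
log_like = np.array([
    -0.5 * np.sum((cv_data - einstein_cv(t_data, th))**2 / sigma**2)
    for th in theta_grid
])

# Posterior: the prior updated by the calibration data, normalized on the grid.
log_post = log_prior + log_like
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta_grid)

mean = np.trapz(theta_grid * post, theta_grid)
std = np.sqrt(np.trapz((theta_grid - mean)**2 * post, theta_grid))
print(f"posterior Einstein temperature: {mean:.0f} +/- {std:.0f} K")
```

The posterior mean plays the role of the best-fit parameter, and the posterior spread is the error bar the article emphasizes.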

“Like cosmology, materials science must find the optimal model and parameter values that best explain the data and then determine the uncertainties related to these parameters. There’s not much point in having a best-fit parameter value without an error bar,” said team member Elise Jennings, a computational scientist in statistics with the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, and an associate of the Kavli Institute for Cosmological Physics at the University of Chicago.

And that, she said, is the biggest challenge for materials science: a lack of error bars or uncertainties noted in available datasets. The hafnium research, for example, relied on datasets selected from previously published papers, but error ranges were either absent or excluded.

So, in addition to presenting models for the specific thermodynamic properties of hafnium, the article also explores techniques by which materials science and other fields of study can make allowances for datasets that don’t have uncertainties.
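One generic Bayesian device for that situation, offered here as an illustration rather than as the paper's specific technique, is to promote the unreported measurement uncertainty to an extra unknown with its own prior and average over it:

```latex
% theta: model parameters, y: reported data, sigma: unreported measurement uncertainty.
p(\theta \mid y) \;\propto\; p(\theta) \int p(y \mid \theta, \sigma)\, p(\sigma)\, d\sigma
```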

“For a scientist or an engineer, this is an important problem,” said Stan. “We’re presenting a better way of evaluating how valuable our information is. We want to know how much trust we can put in the models and the data. And this work reveals a methodology, a better way of evaluating that.”

A paper based on the study, “Bayesian strategies for uncertainty quantification of the thermodynamic properties of materials,” is available online (June 13) and will appear in the September 2019 edition of the International Journal of Engineering Science. Noah Paulson, Elise Jennings and Marius Stan collaborated on the research.

This study is supported through the CHiMaD Program, funded by the National Institute of Standards and Technology (NIST).

Understanding brain activity when you name what you see

You see an object, you think of its name and then you say it. This apparently simple activity engages a set of brain regions that must interact with each other to produce the behavior quickly and accurately. A report published in eNeuro shows that a reliable sequence of neural interactions occurs in the human brain, corresponding to the visual processing stage, the language stage when we think of the name, and finally the articulation stage when we say the name. The study reveals that the neural processing does not involve just a sequence of different brain regions; instead, it engages a sequence of changing interactions between those brain regions.

“In this study, we worked with patients with epilepsy whose brain activity was being recorded with electrodes to find where their seizures started. While the electrodes were in place, we showed the patients pictures and asked them to name them while we recorded their brain activity,” said co-corresponding author Dr. Xaq Pitkow, assistant professor of neuroscience and McNair Scholar at Baylor College of Medicine and assistant professor of electrical and computer engineering at Rice University.

“We then analyzed the data we recorded and derived a new level of understanding of how the brain network comes up with the right word and enables us to say that word,” said Dr. Nitin Tandon, professor in the Vivian L. Smith Department of Neurosurgery at McGovern Medical School at The University of Texas Health Science Center at Houston.

The researchers’ findings support the view that when a person names a picture, the different behavioral stages — looking at the image, thinking of the name and saying it — consistently correspond to dynamic interactions within neural networks.

“Before our findings, the typical view was that separate brain areas would be activated in sequence,” Pitkow said. “But we used more complex statistical methods and fast measurement methods, and found more interesting brain dynamics.”

“This methodological advance provides a template by which to assess other complex neural processes, as well as to explain disorders of language production,” Tandon said.

Story Source:

Materials provided by Baylor College of Medicine. Note: Content may be edited for style and length.

Decoding Beethoven’s music style using data science


EPFL researchers are investigating Beethoven’s composition style and they are using statistical techniques to quantify and explore the patterns that characterize musical structures in the Western classical tradition. They confirm what is expected against the backdrop of music theory for the classical music era, but go beyond a music theoretical approach by statistically characterizing the musical language of Beethoven for the very first time. Their study is based on the set of compositions known as the Beethoven String Quartets and the results are published in PLOS ONE on June 6th, 2019.

“New state-of-the-art methods in statistics and data science make it possible for us to analyze music in ways that were out of reach for traditional musicology. The young field of Digital Musicology is currently advancing a whole new range of methods and perspectives,” says Martin Rohrmeier who leads EPFL’s Digital and Cognitive Musicology Lab (DCML) in the College of Humanities’ Digital Humanities Institute. “The aim of our lab is to understand how music works.”

The Beethoven String Quartets refer to 16 quartets encompassing 70 single movements that Beethoven composed throughout his lifetime. He completed his first String Quartet composition at the turn of the 19th century when he was almost 30 years old, and the last in 1826 shortly before his death. A string quartet is a musical ensemble of four musicians playing string instruments: two violins, the viola, and the cello.

From music analysis to big data

For the study, Rohrmeier and colleagues plowed through the scores of all 16 of Beethoven’s String Quartets in digital and annotated form. The most time-consuming part of the work was generating the dataset, based on tens of thousands of annotations by music theory experts.

“We essentially generated a large digital resource from Beethoven’s music scores to look for patterns,” says Fabian C. Moss, first author of the PLOS ONE study.

When played, the String Quartets represent over 8 hours of music. The scores themselves contain almost 30,000 chord annotations. A chord is a set of notes that sound at the same time, and a note corresponds to a pitch.

In music analysis, chords can be classified according to the role they play in the musical piece. Two well-known types of chords are called the dominant and the tonic, which have central roles for the build-up of tension and release and for establishing musical phrases. But there is a large number of types of chords, including many variants of the dominant and tonic chords. The Beethoven String Quartets contain over 1000 different types of these chords.

“Our approach exemplifies the growing research field of digital humanities, in which data science methods and digital technologies are used to advance our understanding of real-world sources, such as literary texts, music or paintings, under new digital perspectives,” explains co-author Markus Neuwirth.

Beethoven’s statistical signature

Beethoven’s creative choices are now apparent through the filter of statistical analysis, thanks to this new data set generated by the researchers.

The study finds that very few chords govern most of the music, a phenomenon that is also known in linguistics, where very few words dominate language corpora. As expected from music theory on music from the classical period, the study shows that the compositions are particularly dominated by the dominant and tonic chords and their many variants. Also, the most frequent transition from one chord to the next happens from the dominant to the tonic. The researchers also found that chords strongly select for their order and, thus, define the direction of musical time. But the statistical methodology reveals more. It characterizes Beethoven’s specific composition style for the String Quartets, through a distribution of all the chords he used, how often they occur, and how they commonly transition from one to the other. In other words, it captures Beethoven’s composition style with a statistical signature.
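The kind of counting behind that signature can be sketched in a few lines: tally how often each chord label occurs and how often each label follows another. The chord symbols below are generic placeholders, not entries from the study's annotated dataset.

```python
from collections import Counter

# Hypothetical sequence of chord annotations (stand-ins for the ~30,000 real labels).
chords = ["I", "V", "I", "IV", "V", "I", "V7", "I", "ii", "V", "I"]

# Unigram distribution: how often each chord type occurs overall.
counts = Counter(chords)
total = sum(counts.values())
frequencies = {chord: n / total for chord, n in counts.items()}

# Transition probabilities: how often one chord is followed by another.
transitions = Counter(zip(chords, chords[1:]))
from_counts = Counter(chords[:-1])
transition_probs = {(a, b): n / from_counts[a] for (a, b), n in transitions.items()}

print("most common chords:", counts.most_common(3))
print("P(next = I | current = V):", transition_probs.get(("V", "I"), 0.0))
```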

“This is just the beginning,” explains Moss. “We are continuing our work by extending the datasets to cover a broad range of composers and historical periods, and invite other researchers to join our search for the statistical basis of the inner workings of music.”

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Hillary Sanctuary. Note: Content may be edited for style and length.

From viruses to social bots, researchers unearth the structure of attacked networks

The human body’s mechanisms are marvelous, yet they haven’t given up all their secrets. In order to truly conquer human disease, it is crucial to understand what happens at the most elementary level.

Essential functions of the cell are carried out by protein molecules, which interact with each other in varying complexity. When a virus enters the body, it disrupts their interactions and manipulates them for its own replication. This is the foundation of genetic diseases, and it is of great interest to understand how viruses operate.

Adversaries like viruses inspired Paul Bogdan, associate professor in the Ming Hsieh Department of Electrical and Computer Engineering, and recent Ph.D. graduate Yuankun Xue, from USC’s Cyber Physical Systems Group, to determine how exactly they interact with proteins in the human body. “We tried to reproduce this problem using a mathematical model,” said Bogdan. Their groundbreaking statistical machine learning research, “Reconstructing missing complex networks against adversarial interventions,” was published in the journal Nature Communications earlier this April.

Xue, who earned his Ph.D. in electrical and computer engineering last year with the 2018 Best Dissertation Award, said: “Understanding the invisible networks of critical proteins and genes is challenging, and extremely important to design new medicines or gene therapies against viruses and even diseases like cancer.”

The ‘protein interaction network’ models each protein as a ‘node.’ If two proteins interact, there is an ‘edge’ connecting them. Xue explained, “An attack by a virus is analogous to removing certain nodes and links in this network.” Consequently, the original network is no longer observable.
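As a concrete (and deliberately tiny) illustration of that picture, the sketch below represents a hypothetical protein interaction network as an adjacency structure and models an attack as the removal of targeted nodes and their edges; it is not the reconstruction algorithm from the Nature Communications paper.

```python
# A tiny hypothetical protein interaction network: each protein maps to its interaction partners.
network = {
    "P1": {"P2", "P3"},
    "P2": {"P1", "P4"},
    "P3": {"P1", "P4"},
    "P4": {"P2", "P3", "P5"},
    "P5": {"P4"},
}

def attack(graph, removed):
    """Return the network still observable after the given nodes (and their edges) are removed."""
    return {
        node: {nbr for nbr in neighbors if nbr not in removed}
        for node, neighbors in graph.items()
        if node not in removed
    }

# A viral attack that knocks out the hub protein P4; P5 is left isolated and two edges vanish.
observed = attack(network, removed={"P4"})
print(observed)
```

Reconstruction then means inferring the original dictionary from the depleted one, which is what the team's statistical machine learning framework is designed to do.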

“Some networks are highly dynamic. The speed at which they change may be extremely fast or slow,” Bogdan said. “We may not have sensors to get accurate measurements. Part of the network cannot be observed and hence becomes invisible.”

To trace the effect of a viral attack, Bogdan and Xue needed to reconstruct the original network by finding a reliable estimate of the invisible part, which was no easy task. Said Bogdan: “The challenge is that you don’t see the links, you don’t see the nodes, and you don’t know the behavior of the virus.” To solve this problem, Xue added, “The trick is to rely on a statistical machine learning framework to trace all possibilities and find the most probable estimate.”

In sharp contrast to prior research, the lab’s novel contribution is that they actively incorporate the influence and causality of the attack, or ‘adversarial intervention’, into their learning algorithm rather than treat it as a random sampling process. Bogdan explained, “Its real power lies in its generality — it can work with any type of attack and network model.”

Due to the generality of their proposed framework, their research has far-reaching applications to any network reconstruction problem involving adversarial attack, in diverse fields such as ecology, social science, neuroscience, and network security. Their paper has also demonstrated its capability to determine the influence of trolls and bots on social media users.

Bogdan plans to extend their work by experimenting with a range of attack models, more complex and varied datasets, and larger network sizes to understand their effect on the reconstructed network.

Story Source:

Materials provided by University of Southern California. Note: Content may be edited for style and length.

Phase transitions: The math behind the music


Next time you listen to a favorite tune or wonder at the beauty of a natural sound, you might also end up pondering the math behind the music.

You will, anyway, if you spend any time talking with Jesse Berezovsky, an associate professor of physics at Case Western Reserve University. The longtime science researcher and part-time viola player has become consumed with understanding and explaining the connective tissue between the two disciplines — more specifically, how the ordered structure of music emerges from the general chaos of sound.

“Why is music composed according to so many rules? Why do we organize sounds in this way to create music?” he asks in a short explainer video he recently made about his research. “To address that question, we can borrow methods from a related question: How do atoms in a random gas or liquid come together to form a particular crystal?”

Phase transitions in physics, music

The answer in physics — and in music, Berezovsky argues — is called “phase transitions,” which come about because of a balance between order and disorder, or entropy.

“We can look at a balance — or a competition — between dissonance and entropy of sound — and see that phase transitions can also occur from disordered sound to the ordered structures of music,” he said.

Mixing math and music is not new. Mathematicians have long been fascinated with the structure of music. The American Mathematical Society, for example, devotes part of its web page (https://www.ams.org/publicoutreach/math-and-music) to exploring the idea (Pythagoras, anyone? “There is geometry in the humming of the strings, there is music in the spacing of the spheres.”)

But Berezovsky contends that much of the thinking, until now, has been a top-down approach, applying mathematical ideas to existing musical compositions as a way of understanding already existing music.

He contends he’s uncovering the “emergent structures of musical harmony” inherent in the art, just as order comes from disorder in the physical world. He believes that could mean a whole new way of looking at music of the past, present and future.

“I believe that this model could shed light on the very structures of harmony, particularly in Western music,” Berezovsky said. “But we can take it further: These ideas could provide a new lens for studying the entire system of tuning and harmony across cultures and across history — maybe even a road map for exploring new ideas in those areas.

“Or for any of us, maybe it’s just another way of just appreciating music — seeing the emergence of music the way we do the formation of snowflakes or gemstones.”

Emergent structures in music

Berezovsky said his theory is more than just an illustration of how we think about music. Instead, he says the mathematical structure is actually the fundamental underpinning of music itself, making the resultant octaves and other arrangements a foregone conclusion, not an arbitrary invention by humans.

His research, published May 17 in the journal Science Advances, “aims to explain why basic ordered patterns emerge in music, using the same statistical mechanics framework that describes emergent order across phase transitions in physical systems.”

In other words, the same universal principles that guide the arrangement of atoms when they organize into a crystal from a gas or liquid are also behind the fact that “phase transitions occur in this model from disordered sound to discrete sets of pitches, including the 12-fold octave division used in Western music.”

The theory also speaks to why we enjoy music — because it is caught in the tension between being too dissonant and too complex.

A single note played continuously would completely lack dissonance (low “energy”), but would be wholly uninteresting to the human ear, while an overly complex piece of music (high entropy) is generally not pleasing to the human ear. Most music — across time and cultures — exists in that tension between the two extremes, Berezovsky said.
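Written by analogy with statistical mechanics, the balance Berezovsky describes amounts to minimizing a free-energy-like quantity. The schematic form below is a paraphrase of that idea; the functional in the published paper may be defined differently.

```latex
% D: dissonance (playing the role of energy), S: entropy of the pitch distribution,
% T: a temperature-like parameter controlling the competition between the two.
F = \langle D \rangle - T\,S
```

In this analogy, ordered tuning systems such as the 12-fold octave division appear as low-F phases, and disordered sound as the high-temperature phase.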

Statistical model could predict future disease outbreaks

Several University of Georgia researchers teamed up to create a statistical method that may allow public health and infectious disease forecasters to better predict disease reemergence, especially for preventable childhood infections such as measles and pertussis.

As described in the journal PLOS Computational Biology, their five-year project resulted in a model that shows how subtle changes in the stream of reported cases of a disease may be predictive of both an approaching epidemic and of the final success of a disease eradication campaign.

“We hope that in the near future, we will be able to monitor and track warning signals for emerging diseases identified by this model,” said John Drake, Distinguished Research Professor of Ecology and director of the Center for the Ecology of Infectious Diseases, who researches the dynamics of biological epidemics. His current projects include studies of Ebola virus in West Africa and Middle East respiratory syndrome-related coronavirus in the Horn of Africa.

In recent years, the reemergence of measles, mumps, polio, whooping cough and other vaccine-preventable diseases has sparked a refocus on emergency preparedness.

“Research has been done in ecology and climate science about tipping points in climate change,” he said. “We realized this is mathematically similar to disease dynamics.”

Drake and colleagues focused on “critical slowing down,” or the loss of stability that occurs in a system as a tipping point is reached. This slowing down can result from pathogen evolution, changes in contact rates of infected individuals, and declines in vaccination. All these changes may affect the spread of a disease, but they often take place gradually and without much consequence until a tipping point is crossed.

Most data analysis methods are designed to characterize disease spread after the tipping point has already been crossed.

“We saw a need to improve the ways of measuring how well-controlled a disease is, which can be difficult to do in a very complex system, especially when we observe a small fraction of the true number of cases that occur,” said Eamon O’Dea, a postdoctoral researcher in Drake’s laboratory who focuses on disease ecology.

The research team found that their predictions were consistent with well-known findings of British epidemiologists Roy Anderson and Robert May, who compared the duration of epidemic cycles in measles, rubella, mumps, smallpox, chickenpox, scarlet fever, diphtheria and pertussis from the 1880s to 1980s. For instance, Anderson and May found that measles in England and Wales slowed down after extensive immunization in 1968. Similarly, the model shows that infectious diseases slow as an immunization threshold is approached. Slight variations in infection levels could be useful early warning signals for disease reemergence that results from a decline in vaccine uptake, they wrote.
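A generic way to look for critical slowing down in surveillance data, sketched below with invented case counts and not tied to the UGA team's specific model, is to track rolling variance and lag-1 autocorrelation of reported cases; both tend to rise as a tipping point is approached.

```python
import numpy as np

def rolling_indicators(cases, window):
    """Rolling variance and lag-1 autocorrelation, two common early-warning signals."""
    variances, autocorrs = [], []
    for i in range(len(cases) - window + 1):
        w = np.asarray(cases[i:i + window], dtype=float)
        variances.append(w.var())
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return variances, autocorrs

# Hypothetical weekly case counts that drift upward as control weakens.
rng = np.random.default_rng(0)
cases = [max(0.0, 10 + 0.3 * t + rng.normal(0, 1 + 0.05 * t)) for t in range(104)]

variance, autocorr = rolling_indicators(cases, window=26)
print(f"variance: {variance[0]:.1f} -> {variance[-1]:.1f}")
print(f"lag-1 autocorrelation: {autocorr[0]:.2f} -> {autocorr[-1]:.2f}")
```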

“Our goal is to validate this on smaller scales so states and cities can potentially predict disease, which is practical in terms of how to make decisions about vaccines,” O’Dea said. “This could be particularly useful in countries where measles is still a high cause of mortality.”

To illustrate how the infectious disease model behaves, the team created a visualization that looks like a series of bowls with balls rolling in them. In the model, vaccine coverage affects the shallowness of the bowl and the speed of the ball rolling in it.

“Very often, the conceptual side of science is not emphasized as much as it should be, and we were pleased to find the right visuals to help others understand the science,” said Eric Marty, an ecology researcher who specializes in data visualization.

As part of Project AERO, which stands for Anticipating Emerging and Re-emerging Outbreaks, Drake and colleagues are creating interactive tools based on critical slowing down for researchers and policymakers to use in the field and guide decisions. For instance, the team is developing an interactive dashboard that will help non-scientists plot and analyze data to understand the current trends for a certain infectious disease. They’re presenting a prototype to fellow researchers now and anticipating a public release within the next year.

“If a computer model of a particular disease was sufficiently detailed and accurate, it would be possible to predict the course of an outbreak using simulation,” Marty said. “But if you don’t have a good model, as is often the case, then the statistics of critical slowing down might still give us early warning of an outbreak.”

Coincidence helps with quantum measurements


Quantum phenomena are experimentally difficult to deal with, and the effort increases dramatically with the size of the system. For some years now, scientists have been capable of controlling small quantum systems and investigating their quantum properties. Such quantum simulations are considered promising early applications of quantum technologies that could solve problems where simulations on conventional computers fail. However, the quantum systems used as quantum simulators must continue to grow, and the entanglement of many particles remains a phenomenon that is difficult to understand. “In order to operate a quantum simulator consisting of ten or more particles in the laboratory, we must characterize the states of the system as accurately as possible,” explains Christian Roos from the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences.

So far, quantum state tomography has been used for the characterization of quantum states, with which the system can be completely described. This method, however, involves a very high measuring and computing effort and is not suitable for systems with more than half a dozen particles. Two years ago, the researchers led by Christian Roos, together with colleagues from Germany and Great Britain, presented a very efficient method for the characterization of complex quantum states. However, only weakly entangled states can be described with this method. This issue is now circumvented by a new method presented last year by the theorists led by Peter Zoller, which can be used to characterize any entangled state. Together with experimental physicists Rainer Blatt and Christian Roos and their team, they have now demonstrated this method in the laboratory.

Quantum simulations on larger systems

“The new method is based on the repeated measurement of randomly selected transformations of individual particles. The statistical evaluation of the measurement results then provides information about the degree of entanglement of the system,” explains Andreas Elben from Peter Zoller’s team. The Austrian physicists demonstrated the process in a quantum simulator consisting of several ions arranged in a row in a vacuum chamber. Starting from a simple state, the researchers let the individual particles interact with the help of laser pulses and thus generate entanglement in the system. “We perform 500 local transformations on each ion and repeat the measurements a total of 150 times in order to then be able to use statistical methods to determine information about the entanglement state from the measurement results,” explains PhD student Tiff Brydges from the Institute of Quantum Optics and Quantum Information.
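The randomized-measurement approach estimates the purity of a subsystem from correlations between outcome probabilities obtained under different random local rotations. One estimator of this type, quoted from the randomized-measurement literature and given here as background rather than as the exact formula used in this experiment, is:

```latex
% N_A: number of ions in subsystem A; P_U(s): probability of bit string s after random
% local unitaries U; D[s,s']: Hamming distance; the bar denotes averaging over unitaries.
\overline{\operatorname{Tr}\!\left(\rho_A^{2}\right)}
  = 2^{N_A} \sum_{s,\,s'} (-2)^{-D[s,\,s']}\; \overline{P_U(s)\, P_U(s')}
```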

In the work now published in Science, the Innsbruck physicists characterize the dynamical evolution of a ten-ion system, as well as a ten-ion subsystem of a 20-ion chain. “In the laboratory, this new method helps us a lot because it enables us to understand our quantum simulator even better and, for example, to assess the purity of the entanglement more precisely,” says Christian Roos, who expects that the new method can be successfully applied to quantum systems with up to several dozen particles.

The scientific work was published in the journal Science and financially supported by the European Research Council ERC and the Austrian Science Fund FWF. “This publication shows once again the fruitful cooperation between the theoretical physicists and the experimental physicists here in Innsbruck,” emphasizes Peter Zoller. “At the University of Innsbruck and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, young researchers from both fields find very good conditions for research work that is competitive worldwide.”

Story Source:

Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

Study finds natural variation in sex ratios at birth and sex ratio inflation in 12 countries

An international team of researchers, led by UMass Amherst biostatistician Leontine Alkema and her former Ph.D. student Fengqing Chao, developed a new estimation method for assessing natural variations in the sex ratio at birth (SRB) for all countries in the world.

In the study published Monday, April 15 in Proceedings of the National Academy of Sciences (PNAS), the researchers found natural variation in regional baseline SRBs that differ from the previously held standard baseline male-to-female ratio of 1.05 for most regions.

They also identified 12 countries with strong evidence of sex ratio at birth imbalances, or sex ratio inflation, due to sex-selective abortions and a preference for sons.

“Given that sex ratios at birth are still inflated in some countries and could increase in the future in other countries, the monitoring of the sex ratio at birth and how it compares with expected baseline levels is incredibly important to inform policy and programs when sex ratio inflations are detected,” says Alkema, associate professor in the School of Public Health and Health Sciences.

Alkema adds, “While prior studies have shown differences in the sex ratio at birth, for example based on ethnicity in population subgroups, there is no prior study that we know of that has assessed regional baseline levels of the SRB. When we did the estimation exercise, after excluding data that may have been affected by masculinization of the sex ratio at birth, we found that regional levels differed from the commonly assumed 1.05 in several regions.”

Estimated regional reference levels ranged from 1.031 in sub-Saharan Africa to 1.063 in southeastern Asia and eastern Asia, and 1.067 in Oceania.

Alkema regularly collaborates with United Nations agencies to develop statistical models to assess and interpret demographic and population-level health trends globally. For this study, along with Alkema at UMass Amherst, researchers at the National University of Singapore and the United Nations Population Division compiled a database from vital registries, censuses and surveys with 10,835 observations — 16,602 country-years of information from 202 countries. They developed Bayesian statistical methods to estimate the sex ratio at birth for all countries from 1950 to 2017.

“We found that the SRB imbalance in 12 countries since 1970 is associated with 23.1 million fewer female births than expected,” Alkema says.

The majority of the missing female births were in China, with 11.9 million, and India, with 10.6 million. The other countries identified with SRB imbalance were Albania, Armenia, Azerbaijan, Georgia, Hong Kong, Republic of Korea, Montenegro, Taiwan, Tunisia and Vietnam.
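A back-of-the-envelope version of how a "missing female births" figure arises, with invented numbers and none of the study's Bayesian machinery, compares the female births expected from a baseline sex ratio with those actually registered:

```python
# Hypothetical registered births for one country-year (invented numbers).
male_births = 1_060_000
female_births = 950_000

baseline_srb = 1.05  # assumed baseline males per female absent sex-selective practices

# Female births expected given the observed male births and the baseline ratio.
expected_female = male_births / baseline_srb
missing_female = expected_female - female_births

print(f"observed sex ratio at birth: {male_births / female_births:.3f}")
print(f"estimated missing female births: {missing_female:,.0f}")
```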

Story Source:

Materials provided by University of Massachusetts at Amherst. Note: Content may be edited for style and length.

Scientists build a machine to see all possible futures


In the 2018 movie Avengers: Infinity War, a scene featured Dr. Strange looking into 14 million possible futures to search for a single timeline in which the heroes would be victorious. Perhaps he would have had an easier time with help from a quantum computer. A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) and Griffith University in Australia has constructed a prototype quantum device that can generate all possible futures in a simultaneous quantum superposition.

“When we think about the future, we are confronted by a vast array of possibilities,” explains Assistant Professor Mile Gu of NTU Singapore, who led development of the quantum algorithm that underpins the prototype. “These possibilities grow exponentially as we go deeper into the future. For instance, even if we have only two possibilities to choose from each minute, in less than half an hour there are 14 million possible futures. In less than a day, the number exceeds the number of atoms in the universe.” What he and his research group realised, however, was that a quantum computer can examine all possible futures by placing them in a quantum superposition — similar to Schrödinger’s famous cat that is simultaneously alive and dead.
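The arithmetic behind those figures is plain exponential growth; with two options per minute, the count of possible futures doubles every minute:

```latex
2^{24} = 16{,}777{,}216 \;\; (\text{after 24 minutes}),
\qquad
2^{1440} \approx 10^{433} \gg 10^{80} \;\; (\text{after one day, versus roughly } 10^{80} \text{ atoms in the observable universe})
```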

To realise this scheme, they joined forces with the experimental group led by Professor Geoff Pryde at Griffith University. Together, the team implemented a specially devised photonic quantum information processor in which the potential future outcomes of a decision process are represented by the locations of photons — quantum particles of light. They then demonstrated that the state of the quantum device was a superposition of multiple potential futures, weighted by their probability of occurrence.

“The functioning of this device is inspired by the Nobel Laureate Richard Feynman,” says Dr Jayne Thompson, a member of the Singapore team. “When Feynman started studying quantum physics, he realized that when a particle travels from point A to point B, it does not necessarily follow a single path. Instead, it simultaneously traverses all possible paths connecting the points. Our work extends this phenomenon and harnesses it for modelling statistical futures.”

The machine has already demonstrated one application — measuring how much our bias towards a specific choice in the present impacts the future. “Our approach is to synthesise a quantum superposition of all possible futures for each bias,” explains Farzad Ghafari, a member of the experimental team. “By interfering these superpositions with each other, we can completely avoid looking at each possible future individually. In fact, many current artificial intelligence (AI) algorithms learn by seeing how small changes in their behaviour can lead to different future outcomes, so our techniques may enable quantum enhanced AIs to learn the effect of their actions much more efficiently.”

The team notes that while their present prototype simulates at most 16 futures simultaneously, the underlying quantum algorithm can in principle scale without bound. “This is what makes the field so exciting,” says Pryde. “It is very much reminiscent of classical computers in the 1960s. Just as few could imagine the many uses of classical computers in the 1960s, we are still very much in the dark about what quantum computers can do. Each discovery of a new application provides further impetus for their technological development.”

Story Source:

Materials provided by Nanyang Technological University, College of Science. Note: Content may be edited for style and length.

The cost of computation

For decades, physicists have wrestled with understanding the thermodynamic cost of manipulating information, what we would now call computing. How much energy does it take, for example, to erase a single bit from a computer? What about more complicated operations? These are pressing, practical questions, as artificial computers are energy hogs, claiming an estimated four percent of total energy consumed in the United States.
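For the single-bit question there is a standard reference point, Landauer's bound: erasing one bit in an environment at temperature T dissipates at least k_B T ln 2 of energy, a tiny amount at room temperature but a hard floor that real computers exceed by many orders of magnitude.

```latex
% Landauer limit for erasing one bit at temperature T (k_B: Boltzmann constant).
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J}
\qquad (T \approx 300\ \mathrm{K})
```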

These questions are not limited to the digital machines constructed by us. The human brain can be seen as a computer — one that gobbles an estimated 10 to 20 percent of all the calories a person consumes. Living cells, too, can be viewed as computers, but computers that “are many orders of magnitude more efficient” than any laptop or smartphone humans have constructed, says David Wolpert of the Santa Fe Institute.

Wolpert, a mathematician, physicist, and computer scientist, has been on the frontlines of a rapid resurgence of interest in a deep understanding of the energy cost of computing. That research is now hitting its stride, thanks to revolutionary tools recently developed in statistical physics for understanding the thermodynamic behavior of nonequilibrium systems. The reason these tools are so important is that computers are decidedly nonequilibrium systems. (Unplug your laptop and wait for it to reach equilibrium, and then see if it still works.) Although Wolpert primarily approaches these issues using tools from computer science and physics, there is also sharp interest from researchers in other areas, including those who study chemical reactions, cellular biology, and neurobiology.

However, research in nonequilibrium statistical physics largely happens in silos, says Wolpert. In a review published today in the Journal of Physics A, Wolpert collects recent advances in understanding the thermodynamics of computation that are grounded in computer science and physics. The review functions as a sort of state-of-the-science report for a burgeoning interdisciplinary investigation.

“It is basically a snapshot of the current state of the fields, where these ideas are starting to explode, in all directions,” says Wolpert.

In the paper, Wolpert first summarizes the relevant theoretical ideas from physics and computer science. He then discusses what’s known about the entropic cost of a range of computations, from erasing a single bit to running a Turing machine. He goes on to show how breakthroughs in nonequilibrium statistical physics have enabled researchers to more formally probe those cases — moving far beyond simple bit erasure.

Wolpert also touches on the questions raised in this recent research which suggest real-world challenges, like how to design algorithms with energy conservation in mind. Can biological systems, for example, serve as inspiration for designing computers with minimal thermodynamic cost?

“We are being surprised and astonished in many ways,” Wolpert says. In putting together the review, and coediting a book on the topic due out later this year, “we’ve uncovered phenomena that no one has analyzed before that were very natural to us, as we pursue this modern version of the thermodynamics of computation.”

Story Source:

Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.