New math techniques to improve computational efficiency in quantum chemistry

Researchers at Sandia National Laboratories have developed new mathematical techniques to advance the study of molecules at the quantum level.

Mathematical and algorithmic developments along these lines are necessary for enabling the detailed study of complex hydrocarbon molecules that are relevant in engine combustion.

Existing methods to approximate potential energy functions at the quantum scale need too much computer power and are thus limited to small molecules. Sandia researchers say their technique will speed up quantum mechanical computations and improve predictions made by theoretical chemistry models. Given the computational speedup, these methods can potentially be applied to bigger molecules.

Sandia postdoctoral researcher Prashant Rai worked with researchers Khachik Sargsyan and Habib Najm at Sandia’s Combustion Research Facility and collaborated with quantum chemists So Hirata and Matthew Hermes at the University of Illinois at Urbana-Champaign. Computing energy at fewer geometric arrangements than normally required, the team developed computationally efficient methods to approximate potential energy surfaces.

A precise understanding of potential energy surfaces, key elements in virtually all calculations of quantum dynamics, is required to accurately estimate the energy and frequency of vibrational modes of molecules.

“If we can find the energy of the molecule for all possible configurations, we can determine important information, such as stable states of molecular transition structure or intermediate states of molecules in chemical reactions,” Rai said.

Initial results of this research were published in Molecular Physics in an article titled “Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green’s function theory.”

“Approximating potential energy surfaces of bigger molecules is an extremely challenging task due to the exponential increase in information required to describe them with each additional atom in the system,” Rai said. “In mathematics, it is termed the Curse of Dimensionality.”

Beating the curse

The key to beating the curse of dimensionality is to exploit the specific structure of the potential energy surfaces. Rai said this structural information can then be used to approximate the requisite high-dimensional functions.

“We make use of the fact that although potential energy surfaces can be high dimensional, they can be well approximated as a small sum of products of one-dimensional functions. This is known as the low-rank structure, where the rank of the potential energy surface is the number of terms in the sum,” Rai said. “Such an assumption on structure is quite general and has also been used in similar problems in other fields. Mathematically, the intuition of low-rank approximation techniques comes from multilinear algebra where the function is interpreted as a tensor and is decomposed using standard tensor decomposition techniques.”
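
To make the low-rank idea concrete, here is a minimal Python sketch (not the Sandia code): a three-dimensional stand-in for a potential energy surface is written as a rank-2 sum of products of one-dimensional functions, so the surrogate needs only a handful of 1-D factors instead of a full grid of values. The test function, rank and grid are invented for illustration.

```python
import numpy as np

# Hypothetical 3-D stand-in for a potential energy surface (separable by construction):
# f(x, y, z) = exp(-x^2) * cos(y) * z^2 + 0.3 * x * sin(z)
def f(x, y, z):
    return np.exp(-x**2) * np.cos(y) * z**2 + 0.3 * x * np.sin(z)

# A rank-R canonical (CP) format stores only one-dimensional factor functions g_{r,k};
# the full function is recovered as f(x1, ..., xd) ~ sum_r prod_k g_{r,k}(x_k).
factors = [
    (lambda x: np.exp(-x**2), lambda y: np.cos(y),       lambda z: z**2),       # term r = 1
    (lambda x: 0.3 * x,       lambda y: np.ones_like(y), lambda z: np.sin(z)),  # term r = 2
]

def cp_eval(factors, x, y, z):
    """Evaluate the low-rank surrogate: a small sum of products of 1-D functions."""
    return sum(gx(x) * gy(y) * gz(z) for gx, gy, gz in factors)

# Compare on a 21^3 grid: storing the factors costs O(d * n * R) numbers
# instead of the O(n^d) values of the full grid.
grid = np.linspace(-1.0, 1.0, 21)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
print("max abs error of rank-2 surrogate:",
      np.max(np.abs(f(X, Y, Z) - cp_eval(factors, X, Y, Z))))
```

In practice the one-dimensional factors are not known in advance; they are fitted to a limited number of computed energies, which is where the savings over a full grid of quantum-chemistry calculations come from.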

The energy and frequency corrections are formulated as integrals of these high-dimensional energy functions. Approximation in a low-rank format renders these functions easily integrable, since it breaks the integration problem into a sum of products of one- or two-dimensional integrals to which standard integration methods apply.
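
Continuing the toy example above, the sketch below shows why the low-rank format makes these integrals cheap: the integral of a sum of products of one-dimensional functions is just a sum of products of one-dimensional integrals, each handled by standard quadrature. A plain Monte Carlo estimate of the same integral is included for comparison; all numbers here are illustrative, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 factors of the toy "potential energy surface" from the previous sketch,
# integrated over the cube [-1, 1]^3.
factors = [
    (lambda x: np.exp(-x**2), lambda y: np.cos(y),       lambda z: z**2),
    (lambda x: 0.3 * x,       lambda y: np.ones_like(y), lambda z: np.sin(z)),
]

def integrate_1d(g, a=-1.0, b=1.0, n=2001):
    """Standard one-dimensional quadrature (trapezoid rule) on [a, b]."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] = w[-1] = w[0] / 2
    return float(np.sum(w * g(x)))

# Low-rank integration: one cheap 1-D quadrature per factor, then sums of products.
low_rank = sum(integrate_1d(gx) * integrate_1d(gy) * integrate_1d(gz)
               for gx, gy, gz in factors)

# Plain Monte Carlo over the same cube, for comparison (volume of [-1, 1]^3 is 8).
pts = rng.uniform(-1.0, 1.0, size=(100_000, 3))
vals = sum(gx(pts[:, 0]) * gy(pts[:, 1]) * gz(pts[:, 2]) for gx, gy, gz in factors)
monte_carlo = 8.0 * vals.mean()

print(f"low-rank quadrature: {low_rank:.6f}   Monte Carlo (1e5 samples): {monte_carlo:.6f}")
```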

The team tried out their computational methods on small molecules such as water and formaldehyde. Compared to the classical Monte Carlo method, the randomness-based standard workhorse for high-dimensional integration problems, their approach predicted the energy and vibrational frequencies of the water molecule more accurately and was at least 1,000 times more computationally efficient.

Rai said the next step is to further enhance the technique by challenging it with bigger molecules, such as benzene.

“Interdisciplinary studies, such as quantum chemistry and combustion engineering, provide opportunities for cross pollination of ideas, thereby providing a new perspective on problems and their possible solutions,” Rai said. “It is also a step towards using recent advances in data science as a pillar of scientific discovery in future.”

When artificial intelligence evaluates chess champions

The Elo system, which most chess federations use today, ranks players by the results of their games. Although simple and efficient, it overlooks relevant criteria such as the quality of the moves players actually make. To overcome these limitations, Jean-Marc Alliot of the Institut de recherche en informatique de Toulouse (IRIT — CNRS/INP Toulouse/Université Toulouse Paul Sabatier/Université Toulouse Jean Jaurès/Université Toulouse Capitole) demonstrates a new system, published on 24 April 2017 in the International Computer Games Association Journal.

Since the 1970s, the system designed by the Hungarian Arpad Elo has been ranking chess players according to the results of their games. The best players have the highest ratings, and the difference in Elo points between two players predicts the probability of either player winning a given game. If players perform better or worse than predicted, they gain or lose points accordingly. This method does not take into account the quality of the moves played during a game and is therefore unable to reliably rank players who played in different eras. Jean-Marc Alliot proposes a direct ranking of players based on the quality of their actual moves.
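
For reference, the Elo prediction mentioned here follows a simple logistic formula: a player's expected score against an opponent depends only on the rating difference, and ratings are nudged by the gap between actual and expected results. A minimal sketch, with an illustrative K-factor of 20 (the K-factor varies by federation and rating level):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A (1 = win, 0.5 = draw, 0 = loss) under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

def update(rating_a: float, rating_b: float, score_a: float, k: float = 20.0):
    """Adjust both ratings after a game by the gap between actual and expected score."""
    delta = k * (score_a - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta

# A 200-point rating edge corresponds to roughly a 76% expected score ...
print(round(expected_score(2800, 2600), 3))
# ... so an upset loss costs the favourite about 15 rating points here.
print(update(2800, 2600, score_a=0.0))
```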

His system computes the difference between the move actually played and the move that would have been selected by the best chess program to date, Stockfish. Running on the OSIRIM supercomputer[1], this program plays almost perfect moves. Starting with those of Wilhelm Steinitz (1836-1900), all 26,000 games played since then by chess world champions have been processed to create a probabilistic model for each player. For each position, the model estimates the probability of making a mistake and the magnitude of that mistake. These models can then be used to compute the win/draw/loss probability for any given match between two players. The predictions have proven not only to be extremely close to the actual results when the players in question actually faced each other, but also to fare better than those based on Elo scores. The results demonstrate that the level of chess players has been steadily increasing. The current world champion, Magnus Carlsen, tops the list, while Bobby Fischer is third.
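
The sketch below is a heavily simplified, hypothetical stand-in for this kind of move-quality model, not Alliot's actual method: each player is summarised by the frequency and average size of their deviations from the engine's preferred move (in centipawns), and a Monte Carlo simulation with an arbitrary draw margin turns the two error profiles into win/draw/loss probabilities. The synthetic data and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_error_model(centipawn_losses):
    """Summarise a player by mistake frequency and mean mistake size (in centipawns)."""
    losses = np.asarray(centipawn_losses, dtype=float)
    mistakes = losses[losses > 0]
    return len(mistakes) / len(losses), mistakes.mean() if mistakes.size else 0.0

def simulate_match(model_a, model_b, n_games=20_000, moves=40, draw_margin=150.0):
    """Toy win/draw/loss estimate: whoever accumulates less error wins, within a draw margin."""
    def accumulated_error(p_mistake, mean_size):
        made = rng.random((n_games, moves)) < p_mistake
        size = rng.exponential(mean_size, (n_games, moves))
        return (made * size).sum(axis=1)
    diff = accumulated_error(*model_b) - accumulated_error(*model_a)  # positive => A erred less
    return ((diff > draw_margin).mean(),           # P(A wins)
            (np.abs(diff) <= draw_margin).mean(),  # P(draw)
            (diff < -draw_margin).mean())          # P(B wins)

# Invented per-move centipawn losses relative to the engine's choice, for two players.
player_a = fit_error_model(rng.exponential(8.0, 500) * (rng.random(500) < 0.25))
player_b = fit_error_model(rng.exponential(15.0, 500) * (rng.random(500) < 0.40))
print(simulate_match(player_a, player_b))  # (win, draw, loss) probabilities for player A
```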

Under current conditions, this new ranking method cannot immediately replace the ELO system, which is easier to set up and implement. However, increases in computing power will make it possible to extend the new method to an ever-growing pool of players in the near future.

[1] Open Services for Indexing and Research Information in Multimedia contents, one of the platforms of IRIT laboratory.

Story Source:

Materials provided by CNRS. Note: Content may be edited for style and length.

Meteorologist applies biological evolution to forecasting

Weather forecasters rely on statistical models to find and sort patterns in large amounts of data. Still, the weather remains stubbornly difficult to predict because it is constantly changing.

“When we measure the current state of the atmosphere, we are not measuring every point in three-dimensional space,” says Paul Roebber, a meteorologist at the University of Wisconsin-Milwaukee. “We’re interpolating what happens in the in-between.”

To boost accuracy, forecasters don’t rely on just one model. They use “ensemble” modeling, which takes an average of many different weather models. But ensemble modeling isn’t as accurate as it could be unless new data are collected and added. That can be expensive.

So Roebber applied a mathematical equivalent of Charles Darwin’s theory of evolution to the problem. He devised a method in which one computer program sorts through 10,000 others, improving itself over time using strategies such as heredity, mutation and natural selection.

“This was just a pie-in-the-sky idea at first,” says Roebber, a UWM distinguished professor of atmospheric sciences, who has been honing his method for five years. “But in the last year, I’ve gotten $500,000 of funding behind it.”

His forecasting method has outperformed the models used by the National Weather Service. When compared to standard weather prediction modeling, Roebber’s evolutionary methodology performs particularly well on longer-range forecasts and extreme events, when an accurate forecast is needed the most.

Between 30 and 40 percent of the U.S. economy is somehow dependent on weather prediction. So even a small improvement in the accuracy of a forecast could save millions of dollars annually for industries like shipping, utilities, construction and agribusiness.

The trouble with ensemble models is that the data they contain tend to be too similar. That makes it difficult to distinguish relevant variables from irrelevant ones — what statistician Nate Silver calls the “signal” and the “noise.”

How do you gain diversity in the data without collecting more of it? Roebber was inspired by how nature does it.

Nature favors diversity because it foils the possibility of one threat destroying an entire population at once. Darwin observed this in a population of Galapagos Islands finches in 1835. The birds divided into smaller groups, each residing in different locations around the islands. Over time, they adapted to their specific habitat, making each group distinct from the others.

Applying this to weather prediction models, Roebber began by subdividing the existing variables into conditional scenarios: The value of a variable would be set one way under one condition, but be set differently under another condition.

The computer program he created picks out the variables that best accomplish the goal and then recombines them. In terms of weather prediction, that means the “offspring” models improve in accuracy because they block more of the unhelpful attributes.

“One difference between this and biology is, I wanted to force the next generation [of models] to be better in some absolute sense, not just survive,” Roebber said.
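
The toy sketch below is not Roebber's system; it only illustrates the ingredients named above (heredity, mutation, selection, and the requirement that offspring improve in an absolute sense) on an invented problem of evolving weights for combining ensemble forecast members.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented problem: evolve weights for blending ensemble members into one forecast.
n_members, n_cases, pop_size, n_generations = 8, 200, 100, 50
truth = rng.normal(size=n_cases)                                    # synthetic verification
members = truth + rng.normal(scale=1.0, size=(n_members, n_cases))  # noisy ensemble members

def fitness(weights):
    """Negative mean-squared error of the weighted ensemble forecast."""
    forecast = weights @ members / weights.sum()
    return -np.mean((forecast - truth) ** 2)

population = rng.uniform(0.01, 1.0, size=(pop_size, n_members))
best_score = max(fitness(w) for w in population)

for _ in range(n_generations):
    scores = np.array([fitness(w) for w in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]            # natural selection
    kids = []
    while len(kids) < pop_size:
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n_members) < 0.5, a, b)              # heredity (crossover)
        child = np.clip(child + rng.normal(0, 0.05, n_members), 0.01, None)  # mutation
        score = fitness(child)
        if score > best_score:            # force absolute improvement, not mere survival
            best_score = score
            kids.append(child)
        elif rng.random() < 0.2:          # but keep some diversity in the population
            kids.append(child)
    population = np.array(kids)

print(f"evolved blend MSE: {-best_score:.3f}  vs  single-member MSE: "
      f"{np.mean((members[0] - truth) ** 2):.3f}")
```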

He is already using the technique to forecast minimum and maximum temperatures for seven days out.

Roebber often thinks across disciplines in his research. Ten years ago, he was at the forefront of building forecast simulations that were organized like neurons in the brain. From the work, he created an “artificial neural network” tool, now used by the National Weather Service, that significantly improves snowfall prediction.

Big data approach to predict protein structure

Nothing in the body works without proteins; they are the molecular all-rounders in our cells. If they do not work properly, severe diseases such as Alzheimer’s may result. To develop methods to repair malfunctioning proteins, their structure has to be known. Using a big data approach, researchers at Karlsruhe Institute of Technology (KIT) have now developed a method to predict protein structures.

In the Proceedings of the National Academy of Sciences of the United States of America (PNAS), the researchers report that they succeeded in predicting even highly complicated protein structures by statistical analysis alone, independently of experiment. Experimental determination of protein structures is quite cumbersome, and success is not guaranteed. Proteins are the basis of life. As structural proteins, they are involved in the growth of tissue such as nails or hair. Other proteins work as muscles, control metabolism and immune response, or transport oxygen in red blood cells.

The basic structure of proteins with certain functions is similar in different organisms. “No matter whether human being, mouse, whale or bacterium, nature does not constantly invent proteins for various living organisms anew, but varies them by evolutionary mutation and selection,” says Alexander Schug of the Steinbuch Centre for Computing (SCC). Such mutations can be identified easily when reading out the genetic information that encodes the proteins. If mutations occur in pairs, the protein sections involved are usually located close to each other in space. With the help of a computer, the data from many such spatially adjacent sections can be assembled, like a big puzzle, into an exact prediction of the three-dimensional structure. “To understand the function of a protein in detail and to influence it, if possible, the position of every individual atom has to be known,” Schug says.
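
A minimal sketch of the underlying idea (not KIT's method, which applies far more sophisticated statistics at supercomputer scale): score pairs of columns in a multiple sequence alignment by how strongly they co-vary, and treat the top-scoring pairs as candidate spatial contacts. The tiny alignment below is invented; its second and last columns mutate together.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def mutual_information(col_i, col_j):
    """Simple co-variation score between two alignment columns (mutual information)."""
    n = len(col_i)
    pi, pj, pij = Counter(col_i), Counter(col_j), Counter(zip(col_i, col_j))
    return sum((nab / n) * np.log((nab / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), nab in pij.items())

def rank_contact_candidates(alignment, top=3):
    """Rank residue pairs by co-variation across the alignment; high scores suggest contacts."""
    cols = list(zip(*alignment))
    scores = {(i, j): mutual_information(cols[i], cols[j])
              for i, j in combinations(range(len(cols)), 2)}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

# Invented toy alignment: positions 1 and 5 (0-based) always mutate in step,
# hinting that those two residues sit close together in the folded structure.
alignment = ["MKALVE", "MRALVD", "MKGLVE", "MRGLVD", "MKALVE", "MRSLVD"]
print(rank_contact_candidates(alignment))
```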

For his work, the physicist uses an interdisciplinary approach based on methods and resources from computer science and biochemistry. Using supercomputers, he searched the freely available genetic information of thousands of organisms, ranging from bacteria to humans, for correlated mutations. “By combining the latest technology and a true treasure trove of datasets, we studied nearly two thousand different proteins. This is a completely new dimension compared to previous studies,” Schug adds. He emphasizes that this demonstrates the power of the method, which holds high potential for applications ranging from molecular biology to medicine. Although the present work is fundamental research, Schug says, the results may well be incorporated into new methods for treating diseases in the future.

Story Source:

Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.

Method to predict surface ozone pollution levels provides 48-hour heads-up

A novel air quality model will help air quality forecasters predict surface ozone levels up to 48 hours in advance and with fewer resources, according to a team of meteorologists.

The method, called regression in self-organizing map (REGiS), weighs and combines statistical air quality models by pairing them with predicted weather patterns to create probabilistic ozone forecasts. Unlike current chemical transport models, REGiS can predict ozone levels up to 48 hours in advance without requiring significant computational power.

Nikolay Balashov, who recently earned his doctorate in meteorology from Penn State, designed this new method by exploring the relationship between air pollutants and meteorological variables.

Because ozone levels are higher in heavily populated areas, particularly on the West Coast of the U.S., the model helps air quality forecasters and decision-makers alert residents in advance and promotes mitigation methods, such as public transportation, in an effort to avoid conditions conducive to unhealthy ozone level formation.

“If we can predict the level of ozone ahead of time, then it’s possible that we can do something to combat it,” said Balashov. “Ozone needs sunlight but it also needs other precursors to form in the atmosphere, such as chemicals found in vehicle emissions. Reducing vehicle use (on the days when the weather is conducive to the formation of unhealthy ozone concentrations) will reduce the level of emissions that contribute to higher levels of ozone pollution.”

This new tool for air quality forecasters allows for the evaluation of various ozone pollution scenarios and offers insight into which weather patterns may worsen surface ozone pollution episodes. For example, higher surface temperatures, dry conditions and lighter wind speeds tend to lead to higher surface ozone. The researchers published their results in the Journal of Applied Meteorology and Climatology.

Ozone is one of the six common air pollutants identified in the Environmental Protection Agency Clean Air Act. Breathing ozone can trigger a variety of health problems, including COPD, chest pain and coughing, and can worsen bronchitis, emphysema and asthma, according to the EPA. It can also cause long-term lung damage.

Surface ozone is designated as a pollutant, and the EPA recently reduced the maximum daily 8-hour average threshold from 75 to 70 parts per billion by volume. That sparked a greater need for accurate and probabilistic forecasting, said Balashov.

Current models are expensive to run and are often not available in developing nations because they require precise measurements, expertise and computing power. REGiS would still work in countries that lack these resources because it is based on statistics and historical weather and air quality data. The method combines a series of existing statistical approaches to overcome the weaknesses of each, resulting in a whole that is greater than the sum of its parts.

“REGiS shows how relatively simple artificial intelligence methods can be used to piggy-back forecasts of weather-driven phenomena, such as air pollution, on existing and freely available global weather forecasts,” said George Young, professor of meteorology at Penn State and Balashov’s graduate adviser. “The statistical approach taken in REGiS — weather-pattern recognition guiding pattern-specific statistical models — can bring both efficiency and skill advantages in a number of forecasting applications.”
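
As a rough illustration of “weather-pattern recognition guiding pattern-specific statistical models,” the sketch below clusters synthetic daily meteorology into a few patterns (a plain k-means loop as a simple stand-in for the self-organizing map used in REGiS) and fits a separate linear ozone regression in each cluster; a forecast assigns the predicted weather to its nearest pattern and applies that pattern's regression. All data, coefficients and the number of patterns are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "historical" data: daily meteorology (temperature, wind, humidity anomalies)
# and observed peak ozone in ppb. All values are invented for illustration.
n_days = 2000
met = rng.normal(size=(n_days, 3))
ozone = 55 + 12 * met[:, 0] - 6 * met[:, 1] - 4 * met[:, 2] + rng.normal(0, 5, n_days)

def kmeans(X, k, iters=50):
    """Minimal k-means clustering, standing in for the self-organizing map."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

centers, labels = kmeans(met, k=4)

# One linear ozone regression per weather pattern (pattern-specific statistical model).
models = {}
for j in range(4):
    Xj = np.c_[np.ones((labels == j).sum()), met[labels == j]]
    models[j], *_ = np.linalg.lstsq(Xj, ozone[labels == j], rcond=None)

def forecast_ozone(predicted_met):
    """Assign the predicted weather to its nearest pattern, then apply that pattern's model."""
    j = int(np.argmin(((predicted_met - centers) ** 2).sum(-1)))
    return float(np.r_[1.0, predicted_met] @ models[j])

print(f"forecast peak ozone (ppb): {forecast_ozone(np.array([1.5, -0.5, -1.0])):.1f}")
```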

REGiS was evaluated in California’s San Joaquin Valley and in northeastern parts of Colorado, where Balashov tested his method using standard statistical metrics. This past summer, the model was used in the Philadelphia area as an operational air-quality forecasting tool alongside existing models.

During his previous research in South Africa, Balashov first became interested in studying ozone and its relationship with the weather phenomena El Niño and La Niña.

“I became inspired to study ozone because I saw how much of a connection there could be between weather patterns and air pollution,” said Balashov. “I realized there was a really strong relationship and that we could do more to explore this connection between meteorology and air pollution, which can help to make predictions, especially in places that lack sophisticated models. With this method, you can make air quality forecasts in places such as India and China.”

Young and Anne M. Thompson, adjunct professor of meteorology, Penn State, and chief scientist for atmospheric chemistry at NASA/Goddard Space Flight Center and also Balashov’s graduate adviser, were co-authors on the paper.

NASA, through air quality grants, supported this research.

Engine for Likelihood-Free Inference facilitates more effective simulation

The Engine for Likelihood-Free Inference is open to everyone, and it can help significantly reduce the number of simulator runs.

Researchers have succeeded in building an engine for likelihood-free inference, which can be used to make a simulator model reality as accurately as possible. The engine may revolutionise the many fields in which computational simulation is utilised. The development work has resulted in ELFI, an engine for likelihood-free inference that significantly reduces the number of exhausting simulation runs needed to estimate unknown parameters and to which new inference methods can easily be added.

‘Computational research is based in large part on simulation, and fitting simulator parameters to data is of key importance in order for the simulator to describe reality as accurately as possible. The ELFI inference software we have developed makes this previously extremely difficult task as easy as possible: software developers can bring their new inference methods into widespread use with minimal effort, and researchers from other fields can utilise the newest and most effective methods. Open software advances replicability and open science,’ says Samuel Kaski, professor at the Department of Computer Sciences and head of the Finnish Centre of Excellence in Computational Inference Research (COIN).

The software, which is openly available to everyone, is based on likelihood-free Bayesian inference, regarded as one of the most important innovations in statistics in the past decades. The simulator’s output is compared to actual observations, and because of its random nature, simulation runs must be carried out multiple times. The inference software improves the estimation of unknown parameters with, for example, Bayesian optimisation, which significantly reduces the number of necessary simulation runs.
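
The sketch below illustrates the basic likelihood-free idea in plain Python rather than through ELFI's own interface: parameters are drawn from a prior, run through a stochastic simulator, and kept only when the simulated summaries land close to the observed ones, so no likelihood is ever written down. Methods such as Bayesian optimisation replace this brute-force rejection step to cut the number of simulator runs. The simulator, prior and tolerance here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    """Hypothetical stochastic simulator: noisy observations whose mean depends on theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

observed = simulator(theta=2.3)          # pretend these are the real measurements

def summary(data):
    """Summary statistics compared between simulated and observed data."""
    return np.array([data.mean(), data.std()])

def abc_rejection(n_draws=20_000, tolerance=0.2):
    """Likelihood-free rejection sampling: keep parameters whose simulations resemble the data."""
    obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)                               # draw from the prior
        if np.linalg.norm(summary(simulator(theta)) - obs) < tolerance:
            accepted.append(theta)                                   # no likelihood evaluated
    return np.array(accepted)

posterior = abc_rejection()
print(f"accepted {posterior.size} of 20000 simulator runs; "
      f"posterior mean of theta ~ {posterior.mean():.2f}")
```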

Applications from medicine to environmental science

ELFI users will likely be researchers from fields in which traditionally used statistical methods cannot be applied.

‘Simulators can be applied in many fields. For example, a simulation of a disease can take into account how the disease is transmitted to another person, how long it will take for a person to recuperate or not recuperate, how a virus mutates or how many unique virus mutations exist. A number of simulation runs will therefore produce a realistic distribution describing the actual situation,’ Professor Aki Vehtari explains.

The ELFI inference engine is easy to use and scalable, and the inference problem can be easily defined with a graphical model.

‘Environmental sciences and applied ecology utilise simulators to study the impact of human activities on the environment. For example, the Finnish Environment Institute (SYKE) is developing an ecosystem model that will be used to research nutrient cycles in the Archipelago Sea and, for example, the impact of loading caused by agriculture and fisheries on algal blooming. The parametrisation of these models and the assessment of the uncertainties related to their predictions is challenging from a computational standpoint. We will test the ELFI inference engine in these analyses. We hope that parametrisation of the models can be sped up and improved with ELFI, meaning that conclusions will be better reasoned,’ says Assistant Professor Jarno Vanhatalo of environmental statistics research at the University of Helsinki.

Story Source:

Materials provided by Aalto University. Note: Content may be edited for style and length.

Climate change will drive stronger, smaller storms in U.S., new modeling approach forecasts

The effects of climate change will likely cause smaller but stronger storms in the United States, according to a new framework for modeling storm behavior developed at the University of Chicago and Argonne National Laboratory. Though storm intensity is expected to increase over today’s levels, the predicted reduction in storm size may alleviate some fears of widespread severe flooding in the future.

The new approach, published today in the Journal of Climate, uses new statistical methods to identify and track storm features in both observational weather data and new high-resolution climate modeling simulations. When applied to one simulation of the future effects of elevated atmospheric carbon dioxide, the framework helped clarify a common discrepancy in model forecasts of precipitation changes.

“Climate models all predict that storms will grow significantly more intense in the future, but that total precipitation will increase more mildly over what we see today,” said senior author Elisabeth Moyer, associate professor of geophysical sciences at the University of Chicago and co-PI of the Center for Robust Decision-Making on Climate and Energy Policy (RDCEP). “By developing new statistical methods that study the properties of individual rainstorms, we were able to detect changes in storm frequency, size, and duration that explain this mismatch.”

While many concerns about the global impact of climate change focus on increased temperatures, shifts in precipitation patterns could also incur severe social, economic, and human costs. Increased droughts in some regions and increased flooding in others would dramatically affect world food and water supplies, as well as place extreme strain on infrastructure and government services.

Most climate models agree that high levels of atmospheric carbon will increase precipitation intensity, by an average of approximately 6 percent per degree temperature rise. These models also predict an increase in total precipitation; however, this growth is smaller, only 1 to 2 percent per degree temperature rise.

The changes in storm behavior that might explain this gap have remained elusive. In the past, climate simulations were too coarse in resolution (hundreds of kilometers) to accurately capture individual rainstorms. More recently, high-resolution simulations have begun to approach the weather scale, but analytic approaches had not evolved to make use of that information and evaluated only aggregate shifts in precipitation patterns instead of individual storms.

To address this discrepancy, postdoctoral scholar Won Chang (now an assistant professor at the University of Cincinnati) and co-authors Michael Stein, Jiali Wang, V. Rao Kotamarthi, and Moyer developed new methods to analyze rainstorms in observational data or high-resolution model projections. First, the team adapted morphological approaches from computational image analysis to develop new statistical algorithms for detecting and analyzing individual rainstorms over space and time. The researchers then analyzed the results of new ultra-high-resolution (12 km) simulations of U.S. climate performed with the Weather Research and Forecasting Model (WRF) at Argonne National Laboratory.
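
A simplified illustration of the storm-identification step (not the authors' algorithm, which also tracks features through time): threshold a precipitation field, label the connected rainy regions, and report each feature's area and intensity. The synthetic field, threshold and 12-km grid spacing below are illustrative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# Synthetic hourly precipitation field (mm/hr) on a 2-D grid with two injected wet blobs.
field = rng.gamma(shape=0.3, scale=2.0, size=(200, 200))
field[60:90, 40:80] += 8.0       # invented "storm" features for illustration
field[140:155, 120:170] += 12.0

# Morphological storm identification: threshold, then label connected rainy regions.
rain_mask = field > 2.0                        # wet/dry threshold (illustrative)
labels, n_features = ndimage.label(rain_mask)  # connected-component labeling

cell_area_km2 = 12 * 12                        # 12-km grid cells, as in the WRF runs
for storm_id in range(1, n_features + 1):
    cells = labels == storm_id
    if cells.sum() < 20:                       # ignore tiny speckle features
        continue
    print(f"storm {storm_id}: area = {cells.sum() * cell_area_km2} km^2, "
          f"mean intensity = {field[cells].mean():.1f} mm/hr, "
          f"peak = {field[cells].max():.1f} mm/hr")
```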

Analyzing simulations of precipitation in the present (2002-2011) and future (years 2085-2094), the researchers detected changes in storm features that explained why the stronger storms predicted didn’t increase overall rainfall as much as expected. Individual storms become smaller in terms of the land area covered, especially in the summer. (In winter, storms become smaller as well, but also less frequent and shorter.)

“It’s an exciting time when climate models are starting to look more like weather models,” Chang said. “We hope that these new methods become the standard for model evaluation going forward.”

The team also found several important differences between model output and present-day weather. The model tended to predict storms that were both weaker and larger than those actually observed, and in winter, model-forecast storms were also fewer and longer-lived than observed. Assessing these model “biases” is critical for making reliable forecasts of future storms.

“While our results apply to only one model simulation,” Moyer said, “we do know that the amount-intensity discrepancy is driven by pretty basic physics. Rainstorms in every model, and in the real world, will adjust in some way to let intensity grow by more than total rainfall does. Most people would have guessed that storms would change in frequency, not in size. We now have the tools at hand to evaluate these results across models and to check them against real-world changes, as well as to evaluate the performance of the models themselves.”

New precipitation forecasts that include these changes in storm characteristics will add important details that help assess future flood risk under climate change. These results suggest that concerns about higher-intensity storms causing severe floods may be tempered by reductions in storm size, and that the tools developed at UChicago and Argonne can help further clarify future risk.

The paper, “Changes in spatio-temporal precipitation patterns in changing climate conditions,” will appear in the Dec. 1 edition of the Journal of Climate, at http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-15-0844.1.

This work was conducted as part of the Research Network for Statistical Methods for Atmospheric and Oceanic Sciences (STATMOS), supported by NSF awards 1106862, 1106974, and 1107046, and the Center for Robust Decision-making on Climate and Energy Policy (RDCEP), supported by the NSF “Decision Making under Uncertainty” program award 0951576.

Big data for chemistry: New method helps identify antibiotics in mass spectrometry datasets

An international team of computer scientists has for the first time developed a method to find antibiotics hidden in huge but still unexplored mass spectrometry datasets. They detailed their new method, called DEREPLICATOR, in the Oct. 31 issue of Nature Chemical Biology.

Each year more than 2 million people in the United States develop antibiotic-resistant infections, and the researchers hope their work will help identify new antibiotics to treat diseases effectively.

“This is the first time that we are using Big Data to look into microbial chemistry and characterize antibiotics and other drug candidates,” said Hosein Mohimani, a computer scientist at the University of California San Diego and the paper’s first author. “Although proteomics researchers have been routinely using huge spectral datasets to find important peptides, all traditional proteomics tools fail when it comes to new drug discovery.”

The algorithms the researchers developed scour mass spectrometry data to discover so-called peptidic natural products (PNPs) — widely used bioactive compounds that include many antibiotics.

Mass spectrometry allows researchers to identify the chemical structure of a substance by separating its ions according to their mass and charge. By running mass spectrometry data against a database of chemical structures of known antibiotics, the researchers were able to detect known compounds in substances that had never been analyzed before.

This is the first time that this kind of Big Data analysis has been possible. The researchers got around the well-known issue of false positives by using statistical analysis to determine the significance of each match between spectra and the antibiotics database. “We got the idea from particle physics,” Mohimani said. The researchers used a statistical approach called Markov chain Monte Carlo to compute the probability of rare events and to throw out false positives.
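
As a simplified illustration of the significance step (not the DEREPLICATOR algorithm itself, which uses a Markov chain Monte Carlo strategy to reach far smaller p-values than brute-force sampling can): score a spectrum against a database entry by counting shared fragment masses, then ask how often random decoy entries score at least as well. The spectra, mass tolerance and decoy scheme are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def shared_peak_score(spectrum, reference, tol=0.02):
    """Count reference fragment masses matched by a peak in the query spectrum."""
    spectrum = np.sort(np.asarray(spectrum))
    hits = 0
    for mass in reference:
        idx = np.searchsorted(spectrum, mass)
        neighbors = spectrum[max(idx - 1, 0): idx + 1]
        if neighbors.size and np.min(np.abs(neighbors - mass)) <= tol:
            hits += 1
    return hits

def empirical_p_value(spectrum, reference, n_decoys=5000, mass_range=(100.0, 1500.0)):
    """Significance of a match: how often do random decoy references score at least as well?"""
    observed = shared_peak_score(spectrum, reference)
    decoy_scores = [
        shared_peak_score(spectrum, rng.uniform(*mass_range, size=len(reference)))
        for _ in range(n_decoys)
    ]
    return observed, (1 + sum(s >= observed for s in decoy_scores)) / (n_decoys + 1)

# Invented query spectrum sharing most fragment masses with a database entry, plus noise peaks.
reference = np.array([147.1, 262.2, 375.3, 488.3, 603.4, 716.5, 829.5])
query = np.concatenate([reference[:6] + rng.normal(0, 0.005, 6),
                        rng.uniform(100, 1500, 40)])
score, p = empirical_p_value(query, reference)
print(f"shared peaks: {score}/{len(reference)}, empirical p-value ~ {p:.4f}")
```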

The researchers also were able to discover new variants of known antibiotics. They did so by first predicting the fragmentation pattern of a chemical structure using chemical expertise and machine learning. They then compared these predictions against experimental data and looked for patterns. The problem resembles guessing the meaning of a sentence in a foreign language by recognizing a few of the words.

A global network for mass spectrometry data

Researchers have made breakthroughs recently in antibiotics discovery, but PNPs have remained difficult to find. That’s because they’re more complex than most peptides and are built from hundreds of non-standard amino acids, rather than the standard 20. As a result, standard peptide identification tools such as SEQUEST (the workhorse of modern proteomics) do not work to identify PNPs.

The recent launch of the Global Natural Product Social (GNPS) molecular network in 2015 brought together over a hundred laboratories that have already generated an unprecedented amount of mass spectra, including spectra of antibiotics. But to go from PNP discovery in an academic setting to a high-throughput technology, new algorithms for antibiotics discovery are needed. Indeed, although the spectra in the GNPS molecular network represent a gold mine for future discoveries, their interpretation remains a bottleneck. The network was developed by Nuno Bandeira, a computer science professor at the Jacobs School of Engineering, and study co-author Pieter Dorrestein, a professor in the UC San Diego School of Medicine and Skaggs School of Pharmacy and Pharmaceutical Sciences.

Finding complex peptides

Antibiotics researchers use dereplication strategies that identify known PNPs and discover their still-unknown variants by comparing billions of spectra with a database of all known PNPs. DEREPLICATOR promises to become an equivalent of SEQUEST for antibiotics discovery and, like SEQUEST, to enable high-throughput PNP identification. Even in its first application, it identified an order of magnitude more PNPs than any previous dereplication effort.

The study was made possible by the bioinformatics expertise of the research group of Professor Pavel Pevzner in the Department of Computer Science and Engineering at UC San Diego, which developed viable methods to sequence bacteria and metagenomes. The group is now adapting these methods to discover the metabolites they produce. In collaboration with Anton Korobeynikov and Alexander Shlemov at Saint Petersburg State University, the researchers are planning to speed up the method and apply it to discovering novel antibiotics from metagenomes.

Story Source:

Materials provided by University of California, San Diego. Note: Content may be edited for style and length.