Uncovering decades of questionable investments

One of the key principles in asset pricing — how we value everything from stocks and bonds to real estate — is that investments with high risk should, on average, have high returns.

“If you take a lot of risk, you should expect to earn more for it,” said Scott Murray, professor of finance at Georgia State University. “To go deeper, the theory says that systematic risk, or risk that is common to all investments” — also known as ‘beta’ — “is the kind of risk that investors should care about.”

This theory was first articulated in the 1960s by Sharpe (1964), Lintner (1965), and Mossin (1966). However, empirical work dating as far back as 1972 didn’t support the theory. In fact, many researchers found that stocks with high risk often do not deliver higher returns, even in the long run.

“It’s the foundational theory of asset pricing but has little empirical support in the data. So, in a sense, it’s the big question,” Murray said.

Isolating the Cause

In a recent paper in the Journal of Financial and Quantitative Analysis, Murray and his co-authors Turan Bali (Georgetown University), Stephen Brown (Monash University) and Yi Tang (Fordham University), argue that the reason for this ‘beta anomaly’ lies in the fact that stocks with high betas also happen to have lottery-like properties — that is, they offer the possibility of becoming big winners. Investors who are attracted to the lottery characteristics of these stocks push their prices higher than theory would predict, thereby lowering their future returns.

To support this hypothesis, they analyzed stock prices from June 1963 to December 2012. For every month, they calculated the beta of each stock (up to 5,000 stocks per month) by running a regression — a statistical way of estimating the relationships among variables — of the stock’s return on the return of the market portfolio. They then sorted the stocks into 10 groups based on their betas and examined the performance of stocks in the different groups.
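For readers who want the mechanics, here is a minimal sketch of this kind of beta estimation and decile sort in Python. It assumes a pandas panel of monthly stock and market returns; the column names and data layout are hypothetical, not the authors’ actual code.

```python
# Minimal sketch of the beta-sorting step described above (not the authors' exact code).
# Assumes a pandas DataFrame `panel` with hypothetical columns:
#   'permno' (stock id), 'month', 'ret' (stock excess return), 'mkt' (market excess return).
import numpy as np
import pandas as pd

def estimate_beta(group: pd.DataFrame) -> float:
    """OLS slope of the stock's return on the market return."""
    x, y = group['mkt'].values, group['ret'].values
    x = np.column_stack([np.ones_like(x), x])        # add intercept
    slope = np.linalg.lstsq(x, y, rcond=None)[0][1]  # regression coefficient on the market
    return slope

def decile_portfolios(panel: pd.DataFrame) -> pd.Series:
    """Estimate each stock's beta, then sort stocks into 10 beta groups."""
    betas = panel.groupby('permno').apply(estimate_beta).rename('beta')
    deciles = pd.qcut(betas, 10, labels=False) + 1   # 1 = lowest beta, 10 = highest
    return deciles
```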

“The theory predicts that stocks with high betas do better in the long run than stocks with low betas,” Murray said. “Doing our analysis, we find that there really isn’t a difference in the performance of stocks with different betas.”

They then returned to the data and, for each stock and month, calculated how lottery-like the stock was. Once again, they sorted the stocks into 10 groups based on their betas and repeated the analysis. This time, however, they implemented a constraint that required each of the 10 groups to have stocks with similar lottery characteristics. By making sure the stocks in each group had the same lottery properties, they controlled for the possibility that their failure to detect a performance difference in the original tests arose because the stocks in different beta groups have different lottery characteristics.
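The control can be pictured as a conditional double sort. The sketch below illustrates the idea under the same hypothetical data layout as above; the paper’s actual procedure may differ in its details.

```python
# Sketch of a conditional double sort (an illustration of the control described above,
# not the paper's exact procedure). `df` holds one month of data with hypothetical
# columns 'beta' and 'lottery' (a lottery-likeness measure) for each stock.
import pandas as pd

def beta_deciles_controlling_for_lottery(df: pd.DataFrame) -> pd.Series:
    # First group stocks by their lottery characteristic...
    lottery_group = pd.qcut(df['lottery'], 10, labels=False)
    # ...then form beta deciles *within* each lottery group, so every final
    # beta decile contains stocks with a similar mix of lottery properties.
    return df.groupby(lottery_group)['beta'].transform(
        lambda b: pd.qcut(b, 10, labels=False) + 1
    )
```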

“We found that after controlling for lottery characteristics, the seminal theory is empirically supported,” Murray said.

In other words: price pressure from investors who want lottery-like stocks is what causes the theory to fail. When this factor is removed, asset pricing works according to theory.

Identifying the Source

Other economists had pointed to a different factor — leverage constraints — as the main cause of this market anomaly. They believed that large investors such as mutual funds and pension funds, which are not allowed to borrow money to buy large amounts of lower-risk stocks, are forced to buy higher-risk ones to generate large profits, thus distorting the market.

However, an additional analysis of the data by Murray and his collaborators found that the lottery-like stocks were most often held by individual investors. If leverage constraints were the cause of the beta anomaly, mutual funds and pensions would be the main owners driving up demand.

The team’s research won the prestigious Jack Treynor Prize, given each year by the Q Group, which recognizes superior academic working papers with potential applications in the fields of investment management and financial markets.

The work is in line with ideas like prospect theory, first articulated by the Nobel Prize-winning behavioral economist Daniel Kahneman, which contends that investors typically overestimate the probability of extreme events — both losses and gains.

“The study helps investors understand how they can avoid the pitfalls if they want to generate returns by taking on more risk,” Murray said.

To run the systematic analyses of the large financial datasets, Murray used the Wrangler supercomputer at the Texas Advanced Computing Center (TACC). Supported by a grant from the National Science Foundation, Wrangler was built to enable data-driven research nationwide. Using Wrangler significantly reduced the time-to-solution for Murray.

“If there are 500 months in the sample, I can send one month to one core, another month to another core, and instead of computing 500 months separately, I can do them in parallel and have reduced the human time by many orders of magnitude,” he said.
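In Python, that per-month parallelism can be sketched with the standard multiprocessing pool; `analyze_month` here is a hypothetical stand-in for the regression and sorting work done for a single month.

```python
# Sketch of the per-month parallelism Murray describes, using Python's standard
# multiprocessing pool. `analyze_month` is a hypothetical placeholder for the
# cross-sectional work done for one sample month.
from multiprocessing import Pool

def analyze_month(month):
    # ... run the regressions and portfolio sorts for this month ...
    return month, "result"

if __name__ == "__main__":
    months = range(500)                  # ~500 sample months
    with Pool() as pool:                 # one worker per available core
        results = pool.map(analyze_month, months)
```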

The size of the dataset for the lottery-effect research was not enormous and could have been handled on a desktop computer or small cluster (albeit taking more time). However, other problems that Murray is working on — for instance, research on options — have much higher computational requirements and call for supercomputers like those at TACC.

“We’re living in the big data world,” he said. “People are trying to grapple with this in financial economics as they are in every other field and we’re just scratching the surface. This is something that’s going to grow more and more as the data becomes more refined and technologies such as text processing become more prevalent.”

Though historically used for problems in physics, chemistry and engineering, advanced computing is starting to be widely used — and to have a big impact — in economics and the social sciences.

According to Chris Jordan, manager of the Data Management & Collections group at TACC, Murray’s research is a great example of the kinds of challenges Wrangler was designed to address.

“It relies on database technology that isn’t typically available in high-performance computing environments, and it requires extremely high-performance I/O capabilities. It is able to take advantage of both our specialized software environment and the half-petabyte flash storage tier to generate results that would be difficult or impossible on other systems,” Jordan said. “Dr. Murray’s work also relies on a corpus of data which acts as a long-term resource in and of itself — a notion we have been trying to promote with Wrangler.”

Beyond its importance to investors and financial theorists, the research has a broad societal impact, Murray contends.

“For our society to be as prosperous as possible, we need to allocate our resources efficiently. How much oil do we use? How many houses do we build? A large part of that is understanding how and why money gets invested in certain things,” he explained. “The objective of this line of research is to understand the trade-offs that investors consider when making these sorts of decisions.”

Machine learning predicts new details of geothermal heat flux beneath the Greenland Ice Sheet

A paper appearing in Geophysical Research Letters uses machine learning to craft an improved model for understanding geothermal heat flux — heat emanating from the Earth’s interior — below the Greenland Ice Sheet. It’s a research approach new to glaciology that could lead to more accurate predictions for ice-mass loss and global sea-level rise.

Among the key findings:

Greenland has an anomalously high heat flux in a relatively large northern region spreading from the interior to the east and west.

Southern Greenland has relatively low geothermal heat flux, corresponding with the extent of the North Atlantic Craton, a stable portion of one of the oldest extant continental crusts on the planet.

The research model predicts slightly elevated heat flux upstream of several fast-flowing glaciers in Greenland, including Jakobshavn Isbræ in the central-west, the fastest moving glacier on Earth.

“Heat that comes up from the interior of the Earth contributes to the amount of melt on the bottom of the ice sheet — so it’s extremely important to understand the pattern of that heat and how it’s distributed at the bottom of the ice sheet,” said Soroush Rezvanbehbahani, a doctoral student in geology at the University of Kansas who spearheaded the research. “When we walk on a slope that’s wet, we’re more likely to slip. It’s the same idea with ice — when it isn’t frozen, it’s more likely to slide into the ocean. But we don’t have an easy way to measure geothermal heat flux except for extremely expensive field campaigns that drill through the ice sheet. Instead of expensive field surveys, we try to do this through statistical methods.”

Rezvanbehbahani and his colleagues have adopted machine learning — a type of artificial intelligence using statistical techniques and computer algorithms — to predict heat flux values that would be daunting to obtain in the same detail via conventional ice cores.

Using all available geologic, tectonic and geothermal heat flux data for Greenland — along with geothermal heat flux data from around the globe — the team deployed a machine learning approach that predicts geothermal heat flux values under the ice sheet throughout Greenland based on 22 geologic variables such as bedrock topography, crustal thickness, magnetic anomalies, rock types and proximity to features like trenches, ridges, young rifts, volcanoes and hot spots.
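As an illustration of this kind of supervised-regression workflow, here is a short sketch assuming a tree-based regressor from scikit-learn and hypothetical file and column names; the paper’s actual algorithm and data pipeline may differ.

```python
# Illustrative sketch of predicting heat flux from geologic features with a
# tree-based regressor (scikit-learn). The study's actual algorithm, features,
# and data files differ; every name here is hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Rows = locations with known geothermal heat flux; columns = geologic variables
# such as crustal thickness, magnetic anomaly, distance to nearest volcano, ...
train = pd.read_csv("global_heat_flux_sites.csv")       # hypothetical file
X, y = train.drop(columns="heat_flux"), train["heat_flux"]

model = GradientBoostingRegressor().fit(X, y)

# Apply the learned relationships to a grid of points covering Greenland,
# described by the same feature columns.
greenland = pd.read_csv("greenland_grid_features.csv")  # hypothetical file
predictions = model.predict(greenland)
```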

“We have a lot of data points from around the Earth — we know in certain parts of the world the crust is a certain thickness, composed of a specific kind of rock and located a known distance from a volcano — and we take those relationships and apply them to what we know about Greenland,” said co-author Leigh Stearns, associate professor of geology at KU.

The researchers said their new predictive model is a “definite improvement” over current models of geothermal heat flux that don’t incorporate as many variables. Indeed, many numerical ice sheet models of Greenland assume that a single uniform value of geothermal heat flux applies everywhere across Greenland.

“Most other models really only honor one particular data set,” Stearns said. “They look at geothermal heat flux through seismic signals or magnetic data in Greenland, but not crustal thickness or rock type or distance from a hot spot. But we know those are related to geothermal heat flux. We try to incorporate as many geologic data sets as we can rather than assuming one is the most important.”

In addition to Rezvanbehbahani and Stearns, the research team behind the new paper includes KU’s J. Doug Walker and C.J. van der Veen, as well as Amir Kadivar of McGill University. Rezvanbehbahani and Stearns also are affiliated with the Center for the Remote Sensing of Ice Sheets, headquartered at KU.

The authors found the five most important geologic features in predicting geothermal flux values are topography, distance to young rifts, distance to trench, depth of the lithosphere-asthenosphere boundary (layers of the Earth’s mantle) and depth to the Mohorovičić discontinuity (the boundary between the Earth’s crust and mantle). The researchers said their geothermal heat flux map of Greenland is expected to be within about 15 percent of true values.

“The most interesting finding is the sharp contrast between the south and the north of Greenland,” said Rezvanbehbahani. “We had little information in the south, but we had three or four more cores in the northern part of the ice sheet. Based on the southern core we thought this was a localized low heat-flux region — but our model shows that a much larger part of the southern ice sheet has low heat flux. By contrast, in the northern regions, we found large areas with high geothermal heat flux. This isn’t as surprising because we have one ice core with a very high reading. But the spatial pattern and how the heat flux is distributed, that was a new finding. It’s not just one northern location with high heat flux, but a wide region.”

The investigators said their model will become even more accurate as more information on Greenland is compiled by the research community.

“We give the slight disclaimer that this is just another model — it’s our best statistical model — but we have not reproduced reality,” said Stearns. “In Earth science and glaciology, we’re seeing an explosion of publicly available data. Machine learning technology that synthesizes this data and helps us learn from the whole range of data sensors is becoming increasingly important. It’s exciting to be at the forefront.”

New technique allows rapid screening for new types of solar cells

The worldwide quest by researchers to find better, more efficient materials for tomorrow’s solar panels is usually slow and painstaking. Researchers typically must produce lab samples — often composed of multiple layers of different materials bonded together — for extensive testing.

Now, a team at MIT and other institutions has come up with a way to bypass such expensive and time-consuming fabrication and testing, allowing for a rapid screening of far more variations than would be practical through the traditional approach.

The new process could not only speed up the search for new formulations, but also do a more accurate job of predicting their performance, explains Rachel Kurchin, an MIT graduate student and co-author of a paper describing the new process that appears this week in the journal Joule. Traditional methods “often require you to make a specialized sample, but that differs from an actual cell and may not be fully representative” of a real solar cell’s performance, she says.

For example, typical testing methods show the behavior of the “majority carriers,” the predominant particles or vacancies whose movement produces an electric current through a material. But in the case of photovoltaic (PV) materials, Kurchin explains, it is actually the minority carriers — those that are far less abundant in the material — that are the limiting factor in a device’s overall efficiency, and those are much more difficult to measure. In addition, typical procedures only measure the flow of current in one set of directions — within the plane of a thin-film material — whereas it’s up-down flow that is actually harnessed in a working solar cell. In many materials, that flow can be “drastically different,” making it critical to understand in order to properly characterize the material, she says.

“Historically, the rate of new materials development is slow — typically 10 to 25 years,” says Tonio Buonassisi, an associate professor of mechanical engineering at MIT and senior author of the paper. “One of the things that makes the process slow is the long time it takes to troubleshoot early-stage prototype devices,” he says. “Performing characterization takes time — sometimes weeks or months — and the measurements do not always have the necessary sensitivity to determine the root cause of any problems.”

So, Buonassisi says, “the bottom line is, if we want to accelerate the pace of new materials development, it is imperative that we figure out faster and more accurate ways to troubleshoot our early-stage materials and prototype devices.” And that’s what the team has now accomplished. They have developed a set of tools that can be used to make accurate, rapid assessments of proposed materials, using a series of relatively simple lab tests combined with computer modeling of the physical properties of the material itself, as well as additional modeling based on a statistical method known as Bayesian inference.

The system involves making a simple test device, then measuring its current output under different levels of illumination and different voltages, to quantify exactly how the performance varies under these changing conditions. These values are then used to refine the statistical model.

“After we acquire many current-voltage measurements [of the sample] at different temperatures and illumination intensities, we need to figure out what combination of materials and interface variables make the best fit with our set of measurements,” Buonassisi explains. “Representing each parameter as a probability distribution allows us to account for experimental uncertainty, and it also allows us to suss out which parameters are covarying.”

The Bayesian inference process allows the estimates of each parameter to be updated with each new measurement, gradually refining the estimates and homing in ever closer to the precise answer, he says.
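Conceptually, each update reweights a distribution over candidate parameter values by how well the simulated device output matches the new measurement. Below is a minimal grid-based sketch of that idea, with a hypothetical forward model `simulate_current` standing in for the physics-based device simulation.

```python
# Minimal sketch of one Bayesian update over a single device parameter.
# `simulate_current(param, voltage, illumination)` is a hypothetical stand-in
# for the physical device model; `sigma` is the assumed measurement noise.
import numpy as np

def update_posterior(prior, param_grid, voltage, illum, measured, sigma, simulate_current):
    """Reweight each candidate parameter value by how well its simulated
    current matches the newly measured current."""
    predicted = np.array([simulate_current(p, voltage, illum) for p in param_grid])
    likelihood = np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()   # normalize so the weights remain a distribution

# Each new current-voltage measurement refines the distribution:
#   posterior = update_posterior(posterior, grid, V, L, I_measured, noise, simulate_current)
```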

In seeking a combination of materials for a particular kind of application, Kurchin says, “we put in all these materials properties and interface properties, and it will tell you what the output will look like.”

The system is simple enough that, even for materials that have been less well-characterized in the lab, “we’re still able to run this without tremendous computer overhead.” And, Kurchin says, making use of computational tools to screen possible materials will be increasingly useful because “lab equipment has gotten more expensive, and computers have gotten cheaper. This method allows you to minimize your use of complicated lab equipment.”

The basic methodology, Buonassisi says, could be applied to a wide variety of different materials evaluations, not just solar cells — in fact, it may apply to any system that involves a computer model for the output of an experimental measurement. “For example, this approach excels in figuring out which material or interface property might be limiting performance, even for complex stacks of materials like batteries, thermoelectric devices, or composites used in tennis shoes or airplane wings.” And, he adds, “It is especially useful for early-stage research, where many things might be going wrong at once.”

Going forward, he says, “our vision is to link up this fast characterization method with the faster materials and device synthesis methods we’ve developed in our lab.” Ultimately, he says, “I’m very hopeful the combination of high-throughput computing, automation, and machine learning will help us accelerate the rate of novel materials development by more than a factor of five. This could be transformative, bringing the timelines for new materials-science discoveries down from 20 years to about three to five years.”

How the brain recognizes what the eye sees

If you think self-driving cars can’t get here soon enough, you’re not alone. But programming computers to recognize objects is very technically challenging, especially since scientists don’t fully understand how our own brains do it.

Now, Salk Institute researchers have analyzed how neurons in a critical part of the brain, called V2, respond to natural scenes, providing a better understanding of vision processing. The work is described in Nature Communications on June 8, 2017.

“Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general,” says Tatyana Sharpee, an associate professor in Salk’s Computational Neurobiology Laboratory and senior author of the paper. “Much of our brain is composed of a repeated computational unit, called a cortical column. In vision especially we can control inputs to the brain with exquisite precision, which makes it possible to quantitatively analyze how signals are transformed in the brain.”

Although we often take the ability to see for granted, this ability derives from sets of complex mathematical transformations that we are not yet able to reproduce in a computer, according to Sharpee. In fact, more than a third of our brain is devoted exclusively to the task of parsing visual scenes.

Our visual perception starts in the eye with light and dark pixels. These signals are sent to the back of the brain, to an area called V1, where they are transformed to correspond to edges in the visual scene. Somehow, as a result of several subsequent transformations of this information, we can then recognize faces, cars and other objects, and whether they are moving. How precisely this recognition happens is still a mystery, in part because the neurons that encode objects respond in complicated ways.

Now, Sharpee and Ryan Rowekamp, a postdoctoral research associate in Sharpee’s group, have developed a statistical method that takes these complex responses and describes them in interpretable ways, which could be used to help decode vision for computer vision systems. To develop their model, the team used publicly available data showing brain responses of primates watching movies of natural scenes (such as forest landscapes) from the Collaborative Research in Computational Neuroscience (CRCNS) database.

“We applied our new statistical technique in order to figure out what features in the movie were causing V2 neurons to change their responses,” says Rowekamp. “Interestingly, we found that V2 neurons were responding to combinations of edges.”

The team revealed that V2 neurons process visual information according to three principles. First, they combine edges that have similar orientations, increasing the robustness of perception to small changes in the position of curves that form object boundaries. Second, if a neuron is activated by an edge of a particular orientation and position, then the orientation 90 degrees from that will be suppressive at the same location, a combination termed “cross-orientation suppression.” These cross-oriented edge combinations are assembled in various ways to allow us to detect various visual shapes, and the team found that cross-orientation was essential for accurate shape detection. The third principle is that relevant patterns are repeated in space in ways that can help us perceive textured surfaces, such as trees or water, and the boundaries between them, as in impressionist paintings.

The researchers incorporated the three organizing principles into a model they named the Quadratic Convolutional model, which can be applied to other sets of experimental data. Visual processing is likely to be similar to how the brain processes smells, touch or sounds, the researchers say, so the work could elucidate the processing of data from these areas as well.
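To make the flavor of such models concrete, here is a generic second-order (quadratic) stimulus-response model of the kind these analyses build on; the published Quadratic Convolutional model adds further structure (such as convolutional weight sharing) that is not shown in this sketch.

```python
# Generic second-order (quadratic) stimulus-response model, as an illustration
# only; the authors' Quadratic Convolutional model differs in its details.
import numpy as np

def quadratic_response(stimulus, w, J, bias):
    """Predicted firing rate from linear (w) and quadratic (J) stimulus features."""
    s = stimulus.ravel()
    drive = w @ s + s @ J @ s + bias   # linear term plus pairwise (edge-combination) term
    return np.log1p(np.exp(drive))     # softplus nonlinearity keeps rates non-negative
```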

“Models I had worked on before this weren’t entirely compatible with the data, or weren’t cleanly compatible,” says Rowekamp. “So it was really satisfying when the idea of combining edge recognition with sensitivity to texture started to pay off as a tool to analyze and understand complex visual data.”

But the more immediate application might be to improve object-recognition algorithms for self-driving cars or other robotic devices. “It seems that every time we add elements of computation that are found in the brain to computer-vision algorithms, their performance improves,” says Sharpee.

Story Source:

Materials provided by Salk Institute. Note: Content may be edited for style and length.

Statistical safeguards in data analysis and visualization software

Modern data visualization software makes it easy for users to explore large datasets in search of interesting correlations and new discoveries. But that ease of use — the ability to ask question after question of a dataset with just a few mouse clicks — comes with a serious pitfall: it increases the likelihood of making false discoveries.

At issue is what statisticians refer to as “multiple hypothesis error.” The problem is essentially this: the more questions someone asks of a dataset, the more likely they are to stumble upon something that looks like a real discovery but is actually just a random fluctuation in the data.

A team of researchers from Brown University is working on software to help combat that problem. This week at the SIGMOD 2017 conference in Chicago, they presented a new system called QUDE, which adds real-time statistical safeguards to interactive data exploration systems to help reduce false discoveries.

“More and more people are using data exploration software like Tableau and Spark, but most of those users aren’t experts in statistics or machine learning,” said Tim Kraska, an assistant professor of computer science at Brown and a co-author of the research. “There are a lot of statistical mistakes you can make, so we’re developing techniques that help people avoid them.”

Multiple hypothesis testing error is a well-known issue in statistics. In the era of big data and interactive data exploration, the issue has taken on renewed prominence, Kraska says.

“These tools make it so easy to query data,” he said. “You can easily test 100 hypotheses in an hour using these visualization tools. Without correcting for multiple hypothesis error, the chances are very good that you’re going to come across a correlation that’s completely bogus.”

There are well-known statistical techniques for dealing with the problem. Most of them involve adjusting the level of statistical significance required to validate a particular hypothesis based on how many hypotheses have been tested in total. As the number of hypothesis tests increases, the threshold for judging a finding as valid becomes more stringent.
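A classic example is the Holm-Bonferroni procedure, sketched below: the more hypotheses in play, the stricter the threshold each p-value must clear.

```python
# Sketch of a standard after-the-fact correction (Holm-Bonferroni).
import numpy as np

def holm_bonferroni(p_values, alpha=0.05):
    """Return a boolean array marking which hypotheses survive correction."""
    p = np.asarray(p_values)
    order = np.argsort(p)                  # test the smallest p-values first
    m = len(p)
    keep = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):   # threshold tightens with remaining tests
            keep[idx] = True
        else:
            break                          # once one fails, all larger p-values fail
    return keep
```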

But these correction techniques are nearly all after-the-fact adjustments. They’re tools applied at the end of a research project, after all the hypothesis testing is complete, which is not ideal for real-time, interactive data exploration.

“We don’t want to wait until the end of a session to tell people if their results are valid,” said Eli Upfal, a computer science professor at Brown and research co-author. “We also don’t want to have the system reverse itself by telling you at one point in a session that something is significant only to tell you later — after you’ve tested more hypotheses — that your early result isn’t significant anymore.”

Both of those scenarios are possible using the most common multiple hypothesis correction methods. So the researchers developed a different method for this project that enables them to monitor the risk of false discovery as hypothesis tests are ongoing.

“The idea is that you have a budget of how much false discovery risk you can take, and we update that budget in real time as a user interacts with the data,” Upfal said. “We also take into account the ways in which a user might explore the data. By understanding the sequence of their questions, we can adapt our algorithm and change the way we allocate the budget.”
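The sketch below illustrates the budget idea in the spirit of alpha-investing-style procedures; QUDE’s actual control algorithm and parameters differ.

```python
# Illustration of a running false-discovery "budget" (alpha-investing flavor);
# this is not QUDE's actual procedure, and the constants are arbitrary.
class DiscoveryBudget:
    def __init__(self, wealth=0.05, payout=0.025):
        self.wealth = wealth      # remaining false-discovery budget
        self.payout = payout      # budget earned back on each discovery

    def test(self, p_value):
        alpha_i = self.wealth / 2                    # spend part of the current budget
        significant = p_value <= alpha_i
        if significant:
            self.wealth += self.payout               # a discovery replenishes the budget
        else:
            self.wealth -= alpha_i / (1 - alpha_i)   # a miss costs budget
        return significant
```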

For users, the experience is similar to using any data visualization software, only with color-coded feedback that gives information about statistical significance.

“Green means that a visualization represents a finding that’s significant,” Kraska said. “If it’s red, that means to be careful; this is on shaky statistical ground.”

The system can’t guarantee absolute accuracy, the researchers say. No system can. But in a series of user tests using synthetic data for which the real and bogus correlations had been ground-truthed, the researchers showed that the system did indeed reduce the number of false discoveries users made.

The researchers consider this work a step toward a data exploration and visualization system that fully integrates a suite of statistical safeguards.

“Our goal is to make data science more accessible to a broader range of users,” Kraska said. “Tackling the multiple hypothesis problem is going to be important, but it’s also very difficult to do. We see this paper as a good first step.”

Story Source:

Materials provided by Brown University. Note: Content may be edited for style and length.

Physics may bring faster solutions for tough computational problems

A well-known computational problem seeks to find the most efficient route for a traveling salesman to visit clients in a number of cities. Seemingly simple, it’s actually surprisingly complex and much studied, with implications in fields as wide-ranging as manufacturing and air-traffic control.

Researchers from the University of Central Florida and Boston University have developed a novel approach to solve such difficult computational problems more quickly. As reported May 12 in Nature Communications, they’ve discovered a way of applying statistical mechanics, a branch of physics, to create more efficient algorithms that can run on traditional computers or a new type of quantum computational machine, said Professor Eduardo Mucciolo, chair of the Department of Physics in UCF’s College of Sciences.

Statistical mechanics was developed to study solids, gases and liquids at macroscopic scales, but it is now used to describe a variety of complex states of matter, from magnetism to superconductivity. Methods derived from statistical mechanics have also been applied to understand traffic patterns, the behavior of networks of neurons, sand avalanches and stock market fluctuations.

There already are successful algorithms based on statistical mechanics that are used to solve computational problems. Such algorithms map a problem onto a model of binary variables on the nodes of a graph, and the solution is encoded in the configuration of the model with the lowest energy. By building the model into hardware or a computer simulation, researchers can cool the system until it reaches its lowest energy, revealing the solution.
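That cooling step is essentially what simulated annealing does. Here is a minimal sketch for a generic energy function over binary variables; the energy function and parameters are placeholders, not any specific published algorithm.

```python
# Minimal simulated-annealing sketch of the "cool until lowest energy" idea,
# for a generic energy function over binary variables.
import math, random

def anneal(energy, spins, steps=100_000, t_start=5.0, t_end=0.01):
    """Flip one binary variable at a time, accepting uphill moves less often as T drops."""
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling schedule
        i = random.randrange(len(spins))
        old_e = energy(spins)
        spins[i] ^= 1                                       # propose flipping variable i
        delta = energy(spins) - old_e
        if delta > 0 and random.random() >= math.exp(-delta / t):
            spins[i] ^= 1                                   # reject: undo the flip
    return spins
```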

“The problem with this approach is that often one needs to get through phase transitions similar to those found when going from a liquid to a glass phase, where many competing configurations with low energy exist,” Mucciolo said. “Such phase transitions slow the cooling process to a crawl, rendering the method useless.”

Mucciolo and fellow physicists Claudio Chamon and Andrei Ruckenstein of BU overcame this hurdle by mapping the original computational problem onto an elegant statistical model without phase transitions, which they called the vertex model. The model is defined on a two-dimensional lattice, and each vertex corresponds to a reversible logic gate connected to four neighbors. Input and output data sit at the boundaries of the lattice. The use of reversible logic gates and the regularity of the lattice were crucial ingredients in avoiding the phase-transition snag, Mucciolo said.

“Our method basically runs things in reverse so we can solve these very hard problems,” Mucciolo said. “We assign to each of these logic gates an energy. We configured it in such a way that every time these logic gates are satisfied, the energy is very low — therefore, when everything is satisfied, the overall energy of the system should be very low.”

Chamon, a professor of physics at BU and the team leader, said the research represents a new way of thinking about the problem.

“This model exhibits no bulk thermodynamic-phase transition, so one of the obstructions for reaching solutions present in previous models was eliminated,” he said.

The vertex model may help solve complex problems in machine learning, circuit optimization, and other major computational challenges. The researchers are also exploring whether the model can be applied to the factoring of semiprimes, numbers that are the product of two prime numbers. The difficulty of performing this operation with very large semiprimes underlies modern cryptography and has offered a key rationale for the creation of large-scale quantum computers.

Moreover, the model can be generalized to add another path toward the solution of complex classical computational problems by taking advantage of quantum mechanical parallelism — the fact that, according to quantum mechanics, a system can be in many classical states at the same time.

“Our paper also presents a natural framework for programming special-purpose computational devices, such as D-Wave Systems machines, that use quantum mechanics to speed up the time to solution of classical computational problems,” said Ruckenstein.

Zhi-Cheng Yang, a graduate student in physics at BU, is also a co-author on the paper. The universities have applied for a patent on aspects of the vertex model.

Story Source:

Materials provided by University of Central Florida. Note: Content may be edited for style and length.

New math techniques to improve computational efficiency in quantum chemistry

Researchers at Sandia National Laboratories have developed new mathematical techniques to advance the study of molecules at the quantum level.

Mathematical and algorithmic developments along these lines are necessary for enabling the detailed study of complex hydrocarbon molecules that are relevant in engine combustion.

Existing methods to approximate potential energy functions at the quantum scale need too much computer power and are thus limited to small molecules. Sandia researchers say their technique will speed up quantum mechanical computations and improve predictions made by theoretical chemistry models. Given the computational speedup, these methods can potentially be applied to bigger molecules.

Sandia postdoctoral researcher Prashant Rai worked with researchers Khachik Sargsyan and Habib Najm at Sandia’s Combustion Research Facility and collaborated with quantum chemists So Hirata and Matthew Hermes at the University of Illinois at Urbana-Champaign. The team developed computationally efficient methods to approximate potential energy surfaces by computing energies at far fewer geometric arrangements than normally required.

A precise understanding of potential energy surfaces, key elements in virtually all calculations of quantum dynamics, is required to accurately estimate the energy and frequency of vibrational modes of molecules.

“If we can find the energy of the molecule for all possible configurations, we can determine important information, such as stable states of molecular transition structure or intermediate states of molecules in chemical reactions,” Rai said.

Initial results of this research were published in Molecular Physics in an article titled “Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green’s function theory.”

“Approximating potential energy surfaces of bigger molecules is an extremely challenging task due to the exponential increase in information required to describe them with each additional atom in the system,” Rai said. “In mathematics, it is termed the Curse of Dimensionality.”

Beating the curse

The key to beating the curse of dimensionality is to exploit the specific structure of the potential energy surfaces. Rai said this structural information can then be used to approximate the requisite high-dimensional functions.

“We make use of the fact that although potential energy surfaces can be high dimensional, they can be well approximated as a small sum of products of one-dimensional functions. This is known as the low-rank structure, where the rank of the potential energy surface is the number of terms in the sum,” Rai said. “Such an assumption on structure is quite general and has also been used in similar problems in other fields. Mathematically, the intuition of low-rank approximation techniques comes from multilinear algebra, where the function is interpreted as a tensor and decomposed using standard tensor decomposition techniques.”

The energy and frequency corrections are formulated as integrals of these high-dimensional energy functions. Approximation in such a low-rank format renders these functions easily integrable, since it breaks the integration problem into a sum of products of one- or two-dimensional integrals, to which standard integration methods apply.
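A toy example shows why the sum-of-products (low-rank) form makes high-dimensional integration cheap: the multi-dimensional integral collapses into a sum of products of one-dimensional integrals. The functions below are illustrative stand-ins, not an actual potential energy surface.

```python
# Toy illustration: the D-dimensional integral of
#   V(x1..xD) = sum_r  prod_d  f_{r,d}(x_d)
# collapses to a sum of products of one-dimensional integrals.
import numpy as np
from scipy.integrate import quad

def integrate_low_rank(factors, bounds):
    """factors[r][d] is a 1-D function; bounds[d] = (a_d, b_d)."""
    total = 0.0
    for rank_term in factors:                       # sum over rank-one terms
        term = 1.0
        for f, (a, b) in zip(rank_term, bounds):    # product of 1-D integrals
            term *= quad(f, a, b)[0]
        total += term
    return total

# Example: V(x, y) = sin(x)*cos(y) + x*y**2 on [0, 1]^2 (a rank-2 surface).
factors = [[np.sin, np.cos], [lambda x: x, lambda y: y**2]]
print(integrate_low_rank(factors, [(0, 1), (0, 1)]))
```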

The team tried out their computational methods on small molecules such as water and formaldehyde. Compared to the classical Monte Carlo method, the randomness-based standard workhorse for high-dimensional integration problems, their approach predicted the energy and vibrational frequencies of the water molecule more accurately, and it was at least 1,000 times more computationally efficient.

Rai said the next step is to further enhance the technique by challenging it with bigger molecules, such as benzene.

“Interdisciplinary studies, such as quantum chemistry and combustion engineering, provide opportunities for cross pollination of ideas, thereby providing a new perspective on problems and their possible solutions,” Rai said. “It is also a step towards using recent advances in data science as a pillar of scientific discovery in future.”

When artificial intelligence evaluates chess champions

The ELO system, which most chess federations use today, ranks players by the results of their games. Although simple and efficient, it overlooks relevant criteria such as the quality of the moves players actually make. To overcome these limitations, Jean-Marc Alliot of the Institut de recherche en informatique de Toulouse (IRIT — CNRS/INP Toulouse/Université Toulouse Paul Sabatier/Université Toulouse Jean Jaurès/Université Toulouse Capitole) demonstrates a new system, published on 24 April 2017 in the International Computer Games Association Journal.

Since the 1970s, the system designed by the Hungarian-born physicist Arpad Elo has ranked chess players according to the results of their games. The best players have the highest ratings, and the difference in ELO points between two players predicts the probability of either player winning a given game. If players perform better or worse than predicted, they gain or lose points accordingly. This method does not take into account the quality of the moves played during a game and is therefore unable to reliably rank players who played in different eras. Jean-Marc Alliot instead proposes ranking players directly on the quality of their actual moves.
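For reference, the ELO win expectancy alluded to here maps the rating gap to a predicted score through a logistic curve on a 400-point scale:

```python
# Standard ELO expected-score formula (400-point logistic scale).
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win = 1, draw = 0.5, loss = 0) for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 200-point favourite is expected to score about 0.76 per game.
```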

His system computes the difference between the move actually played and the move that would have been selected by the best chess program to date, Stockfish. Running on the OSIRIM supercomputer[1], this program plays almost perfect moves. Starting with those of Wilhelm Steinitz (1836-1900), all 26,000 games played since then by chess world champions have been processed in order to create a probabilistic model for each player. For each position, the model estimates the probability of making a mistake and the magnitude of the mistake. These models can then be used to compute the win/draw/loss probability for any given match between two players. The predictions have proven not only to be extremely close to the actual results when players actually faced one another, they also fare better than predictions based on ELO scores. The results show that the level of chess players has been steadily increasing. The current world champion, Magnus Carlsen, tops the list, while Bobby Fischer is third.

Under current conditions, this new ranking method cannot immediately replace the ELO system, which is easier to set up and implement. However, increases in computing power will make it possible to extend the new method to an ever-growing pool of players in the near future.

[1] Open Services for Indexing and Research Information in Multimedia contents, one of the platforms of IRIT laboratory.

Story Source:

Materials provided by CNRS. Note: Content may be edited for style and length.

Meteorologist applies biological evolution to forecasting

Weather forecasters rely on statistical models to find and sort patterns in large amounts of data. Still, the weather remains stubbornly difficult to predict because it is constantly changing.

“When we measure the current state of the atmosphere, we are not measuring every point in three-dimensional space,” says Paul Roebber, a meteorologist at the University of Wisconsin-Milwaukee. “We’re interpolating what happens in the in-between.”

To boost accuracy, forecasters don’t rely on just one model. They use “ensemble” modeling, which takes an average of many different weather models. But ensemble modeling isn’t as accurate as it could be unless new data are collected and added, and that can be expensive.

So Roebber applied a mathematical equivalent of Charles Darwin’s theory of evolution to the problem. He devised a method in which one computer program sorts 10,000 others, improving itself over time using strategies such as heredity, mutation and natural selection.
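A generic genetic-algorithm loop of the kind described (selection, crossover, mutation) can be sketched as follows. This is only an illustration, not Roebber’s actual forecasting system; the candidate models are represented here simply as lists of numeric parameters.

```python
# Generic genetic-algorithm loop illustrating heredity, mutation and selection
# (not Roebber's actual system). Each population member is a list of numbers.
import random

def evolve(population, fitness, generations=100, mutation_rate=0.05):
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(scored) // 2]            # selection: keep the fittest half
        children = []
        while len(children) < len(population) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                   # heredity: crossover of two parents
            child = [g if random.random() > mutation_rate
                     else g + random.gauss(0, 1)        # mutation: small random change
                     for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)                 # best candidate found
```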

“This was just a pie-in-the-sky idea at first,” says Roebber, a UWM distinguished professor of atmospheric sciences, who has been honing his method for five years. “But in the last year, I’ve gotten $500,000 of funding behind it.”

His forecasting method has outperformed the models used by the National Weather Service. When compared to standard weather prediction modeling, Roebber’s evolutionary methodology performs particularly well on longer-range forecasts and extreme events, when an accurate forecast is needed the most.

Between 30 and 40 percent of the U.S. economy is somehow dependent on weather prediction. So even a small improvement in the accuracy of a forecast could save millions of dollars annually for industries like shipping, utilities, construction and agribusiness.

The trouble with ensemble models is that the data they contain tend to be too similar. That makes it difficult to distinguish relevant variables from irrelevant ones — what statistician Nate Silver calls the “signal” and the “noise.”

How do you gain diversity in the data without collecting more of it? Roebber was inspired by how nature does it.

Nature favors diversity because it foils the possibility of one threat destroying an entire population at once. Darwin observed this in a population of Galapagos Islands finches in 1835. The birds divided into smaller groups, each residing in a different location around the islands. Over time, they adapted to their specific habitats, making each group distinct from the others.

Applying this to weather prediction models, Roebber began by subdividing the existing variables into conditional scenarios: the value of a variable would be set one way under one condition, but differently under another.

The computer program he created picks out the variables that best accomplish the goal and then recombines them. In terms of weather prediction, that means the “offspring” models improve in accuracy because they block more of the unhelpful attributes.

“One difference between this and biology is, I wanted to force the next generation [of models] to be better in some absolute sense, not just survive,” Roebber said.

He is already using the technique to forecast minimum and maximum temperatures for seven days out.

Roebber often thinks across disciplines in his research. Ten years ago, he was at the forefront of building forecast simulations that were organized like neurons in the brain. From the work, he created an “artificial neural network” tool, now used by the National Weather Service, that significantly improves snowfall prediction.

Big data approach to predict protein structure

Nothing in the body works without proteins; they are the molecular all-rounders in our cells. If they do not work properly, severe diseases such as Alzheimer’s may result. To develop methods to repair malfunctioning proteins, their structure has to be known. Using a big data approach, researchers at the Karlsruhe Institute of Technology (KIT) have now developed a method to predict protein structures.

In the Proceedings of the National Academy of Sciences of the United States of America (PNAS), the researchers report that they succeeded in predicting even highly complicated protein structures by statistical analysis alone, independent of experiment. Experimental determination of protein structures is quite cumbersome, and success is not guaranteed. Proteins are the basis of life. As structural proteins, they are involved in the growth of tissue such as nails or hair. Other proteins work as muscles, control metabolism and immune response, or transport oxygen in the red blood cells.

The basic structure of proteins with certain functions is similar across different organisms. “No matter whether human being, mouse, whale or bacterium, nature does not constantly reinvent proteins for each living organism anew, but varies them by evolutionary mutation and selection,” says Alexander Schug of the Steinbuch Centre for Computing (SCC). Such mutations can be identified easily when reading out the genetic information encoding the proteins. If mutations occur in pairs, the protein sections involved are usually located close to each other. With the help of a computer, data on many spatially adjacent sections can be assembled into an exact prediction of the three-dimensional structure, much like a big puzzle. “To understand the function of a protein in detail and to influence it, if possible, the place of every individual atom has to be known,” Schug says.
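As a toy illustration of scoring correlated mutations, one can measure how strongly pairs of columns in a multiple sequence alignment co-vary; published contact-prediction methods use more elaborate statistics (such as direct-coupling analysis), but the underlying intuition is the same.

```python
# Toy sketch: score correlated mutations between alignment columns with mutual
# information. Real contact-prediction methods use more sophisticated statistics.
from collections import Counter
from itertools import combinations
from math import log

def mutual_information(col_i, col_j):
    """How much knowing the residue in one column tells you about the other."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def top_contact_pairs(alignment, k=10):
    """alignment: equal-length protein sequences; returns the k highest-scoring column pairs."""
    columns = list(zip(*alignment))
    scores = {(i, j): mutual_information(columns[i], columns[j])
              for i, j in combinations(range(len(columns)), 2)}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```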

For his work, the physicist uses an interdisciplinary approach based on methods and resources from computer science and biochemistry. Using supercomputers, he searched the freely available genetic information of thousands of organisms, ranging from bacteria to humans, for correlated mutations. “By combining the latest technology and a true treasure of datasets, we studied nearly two thousand different proteins. This is a completely new dimension compared to previous studies,” Schug adds. He emphasizes that this demonstrates the power of the method, which promises to have high potential for applications ranging from molecular biology to medicine. Although the present work is fundamental research, according to Schug, the results may well be incorporated into new treatments for diseases in the future.

Story Source:

Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.