Freezing upon heating: Formation of dynamical glass


The discovery of superconductivity and its experimental realization have surely been two of the most important advancements in physics and engineering of the past century. Nevertheless, their statistical and dynamical characteristics have yet to be fully revealed and understood. A team of researchers at the Center for Theoretical Physics of Complex Systems within the Institute for Basic Science (IBS, South Korea) has modeled the energy behavior of chaotic networks of superconducting elements (grains) separated by non-superconducting junctions, and found some unexpected statistical properties at long (but still finite) time-scales. Their findings are published in Physical Review Letters.

A number of pioneering discoveries in statistical mechanics arose from questioning the applicability of core abstract concepts to physical systems and experimental devices. A notable example is the ergodic hypothesis, which assumes that over time a system visits almost every available microstate of its phase space, and that the infinite-time average of any measurable quantity of the system matches its phase-space average. In short, this is the reason why ice melts in a pot of water, and does so faster if the water is hotter. Scientists have been devising ways to verify the validity or failure of the ergodic hypothesis based on finite-time measurements.
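
To see what the ergodic hypothesis means in practice, here is a minimal toy sketch in Python (a generic chaotic map, not the Josephson-junction networks studied by the team): for the fully chaotic logistic map, the average of x along one long orbit converges to the phase-space average of 0.5.

```python
# Toy illustration of the ergodic hypothesis: time average along one orbit
# of the logistic map at r = 4 versus the phase-space average (which is 0.5
# with respect to the map's invariant density 1/(pi*sqrt(x(1-x)))).
x = 0.2345                      # arbitrary initial condition in (0, 1)
n_steps = 1_000_000
total = 0.0
for _ in range(n_steps):
    x = 4.0 * x * (1.0 - x)     # logistic map at r = 4, fully chaotic
    total += x

print(f"time average of x along one orbit: {total / n_steps:.4f}")  # close to 0.5
print("phase-space average of x:           0.5")
```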

Led by Sergej Flach, IBS researchers have developed an efficient method to extract precise estimates of the time-scale for ergodicity (dubbed the ergodization time). The method has been successfully applied to classical networks of superconducting grains weakly coupled by Josephson junctions.

The team found that in these networks the ergodization time-scale quickly becomes huge (although it stays finite) as the system temperature is increased. In contrast, the time-scales necessary for chaoticity to develop remain practically unchanged. This is highly surprising, as ergodicity is inextricably tied to chaos, and one might expect their respective time-scales to be strictly related as well. In terms of the ice analogy, it means that the hotter the water gets, the longer it takes for the ice cubes to melt. IBS researchers numerically showed that, at higher temperatures, fluctuations strongly hinder their own meandering through the system. This ever slower process drastically delays the ergodization of the system. The team has labeled this discovery a dynamical glass.

“Upon increasing the temperature, our studies unravelled the emergence of roaming chaotic spots among frozen and seemingly inert regions. The name dynamical glass follows from this very fragmentation, as the word dynamical hints at the quick development of chaos, while the word glass points at phenomena that require an extremely long — but finite — time-scale to occur,” explains Carlo Danieli, a member of the team.

“The understanding of the mechanism and the necessary time scales for ergodicity and chaoticity to develop is at the very core of a huge number of recent advancements in condensed matter physics. We expect this to pave the way to assess several unsolved issues in many-body systems, from anomalous heat conductivity to thermalization,” enthuses the team.

IBS researchers expect the observed dynamical glass to be a generic property of networks of superconducting grains coupled via Josephson junctions, irrespective of their spatial dimensionality. Furthermore, they conjecture that a broad set of weakly non-integrable many-body systems turn into dynamical glasses as they approach specific temperature regimes. An equally charming and challenging task is the team’s aspiration to demonstrate the existence of a dynamical glass in quantum many-body systems and establish its connection with many-body localization phenomena.

As stated by Flach, “We expect these findings to open a new avenue for assessing and understanding phenomena related to many-body localization and glassiness in a large number of weakly non-integrable many-body systems.”

Story Source:

Materials provided by Institute for Basic Science. Note: Content may be edited for style and length.

‘Statistics anxiety’ is real, and new research suggests targeted ways to handle it


Have you ever been stressed out by the idea of doing math or statistics problems? You’re not alone.

Research shows that up to 80 percent of college students experience some form of statistics anxiety — and for students majoring in psychology, this anxiety often puts obstacles in their path to graduation.

“In my psychology statistics class, I once had a student take and fail the class two or three times,” said Michael Vitevitch, professor and chair of psychology at the University of Kansas. “He’d taken everything in the major, and this was the last class he needed. But he had high statistics anxiety. On one of the exams, he kind of froze and was staring at the paper. I took him into the hallway and said, ‘Relax, go splash some water on your face and come back when you’re OK.’ He did, but at the end of class, he was still staring at the paper. I asked him to come back to my office and finish it. That extra time was enough to get him through the test. Eventually, he passed the class and graduated that semester. It took him seven or eight years to complete his bachelor’s degree, and it was because of his problems in the statistics class. He went on to do occupational therapy and is really doing quite well, but it was a statistics class that was almost the barrier to having that future.”

Now, Vitevitch is the co-author of a new study that uses a questionnaire and an analytical technique called “network science” to determine precisely what factors contribute to this kind of statistics anxiety among psychology majors. The paper appears in the peer-reviewed journal Scholarship of Teaching and Learning in Psychology.

“We teach a statistics class in the psychology department and see many students put it off until senior year because they’re scared of this class,” Vitevitch said. “We’re interested in seeing whether we can help students with their statistics anxiety. There’s not a one-size-fits-all solution to get them to overcome their fears. You need to find out what their fear is and focus on that. For people who don’t think statistics are useful, you need to convince them it’s not just useful for psychology but for other things as well. For people fearful of math and statistics in general, you need to help lower their anxiety so they can focus on learning. We hope this gives us some understanding of our own students and statistical anxiety in general.”

Vitevitch’s collaborators on the new paper are Cynthia Siew of the University of Warwick and Marsha McCartney of the National University of Singapore. The researchers said a grasp of statistics is vital to academic achievement and a well-rounded understanding of the field of psychology.

“It’s a way of communicating with numbers instead of words,” Vitevitch said. “A picture is worth a thousand words. Numbers can convey a lot of information as well. Being able to compute those numbers and communicate that massive amount of information quickly and concisely is important. It’s also very important knowing what those numbers mean and not skipping over them in a paper in a peer-reviewed journal article.”

The team used a questionnaire called the Statistical Anxiety Rating Scale (STARS) to determine aspects of learning statistics causing the most anxiety and categorize students into groups of high- and low-anxiety students. Questions probed students’ feelings about the value of statistics, self-concepts about math ability, fear of statistics teachers, interpretation anxiety, test and class anxiety, and the fear of asking for help.

“People would answer and rate questions like, ‘Do you think this is a useless topic? Do you think you’ll never use statistics in your life? Do you think statistics professors aren’t human because they’re more like robots?'” Vitevitch said. “Hopefully something like ‘my statistics teacher isn’t human’ is something we can focus on — and I’m saying that as a former statistics teacher who is human.”

With the questionnaire results from 228 students, the KU researcher and his colleagues mapped results visually using an emerging analysis technique called network science, which puts the most important contributors or symptoms of statistical anxiety at the center of a visual diagram of connecting nodes.

“Network science maps a collection of entities that are somehow related to one another,” Vitevitch said. “Most people think of a social network where the dots would be you and your friends and lines would be drawn between you and people you know. You might know someone and they might know someone, but you might not know that third person. If you sketch these friends out, you get this spider-web-looking thing. People have been doing this with various psychopathologies — looking at symptoms of depression, for example. With statistics anxiety, it’s not just that you have symptoms, it’s how long you have them and which ones are more important. That’s not always captured by a laundry list of symptoms. But it does seem to be captured by a network approach. The most important symptoms are in the middle of that spider web.”

Whereas previous studies on the topic used a scale to measure the levels of students’ statistics anxiety, the application of network science to responses from the STARS questionnaire increased the researchers’ understanding of the nature of the anxiety itself. For instance, network science analysis revealed high- and low-anxiety networks have different network structures.
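
As a rough illustration of this kind of analysis (a sketch with simulated responses, not the authors' code or data), the snippet below builds a symptom network with the networkx library and ranks hypothetical STARS-like items by how strongly they are connected to the rest of the network.

```python
# Illustrative sketch, not the authors' analysis: simulated Likert responses for
# six STARS-like subscales, turned into a correlation-based symptom network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_students, n_items = 228, 6
responses = rng.integers(1, 6, size=(n_students, n_items))   # ratings from 1 to 5

items = ["worth_of_statistics", "self_concept", "fear_of_teachers",
         "interpretation_anxiety", "test_anxiety", "fear_of_asking_for_help"]
corr = np.corrcoef(responses, rowvar=False)   # item-by-item correlations

G = nx.Graph()
G.add_nodes_from(items)
for i in range(n_items):
    for j in range(i + 1, n_items):
        # Real analyses usually threshold or regularize edges; kept simple here.
        G.add_edge(items[i], items[j], weight=abs(corr[i, j]))

# Node "strength" (sum of edge weights) as a simple centrality measure:
# the most central items sit in the middle of the "spider web."
strength = {node: sum(d["weight"] for _, _, d in G.edges(node, data=True))
            for node in G.nodes}
print(sorted(strength.items(), key=lambda kv: -kv[1]))
```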

For students high in statistics anxiety, the chief symptoms included high agreement with the statements “I can’t even understand seventh- and eighth-grade math; how can I possibly do statistics?” and “statistics teachers are so abstract they seem inhuman.” For low-anxiety students, main symptoms included fear of “asking a fellow student for help in understanding a printout” and anxiety “interpreting the meaning of a table in a journal article.”

Vitevitch said he hoped the results would be used by instructors in university psychology departments to develop effective interventions to ease students’ statistics anxiety.

“This paper is targeted at people who teach psychology,” he said. “Hopefully we’ll turn our science on ourselves and find a better way to make sure we’re getting our points across to our students and helping those who need a little help. It might not be a matter of teaching remedial math, but more like helping them overcome fears or discomfort they have with this topic.”

New mathematical model can help save endangered species


What does the blue whale have in common with the Bengal tiger and the green turtle? They share the risk of extinction and are classified as endangered species. There are multiple reasons for species to die out, and climate change is among the main ones.

The risk of extinction varies from species to species depending on how individuals in its populations reproduce and how long each animal survives. Understanding the dynamics of survival and reproduction can support management actions to improve a species’ chances of survival.

Mathematical and statistical models have become powerful tools to help explain these dynamics. However, the quality of the information we use to construct such models is crucial to improve our chances of accurately predicting the fate of populations in nature.

“A model that over-simplifies survival and reproduction can give the illusion that a population is thriving when in reality it will go extinct,” says associate professor Fernando Colchero, author of a new paper published in Ecology Letters.

Colchero’s research focuses on mathematically recreating population dynamics by better understanding a species’ demography. He works on constructing and exploring stochastic population models that predict how a certain population (for example, an endangered species) will change over time.

These models include mathematical factors that describe how the species’ environment, survival rates and reproduction determine the population’s size and growth. For practical reasons, some assumptions are necessary.

Two commonly accepted assumptions are that survival and reproduction are constant with age, and that high survival goes hand in hand with high reproduction across all age groups within a species. Colchero challenged these assumptions by accounting for age-specific survival and reproduction, and for trade-offs between survival and reproduction. That is, conditions that favor survival are sometimes unfavorable for reproduction, and vice versa.

For his work Colchero used statistics, mathematical derivations, and computer simulations with data from wild populations of 24 species of vertebrates. The outcome was a significantly improved model that had more accurate predictions for a species’ population growth.
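
For readers curious about how age-specific survival and reproduction enter such models, here is a minimal sketch of a standard age-structured (Leslie matrix) projection. The rates are invented for illustration and this is not Colchero's model, only the general kind of bookkeeping such models build on.

```python
# Minimal Leslie-matrix sketch: age-specific survival and fecundity drive
# long-term population growth. All rates below are made up for illustration.
import numpy as np

fecundity = np.array([0.0, 0.8, 1.2, 0.9])   # offspring per individual per year
survival  = np.array([0.6, 0.7, 0.5])        # chance of reaching the next age class

L = np.zeros((4, 4))
L[0, :] = fecundity                  # top row: reproduction by each age class
for i, s in enumerate(survival):     # sub-diagonal: survival to the next class
    L[i + 1, i] = s

population = np.array([100.0, 50.0, 25.0, 10.0])   # initial age structure
for _ in range(50):                                 # project 50 years ahead
    population = L @ population

growth_rate = max(abs(np.linalg.eigvals(L)))        # dominant eigenvalue
print(f"long-term annual growth rate: {growth_rate:.3f}")  # below 1 means decline
```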

Despite the technical nature of Fernando’s work, this type of model can have very practical implications, as it provides qualified explanations of the underlying causes of extinction. These can be used to guide management actions and may help prevent the extinction of endangered species.

Story Source:

Materials provided by University of Southern Denmark. Note: Content may be edited for style and length.

New building block in quantum computing demonstrated


Researchers with the Department of Energy’s Oak Ridge National Laboratory have demonstrated a new level of control over photons encoded with quantum information. Their research was published in Optica.

Joseph Lukens, Brian Williams, Nicholas Peters, and Pavel Lougovski, research scientists with ORNL’s Quantum Information Science Group, performed distinct, independent operations simultaneously on two qubits encoded on photons of different frequencies, a key capability in linear optical quantum computing. Qubits are the smallest unit of quantum information.

Quantum scientists working with frequency-encoded qubits have been able to perform a single operation on two qubits in parallel, but that falls short for quantum computing.

“To realize universal quantum computing, you need to be able to do different operations on different qubits at the same time, and that’s what we’ve done here,” Lougovski said.

According to Lougovski, the team’s experimental system — two entangled photons contained in a single strand of fiber-optic cable — is the “smallest quantum computer you can imagine. This paper marks the first demonstration of our frequency-based approach to universal quantum computing.”

“A lot of researchers are talking about quantum information processing with photons, and even using frequency,” said Lukens. “But no one had thought about sending multiple photons through the same fiber-optic strand, in the same space, and operating on them differently.”

The team’s quantum frequency processor allowed them to manipulate the frequency of photons to bring about superposition, a state that enables quantum operations and computing.

Unlike data bits encoded for classical computing, superposed qubits encoded in a photon’s frequency have a value of 0 and 1, rather than 0 or 1. This capability allows quantum computers to concurrently perform operations on larger datasets than today’s supercomputers.
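
As a toy illustration of what "a value of 0 and 1" means, here is a generic state-vector example in Python (standard textbook quantum mechanics, not a model of ORNL's frequency processor):

```python
# Toy qubit example: a Hadamard-like gate puts |0> into an equal superposition
# of both basis values, so a measurement yields 0 or 1 with equal probability.
import numpy as np

ket0 = np.array([1.0, 0.0])                            # |0>, e.g. the lower frequency bin
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate
superposed = H @ ket0                                  # (|0> + |1>) / sqrt(2)
print(np.abs(superposed) ** 2)                         # [0.5 0.5]: weight on both 0 and 1
```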

Using their processor, the researchers demonstrated 97 percent interference visibility — a measure of how alike two photons are — compared with the 70 percent visibility rate returned in similar research. Their result indicated that the photons’ quantum states were virtually identical.

The researchers also applied a statistical method associated with machine learning to prove that the operations were done with very high fidelity and in a completely controlled fashion.

“We were able to extract more information about the quantum state of our experimental system using Bayesian inference than if we had used more common statistical methods,” Williams said.

“This work represents the first time our team’s process has returned an actual quantum outcome.”

Williams pointed out that their experimental setup provides stability and control. “When the photons are taking different paths in the equipment, they experience different phase changes, and that leads to instability,” he said. “When they are traveling through the same device, in this case, the fiber-optic strand, you have better control.”

Stability and control enable quantum operations that preserve information, reduce information processing time, and improve energy efficiency. The researchers compared their ongoing projects, begun in 2016, to building blocks that will link together to make large-scale quantum computing possible.

“There are steps you have to take before you take the next, more complicated step,” Peters said. “Our previous projects focused on developing fundamental capabilities that now enable us to work in the fully quantum domain with fully quantum input states.”

Lukens said the team’s results show that “we can control qubits’ quantum states, change their correlations, and modify them using standard telecommunications technology in ways that are applicable to advancing quantum computing.”

Once the building blocks of quantum computers are all in place, he added, “we can start connecting quantum devices to build the quantum internet, which is the next, exciting step.”

Much the way that information is processed differently from supercomputer to supercomputer, reflecting different developers and workflow priorities, quantum devices will function using different frequencies. This will make it challenging to connect them so they can work together the way today’s computers interact on the internet.

This work is an extension of the team’s previous demonstrations of quantum information processing capabilities on standard telecommunications technology. Furthermore, they said, leveraging existing fiber-optic network infrastructure for quantum computing is practical: billions of dollars have been invested, and quantum information processing represents a novel use.

The researchers said this “full circle” aspect of their work is highly satisfying. “We started our research together wanting to explore the use of standard telecommunications technology for quantum information processing, and we have found out that we can go back to the classical domain and improve it,” Lukens said.

Lukens, Williams, Peters, and Lougovski collaborated with Purdue University graduate student Hsuan-Hao Lu and his advisor Andrew Weiner. The research is supported by ORNL’s Laboratory Directed Research and Development program.

Artificial intelligence for studying the ancient human populations of Patagonia


Argentine and Spanish researchers have used statistical machine-learning techniques to analyze the mobility patterns and technology of the hunter-gatherer groups that inhabited the Southern Cone of America, from the time they arrived about 12,000 years ago until the end of the 19th century. Big data from archaeological sites located in the extreme south of Patagonia have been used for this study.

The presence of humans on the American continent dates back to at least 14,500 years ago, according to dates obtained at archaeological sites such as Monte Verde, in Chile’s Los Lagos Region. But the first settlers continued moving towards the southernmost confines of America.

Now, researchers from Argentina’s National Council for Scientific and Technical Research (CONICET) and two Spanish institutions (the Spanish National Research Council and the University of Burgos) have analyzed the relationships between mobility and technology developed by those societies that originated in the far south of Patagonia.

The study, published in the Royal Society Open Science journal, is based on an extensive database of all available archaeological evidence of human presence in this region, from the time the first groups arrived in the early Holocene (12,000 years ago) until the end of the 19th century.

This was followed by the application of machine learning techniques, statistical methods that allow a computer to learn from large amounts of data (in this case, big data on characteristic technological elements of the sites) in order to carry out classifications and predictions.
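
As a hedged illustration of this kind of automatic classification (hypothetical site data, not the published analysis), the sketch below clusters assemblages by the presence or absence of technological traits and checks whether two distinct "technological landscapes" emerge:

```python
# Illustrative sketch with invented data: cluster archaeological assemblages by
# presence/absence of technological traits into two groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
traits = ["stone_tools", "bone_tools", "harpoons", "canoe_parts", "shell_beads"]

# 40 hypothetical assemblages: 20 "pedestrian" and 20 "nautical" profiles plus noise.
pedestrian = (rng.random((20, 5)) < [0.9, 0.8, 0.1, 0.05, 0.1]).astype(float)
nautical   = (rng.random((20, 5)) < [0.3, 0.4, 0.9, 0.80, 0.9]).astype(float)
X = np.vstack([pedestrian, nautical])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in range(2):
    freqs = X[labels == k].mean(axis=0).round(2)   # trait frequencies per cluster
    print(f"cluster {k}:", dict(zip(traits, freqs)))
```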

“It is by means of automatic classification algorithms that we have identified two technological packages or ‘landscapes’: one that characterizes pedestrian hunter-gatherer groups (with their own stone and bone tools) and the other characterizing those that had nautical technology, such as canoes, harpoons and mollusc shells used to make beads,” explains Ivan Briz i Godino, an archaeologist of the National Council for Scientific and Technical Research (CONICET) of Argentina and co-author of the work.

“In future excavations, when sets of technological elements such as those we have detected appear, we’ll be able to directly deduce the type of mobility of the group or the connections with other communities,” adds Briz.

The results of the study have also made it possible to obtain maps with the settlements of the two communities, and this, in turn, has made it possible to locate large regions in which they interacted and shared their technological knowledge. In the case of groups with nautical technology, it has been confirmed that they arrived at around the beginning of the Mid-Holocene (some 6,000 years ago) from the channels and islands of the South Pacific, moving along the coast of what is now Chile.

“Traditional archaeology identifies sites, societies and their possible contacts on the basis of specific elements selected by specialists (such as designs of weapon tips or decorative elements), but here we show that it is more interesting to analyse sets of technological elements as a whole, using artificial intelligence techniques that allow us to work with large data volumes and without subjective prejudices,” concludes Briz.

New tool helps scientists better target the search for alien life


Could there be another planet out there with a society at the same stage of technological advancement as ours? To help find out, EPFL scientist Claudio Grimaldi, working in association with the University of California, Berkeley, has developed a statistical model that gives researchers a new tool in the search for the kind of signals that an extraterrestrial society might emit. His method — described in an article appearing today in Proceedings of the National Academy of Sciences — could also make the search cheaper and more efficient.

Astrophysics initially wasn’t Grimaldi’s thing; he was more interested in the physics of condensed matter. Working at EPFL’s Laboratory of Physics of Complex Matter, his research involved calculating the probabilities of carbon nanotubes exchanging electrons. But then he wondered: if the nanotubes were stars and the electrons were signals generated by extraterrestrial societies, could we calculate the probability of detecting those signals more accurately?

This is not pie-in-the-sky research — scientists have been studying this possibility for nearly 60 years. Several research projects concerning the search for extraterrestrial intelligence (SETI) have been launched since the late 1950s, mainly in the United States. The idea is that an advanced civilization on another planet could be generating electromagnetic signals, and scientists on Earth might be able to pick up those signals using the latest high-performance radio telescopes.

Renewed interest

Despite considerable advances in radio astronomy and the increase in computing power since then, none of those projects has led to anything concrete. Some signals of unidentified origin have indeed been recorded, like the Wow! signal in 1977, but none of them has been repeated or seems credible enough to be attributed to alien life.

But that doesn’t mean scientists have given up. On the contrary, SETI has seen renewed interest following the discovery of the many exoplanets orbiting the billions of suns in our galaxy. Researchers have designed sophisticated new instruments — like the Square Kilometre Array, a giant radio telescope being built in South Africa and Australia with a total collecting area of one square kilometer — that could pave the way to promising breakthroughs. And Russian entrepreneur Yuri Milner recently announced an ambitious program called Breakthrough Listen, which aims to cover 10 times more sky than previous searches and scan a much wider band of frequencies. Milner intends to fund his initiative with 100 million dollars over 10 years.

“In reality, expanding the search to these magnitudes only increases our chances of finding something by very little. And if we still don’t detect any signals, we can’t necessarily conclude with much more certainty that there is no life out there,” says Grimaldi.

Still a ways to go

The advantage of Grimaldi’s statistical model is that it lets scientists interpret both the success and failure to detect signals at varying distances from Earth. His model employs Bayes’ theorem to calculate the remaining probability of detecting a signal within a given radius around our planet.

For example, even if no signal is detected within a radius of 1,000 light years, there is still an over 10% chance that Earth is within range of hundreds of similar signals from elsewhere in the galaxy, but that our radio telescopes are currently not powerful enough to detect them. However, that probability rises to nearly 100% if even just one signal is detected within the 1,000-light-year radius. In that case, we could be almost certain that our galaxy is full of alien life.
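
A toy calculation along these lines (a sketch of a Bayesian update with a made-up flat prior over emitter densities, not Grimaldi's actual model) looks like this:

```python
# Toy Bayesian sketch: update a prior over the density of detectable emitters
# after a null search out to some radius, then ask how likely it is that
# signals still reach Earth from farther away. Numbers are illustrative only.
import numpy as np

R_searched = 1_000.0                       # light-years searched with no detection
lam = np.logspace(-14, -8, 400)            # candidate emitter densities (per ly^3)
prior = np.ones_like(lam) / lam.size       # flat prior, purely for illustration

V_searched = 4.0 / 3.0 * np.pi * R_searched**3
posterior = prior * np.exp(-lam * V_searched)   # Poisson probability of seeing nothing
posterior /= posterior.sum()

# Chance that at least one emitter still lies within 40,000 light-years of Earth.
V_big = 4.0 / 3.0 * np.pi * 40_000.0**3
p_any = np.sum(posterior * (1.0 - np.exp(-lam * V_big)))
print(f"P(at least one emitter within 40,000 ly | none within 1,000 ly) = {p_any:.2f}")
```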

After factoring in other parameters like the size of the galaxy and how closely packed its stars are, Grimaldi estimates that the probability of detecting a signal becomes very slight only at a radius of 40,000 light years. In other words, if no signals are detected at this distance from Earth, we could reasonably conclude that no other civilization at the same level of technological development as ours is detectable in the galaxy. But so far, scientists have been able to search for signals within a radius of “just” 40 light years.

So there’s still a ways to go. Especially since these search methods can’t detect alien civilizations that may be in primordial stages or that are highly advanced but haven’t followed the same technological trajectory as ours.

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Sarah Perrin. Note: Content may be edited for style and length.

Understanding deep-sea images with artificial intelligence


More and more data and images are generated during ocean research. In order to be able to evaluate the image data scientifically, automated procedures are necessary. Researchers have now developed a standardized workflow for sustainable marine image analysis for the first time.

The evaluation of very large amounts of data is becoming increasingly relevant in ocean research. Diving robots or autonomous underwater vehicles, which carry out measurements independently in the deep sea, can now record large quantities of high-resolution images. To evaluate these images scientifically in a sustainable manner, a number of prerequisites have to be fulfilled in data acquisition, curation and data management. “Over the past three years, we have developed a standardized workflow that makes it possible to scientifically evaluate large amounts of image data systematically and sustainably,” explains Dr. Timm Schoening from the “Deep Sea Monitoring” working group headed by Prof. Dr. Jens Greinert at GEOMAR. The background to this was the JPI Oceans project “Mining Impact.” The ABYSS autonomous underwater vehicle was equipped with a new digital camera system to study the ecosystem around manganese nodules in the Pacific Ocean. With the data collected in this way, the workflow was designed and tested for the first time. The results have now been published in the international journal Scientific Data.

The procedure is divided into three steps: data acquisition, data curation and data management, in each of which defined intermediate steps should be completed. For example, it is important to specify how the camera is to be set up, which data are to be captured, or which lighting is useful in order to be able to answer a specific scientific question. In particular, the metadata of the diving robot must also be recorded. “For data processing, it is essential to link the camera’s image data with the diving robot’s metadata,” says Schoening. The AUV ABYSS, for example, automatically recorded its position, the depth of the dive and the properties of the surrounding water. “All this information has to be linked to the respective image because it provides important information for subsequent evaluation,” says Schoening. An enormous task: ABYSS collected over 500,000 images of the seafloor in around 30 dives. Various programs, which the team developed especially for this purpose, ensured that the data were brought together. Here, unusable image material, such as images with motion blur, was removed.
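
As a minimal sketch of the linking step (hypothetical file names and navigation values, not GEOMAR's actual pipeline), image timestamps can be joined to the nearest metadata record like this:

```python
# Sketch: join each image to the AUV navigation record closest in time.
import pandas as pd

images = pd.DataFrame({
    "time": pd.to_datetime(["2016-04-01 10:00:03", "2016-04-01 10:00:08"]),
    "file": ["img_0001.jpg", "img_0002.jpg"],           # hypothetical file names
})
nav = pd.DataFrame({
    "time": pd.to_datetime(["2016-04-01 10:00:00", "2016-04-01 10:00:05",
                            "2016-04-01 10:00:10"]),
    "depth_m": [4120.5, 4121.0, 4121.4],                 # invented AUV metadata
    "lat": [11.801, 11.802, 11.803],
    "lon": [-117.501, -117.502, -117.503],
})

# merge_asof requires both frames to be sorted by the key column.
linked = pd.merge_asof(images.sort_values("time"), nav.sort_values("time"),
                       on="time", direction="nearest")
print(linked)
```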

All these processes are now automated. “Until then, however, a large number of time-consuming steps had been necessary,” says Schoening. “Now the method can be transferred to any project, even with other AUVs or camera systems.” The material processed in this way was then made permanently available for the general public.

Finally, artificial intelligence in the form of the specially developed algorithm “CoMoNoD” was used for evaluation at GEOMAR. It automatically records whether manganese nodules are present in a photo, at what size and at what position. Subsequently, for example, the individual images could be combined to form larger maps of the seafloor. The next use of the workflow and the newly developed programs is already planned: on the next expedition to the manganese nodule areas next spring, the image material will be evaluated directly on board. “Therefore, we will take some particularly powerful computers with us on board,” says Timm Schoening.

Story Source:

Materials provided by Helmholtz Centre for Ocean Research Kiel (GEOMAR). Note: Content may be edited for style and length.

A milestone for forecasting earthquake hazards


Earthquakes pose a profound danger to people and cities worldwide, but with the right hazard-mitigation efforts, from stricter building requirements to careful zoning, the potential for catastrophic collapses of roads and buildings and loss of human lives can be limited.

All of these measures depend on science delivering high-quality seismic hazard models. And yet, current models depend on a list of uncertain assumptions, with predictions that are difficult to test in the real world due to the long intervals between big earthquakes.

Now, a team of researchers from Columbia University’s Lamont-Doherty Earth Observatory, University of Southern California, University of California at Riverside and the U.S. Geological Survey has come up with a physics-based model that marks a turning point in earthquake forecasting. Their results appear in the new issue of Science Advances.

“Whether a big earthquake happens next week or 10 years from now, engineers need to build for the long run,” says the study’s lead author, Bruce Shaw, a geophysicist at Lamont-Doherty. “We now have a physical model that tells us what the long-term hazards are.”

Simulating nearly 500,000 years of California earthquakes on a supercomputer, researchers were able to match hazard estimates from the state’s leading statistical model based on a hundred years of instrumental data. The mutually validating results add support for California’s current hazard projections, which help to set insurance rates and building design standards across the state. The results also suggest a growing role for physics-based models in forecasting earthquake hazard and evaluating competing models in California and other earthquake-prone regions.

The earthquake simulator used in the study, RSQSim, simplifies California’s statistical model by eliminating many of the assumptions that go into estimating the likelihood of an earthquake of a certain size hitting a specific region. The researchers, in fact, were surprised when the simulator, programmed with relatively basic physics, was able to reproduce estimates from a model that has improved steadily for decades. “This shows our simulator is ready for prime time,” says Shaw.

Seismologists can now use RSQSim to test the statistical model’s region-specific predictions. Accurate hazard estimates are especially important to government regulators in high-risk cities like Los Angeles and San Francisco, who write and revise building codes based on the latest science. In a state with a severe housing shortage, regulators are under pressure to make buildings strong enough to withstand heavy shaking while keeping construction costs down. A second tool to confirm hazard estimates gives the numbers added credibility.

“If you can get similar results with different techniques, that builds confidence you’re doing something right,” says study coauthor Tom Jordan, a geophysicist at USC.

A hallmark of the simulator is its use of rate and state-dependent friction to approximate how real-world faults break and transfer stress to other faults, sometimes setting off even bigger quakes. Developed at UC Riverside more than a decade ago, and refined further in the current study, RSQSim is the first physics-based model to replicate California’s most recent rupture forecast, UCERF3. When results from both models were fed into California’s statistical ground-shaking model, they came up with similar hazard profiles.

John Vidale, director of the Southern California Earthquake Center, which helped fund the study, says the new model has created a realistic 500,000-year history of earthquakes along California’s faults for researchers to explore. Vidale predicted the model would improve as computing power grows and more physics are added to the software. “Details such as earthquakes in unexpected places, the evolution of earthquake faults over geological time, and the viscous flow deep under the tectonic plates are not yet built in,” he said.

The researchers plan to use the model to learn more about aftershocks and how they unfold on California’s faults, and to study other fault systems globally. They are also working on incorporating the simulator into a physics-based ground-motion model, called CyberShake, to see if it can reproduce shaking estimates from the current statistical model.

“As we improve the physics in our simulations and computers become more powerful, we will better understand where and when the really destructive earthquakes are likely to strike,” says study coauthor Kevin Milner, a researcher at USC.

Story Source:

Materials provided by Columbia University. Original written by Kim Martineau. Note: Content may be edited for style and length.

Evaluation method for the impact of wind power fluctuation on power system quality


Abrupt changes in wind power generation output are a source of severe damage to power systems. Researchers at Kyoto University developed a stochastic modeling method that makes it possible to evaluate the impact of such phenomena. The feature of the method lies in its significant computational efficiency in comparison with standard Monte Carlo simulation, and in its applicability to the analysis and synthesis of various systems subject to extreme outliers.

The introduction of wind power generation into the electric power system is proceeding actively, mainly in the United States and Europe, and is expected to continue in Japan. However, its implementation requires dealing with the prediction uncertainty of output fluctuations. The fluctuation of wind power generation is usually small, but it becomes extremely large at a non-negligible frequency due to gusts and turbulence. Such extreme outliers have been regarded as a source of severe damage to power systems.

To cope with such fluctuations of wind power generation, a deterministic goal such as “absolutely keep the frequency fluctuation within 0.2 Hz” would be unattainable or would result in an overly conservative design. Therefore, a probabilistic goal such as “keep the frequency fluctuation within 0.2 Hz with a probability of 99.7% or more” is indispensable.

Probabilistic uncertainty is commonly evaluated statistically by assuming that it obeys a normal distribution, for mathematical tractability. The output outliers in wind power generation are, however, more frequent than a normal distribution would suggest. Even if a complicated simulator can be constructed without assuming a normal distribution, it is not realistic to investigate the statistical properties by Monte Carlo simulation, because the required number of samples explodes before sufficiently many extreme outliers occur.

An evaluation method was developed for the impact of wind power fluctuation on power system quality. The method first builds probabilistic models that assume a stable distribution (an extension of the normal distribution) for the uncertainty. Then, instead of using the model as a simulator to generate data samples, the statistical properties are computed directly from the parameters of the model. The important features are that (1) the influence of extreme outliers can be properly taken into account, (2) the model can be determined easily from actual data, and (3) the computational cost is very low. The method was shown to be valid through its application to frequency deviation estimation based on actual power system data.
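
As an illustration of the idea (a sketch with assumed stable-distribution parameters, not the Kyoto method itself), the probability of staying within a frequency band can be read directly from the model rather than counted from Monte Carlo samples:

```python
# Sketch: evaluate a probabilistic goal directly from a heavy-tailed stable
# distribution. Parameters below are invented; in practice they would be
# estimated from measured fluctuation data.
import numpy as np
from scipy.stats import levy_stable

# alpha < 2 gives heavier tails than the normal distribution (alpha = 2).
alpha, beta, loc, scale = 1.7, 0.0, 0.0, 0.05   # hypothetical deviation model (Hz)

p_within = levy_stable.cdf(0.2, alpha, beta, loc=loc, scale=scale) \
         - levy_stable.cdf(-0.2, alpha, beta, loc=loc, scale=scale)
print(f"P(|frequency deviation| <= 0.2 Hz) = {p_within:.4f}")
```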

This newly proposed probabilistic evaluation method enables us to quantitatively evaluate the power system risk caused by extremely abrupt changes of wind power generation. Countermeasures based on the evaluation would contribute to improvement of the reliability and economic efficiency of the electric power system. It should also be noted that the proposed method is applicable to the analysis and synthesis of various systems which have extreme outliers.

Story Source:

Materials provided by Japan Science and Technology Agency. Note: Content may be edited for style and length.

Everything big data claims to know about you could be wrong


When it comes to understanding what makes people tick — and get sick — medical science has long assumed that the bigger the sample of human subjects, the better. But new research led by the University of California, Berkeley, suggests this big-data approach may be wildly off the mark.

That’s largely because emotions, behavior and physiology vary markedly from one person to the next and one moment to the next. So averaging out data collected from a large group of human subjects at a given instant offers only a snapshot, and a fuzzy one at that, researchers said.

The findings, published this week in the Proceedings of the National Academy of Sciences, have implications for everything from mining social media data to customizing health therapies, and could change the way researchers and clinicians analyze, diagnose and treat mental and physical disorders.

“If you want to know what individuals feel or how they become sick, you have to conduct research on individuals, not on groups,” said study lead author Aaron Fisher, an assistant professor of psychology at UC Berkeley. “Diseases, mental disorders, emotions, and behaviors are expressed within individual people, over time. A snapshot of many people at one moment in time can’t capture these phenomena.”

Moreover, the consequences of continuing to rely on group data in the medical, social and behavioral sciences include misdiagnoses, prescribing the wrong treatments and generally perpetuating scientific theory and experimentation that is not properly calibrated to the differences between individuals, Fisher said.

That said, a fix is within reach: “People shouldn’t necessarily lose faith in medical or social science,” he said. “Instead, they should see the potential to conduct scientific studies as a part of routine care. This is how we can truly personalize medicine.”

Plus, he noted, “modern technologies allow us to collect many observations per person relatively easily, and modern computing makes the analysis of these data possible in ways that were not possible in the past.”

Fisher and fellow researchers at Drexel University in Philadelphia and the University of Groningen in the Netherlands used statistical models to compare data collected on hundreds of people, including healthy individuals and those with disorders ranging from depression and anxiety to post-traumatic stress disorder and panic disorder.

In six separate studies they analyzed data via online and smartphone self-report surveys, as well as electrocardiogram tests to measure heart rates. The results consistently showed that what’s true for the group is not necessarily true for the individual.

For example, a group analysis of people with depression found that they worry a great deal. But when the same analysis was applied to each individual in that group, researchers discovered wide variations that ranged from zero worrying to agonizing well above the group average.

Moreover, in looking at the correlation between fear and avoidance — a common association in group research — they found that for many individuals, fear did not cause them to avoid certain activities, or vice versa.
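
A toy simulation makes the general point (simulated numbers, not the study's data): the correlation across people measured at a single moment can even have the opposite sign from the correlation within every individual over time.

```python
# Sketch: within-person correlations can be strongly negative while a one-shot
# "snapshot" across people shows a strongly positive correlation.
import numpy as np

rng = np.random.default_rng(3)
n_people, n_timepoints = 50, 200
snapshot_x, snapshot_y, within_corrs = [], [], []

for _ in range(n_people):
    base = rng.normal(0.0, 5.0)                     # stable individual difference
    dev = rng.normal(0.0, 1.0, n_timepoints)        # moment-to-moment fluctuation
    x = base + dev
    y = base - 0.7 * dev + rng.normal(0.0, 0.5, n_timepoints)  # negative link within person
    within_corrs.append(np.corrcoef(x, y)[0, 1])
    snapshot_x.append(x[0])                         # one observation per person,
    snapshot_y.append(y[0])                         # as in a group snapshot study

print(f"mean within-person correlation:      {np.mean(within_corrs):+.2f}")   # strongly negative
print(f"between-person snapshot correlation: {np.corrcoef(snapshot_x, snapshot_y)[0, 1]:+.2f}")  # strongly positive
```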

“Fisher’s findings clearly imply that capturing a person’s own processes as they fluctuate over time may get us far closer to individualized treatment,” said UC Berkeley psychologist Stephen Hinshaw, an expert in psychopathology and faculty member of the department’s clinical science program.

In addition to Fisher, co-authors of the study are John Medaglia at Drexel University and Bertus Jeronimus at the University of Groningen.