New building block in quantum computing demonstrated

Researchers with the Department of Energy’s Oak Ridge National Laboratory have demonstrated a new level of control over photons encoded with quantum information. Their research was published in Optica.

Joseph Lukens, Brian Williams, Nicholas Peters, and Pavel Lougovski, research scientists with ORNL’s Quantum Information Science Group, performed distinct, independent operations simultaneously on two qubits encoded on photons of different frequencies, a key capability in linear optical quantum computing. Qubits are the smallest unit of quantum information.

Quantum scientists working with frequency-encoded qubits have been able to perform a single operation on two qubits in parallel, but that falls short of what universal quantum computing requires.

“To realize universal quantum computing, you need to be able to do different operations on different qubits at the same time, and that’s what we’ve done here,” Lougovski said.

According to Lougovski, the team’s experimental system — two entangled photons contained in a single strand of fiber-optic cable — is the “smallest quantum computer you can imagine. This paper marks the first demonstration of our frequency-based approach to universal quantum computing.”

“A lot of researchers are talking about quantum information processing with photons, and even using frequency,” said Lukens. “But no one had thought about sending multiple photons through the same fiber-optic strand, in the same space, and operating on them differently.”

The team’s quantum frequency processor allowed them to manipulate the frequency of photons to bring about superposition, a state that enables quantum operations and computing.

Unlike the data bits of classical computing, a superposed qubit encoded in a photon’s frequency holds the values 0 and 1 simultaneously, rather than 0 or 1. This capability allows quantum computers to concurrently perform operations on larger datasets than today’s supercomputers can.
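
The idea of applying distinct operations to two qubits at once, and of superposition itself, can be pictured with ordinary state-vector arithmetic. The sketch below is a toy NumPy illustration (not the team’s frequency-bin encoding or any photonic simulation): it puts one qubit of a two-qubit register into superposition while flipping the other in the same step.

```python
import numpy as np

# Single-qubit basis state |0>
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: puts a qubit into an equal superposition of 0 and 1
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Pauli-X ("bit flip") gate: a different operation for the second qubit
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Two-qubit register starting in |00>
state = np.kron(ket0, ket0)

# Apply H to qubit 1 and X to qubit 2 in a single, simultaneous step
state = np.kron(H, X) @ state

# Amplitudes over the basis states |00>, |01>, |10>, |11>
print(np.round(state, 3))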
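```

Running this prints equal amplitudes on |01> and |11>: the first qubit is now in a superposition of 0 and 1 while the second has been flipped, two different operations carried out at once.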

Using their processor, the researchers demonstrated 97 percent interference visibility — a measure of how alike two photons are — compared with the 70 percent visibility rate returned in similar research. Their result indicated that the photons’ quantum states were virtually identical.

The researchers also applied a statistical method associated with machine learning to prove that the operations were done with very high fidelity and in a completely controlled fashion.

“We were able to extract more information about the quantum state of our experimental system using Bayesian inference than if we had used more common statistical methods,” Williams said.

“This work represents the first time our team’s process has returned an actual quantum outcome.”
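
Bayesian inference in this context means treating an unknown quantity of the experiment, such as the interference visibility, as a parameter with a posterior distribution conditioned on the measured photon counts. The grid-based sketch below is purely illustrative: the counts are invented and the likelihood is deliberately simplified, so it shows the mechanics of such an estimate rather than the team’s actual analysis.

```python
import numpy as np

# Hypothetical coincidence counts: n_total photon pairs measured, n_interf of
# them showing the expected interference outcome. Numbers are made up.
n_total, n_interf = 1000, 970

# Grid over possible visibility values (stopping just short of 1 to avoid log(0))
v = np.linspace(0.0, 0.999, 1000)

# Simplified model: probability of the interference outcome rises linearly with
# visibility, p = (1 + v) / 2. An illustrative assumption, not the paper's model.
p = (1 + v) / 2

# Flat prior; binomial likelihood evaluated on the grid (log space for stability)
log_post = n_interf * np.log(p) + (n_total - n_interf) * np.log(1 - p)
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(f"posterior mean visibility: {(v * post).sum():.3f}")
```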

Williams pointed out that their experimental setup provides stability and control. “When the photons are taking different paths in the equipment, they experience different phase changes, and that leads to instability,” he said. “When they are traveling through the same device, in this case, the fiber-optic strand, you have better control.”

Stability and control enable quantum operations that preserve information, reduce information processing time, and improve energy efficiency. The researchers compared their ongoing projects, begun in 2016, to building blocks that will link together to make large-scale quantum computing possible.

“There are steps you have to take before you take the next, more complicated step,” Peters said. “Our previous projects focused on developing fundamental capabilities and now enable us to work in the fully quantum domain with fully quantum input states.”

Lukens said the team’s results show that “we can control qubits’ quantum states, change their correlations, and modify them using standard telecommunications technology in ways that are applicable to advancing quantum computing.”

Once the building blocks of quantum computers are all in place, he added, “we can start connecting quantum devices to build the quantum internet, which is the next, exciting step.”

Much the way that information is processed differently from supercomputer to supercomputer, reflecting different developers and workflow priorities, quantum devices will function using different frequencies. This will make it challenging to connect them so they can work together the way today’s computers interact on the internet.

This work is an extension of the team’s previous demonstrations of quantum information processing capabilities on standard telecommunications technology. Furthermore, they said, leveraging existing fiber-optic network infrastructure for quantum computing is practical: billions of dollars have been invested, and quantum information processing represents a novel use.

The researchers said this “full circle” aspect of their work is highly satisfying. “We started our research together wanting to explore the use of standard telecommunications technology for quantum information processing, and we have found out that we can go back to the classical domain and improve it,” Lukens said.

Lukens, Williams, Peters, and Lougovski collaborated with Purdue University graduate student Hsuan-Hao Lu and his advisor Andrew Weiner. The research is supported by ORNL’s Laboratory Directed Research and Development program.

Artificial intelligence for studying the ancient human populations of Patagonia

Argentine and Spanish researchers have used statistical machine-learning techniques to analyze the mobility patterns and technology of the hunter-gatherer groups that inhabited the Southern Cone of America, from their arrival about 12,000 years ago until the end of the 19th century. Big data from archaeological sites located in the extreme south of Patagonia were used for this study.

The presence of humans on the American continent dates back to at least 14,500 years ago, according to dating carried out at archaeological sites such as Monte Verde, in Chile’s Los Lagos Region. But the first settlers kept moving towards the southernmost confines of America.

Now, researchers from Argentina’s National Council for Scientific and Technical Research (CONICET) and two Spanish institutions (the Spanish National Research Council and the University of Burgos) have analyzed the relationships between mobility and technology developed by those societies that originated in the far south of Patagonia.

The study, published in the Royal Society Open Science journal, is based on an extensive database of all available archaeological evidence of human presence in this region, from the time the first groups arrived in the early Holocene (12,000 years ago) until the end of the 19th century.

This was followed by the application of machine-learning techniques, statistical methods that allow a computer to learn from large amounts of data (in this case, big data on the characteristic technological elements of the sites) in order to carry out classification and prediction.

“It is by means of automatic classification algorithms that we have identified two technological packages or ‘landscapes’: one that characterizes pedestrian hunter-gatherer groups (with their own stone and bone tools) and the other characterizing those that had nautical technology, such as canoes, harpoons and mollusc shells used to make beads,” explains Ivan Briz i Godino, an archaeologist of the National Council for Scientific and Technical Research (CONICET) of Argentina and co-author of the work.

“In future excavations, when sets of technological elements such as those we have detected appear, we’ll be able to directly deduce the type of mobility of the group or the connections with other communities,” adds Briz.
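
The “automatic classification” Briz describes can be pictured as unsupervised clustering of artifact assemblages. The sketch below uses a tiny, invented presence/absence matrix and scikit-learn’s KMeans purely to show the kind of grouping involved; it is not the authors’ pipeline, variables or data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical presence/absence matrix: rows are archaeological sites, columns
# are artifact types (stone tools, bone tools, harpoons, shell beads, canoe parts).
# The real study used a much larger database of Patagonian sites.
X = np.array([
    [1, 1, 0, 0, 0],   # looks like a pedestrian hunter-gatherer assemblage
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 1, 1, 1, 1],   # looks like a nautical assemblage
    [0, 0, 1, 1, 1],
    [1, 0, 1, 1, 0],
])

# Unsupervised clustering into two technological "landscapes"
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # sites grouped by shared technology, without predefined categories
```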

The results of the study have also made it possible to obtain maps with the settlements of the two communities, and this, in turn, has made it possible to locate large regions in which they interacted and shared their technological knowledge. In the case of groups with nautical technology, it has been confirmed that they arrived at around the beginning of the Mid-Holocene (some 6,000 years ago) from the channels and islands of the South Pacific, moving along the coast of what is now Chile.

“Traditional archaeology identifies sites, societies and their possible contacts on the basis of specific elements selected by specialists (such as designs of weapon tips or decorative elements), but here we show that it is more interesting to analyse sets of technological elements as a whole, using artificial intelligence techniques that allow us to work with large data volumes and without subjective prejudices,” concludes Briz.

New tool helps scientists better target the search for alien life

Could there be another planet out there with a society at the same stage of technological advancement as ours? To help find out, EPFL scientist Claudio Grimaldi, working in association with the University of California, Berkeley, has developed a statistical model that gives researchers a new tool in the search for the kind of signals that an extraterrestrial society might emit. His method — described in an article appearing today in Proceedings of the National Academy of Sciences — could also make the search cheaper and more efficient.

Astrophysics initially wasn’t Grimaldi’s thing; he was more interested in the physics of condensed matter. Working at EPFL’s Laboratory of Physics of Complex Matter, his research involved calculating the probabilities of carbon nanotubes exchanging electrons. But then he wondered: if the nanotubes were stars and the electrons were signals generated by extraterrestrial societies, could we calculate the probability of detecting those signals more accurately?

This is not pie-in-the-sky research — scientists have been studying this possibility for nearly 60 years. Several research projects concerning the search for extraterrestrial intelligence (SETI) have been launched since the late 1950s, mainly in the United States. The idea is that an advanced civilization on another planet could be generating electromagnetic signals, and scientists on Earth might be able to pick up those signals using the latest high-performance radio telescopes.

Renewed interest

Despite considerable advances in radio astronomy and the increase in computing power since then, none of those projects has led to anything concrete. Some signals of unidentified origin have indeed been recorded, like the Wow! signal in 1977, but none of them has been repeated or seems credible enough to be attributable to alien life.

But that doesn’t mean scientists have given up. On the contrary, SETI has seen renewed interest following the discovery of the many exoplanets orbiting the billions of suns in our galaxy. Researchers have designed sophisticated new instruments — like the Square Kilometre Array, a giant radio telescope being built in South Africa and Australia with a total collecting area of one square kilometer — that could pave the way to promising breakthroughs. And Russian entrepreneur Yuri Milner recently announced an ambitious program called Breakthrough Listen, which aims to cover 10 times more sky than previous searches and scan a much wider band of frequencies. Milner intends to fund his initiative with 100 million dollars over 10 years.

“In reality, expanding the search to these magnitudes only increases our chances of finding something by very little. And if we still don’t detect any signals, we can’t necessarily conclude with much more certainty that there is no life out there,” says Grimaldi.

Still a ways to go

The advantage of Grimaldi’s statistical model is that it lets scientists interpret both the success and failure to detect signals at varying distances from Earth. His model employs Bayes’ theorem to calculate the remaining probability of detecting a signal within a given radius around our planet.

For example, even if no signal is detected within a radius of 1,000 light years, there is still an over 10% chance that Earth is within range of hundreds of similar signals from elsewhere in the galaxy, but that our radio telescopes are currently not powerful enough to detect them. However, that probability rises to nearly 100% if even just one signal is detected within the 1,000-light-year radius. In that case, we could be almost certain that our galaxy is full of alien life.
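
The mechanics of that update can be sketched with a toy Poisson model. Every ingredient below (the prior over the number of signals, the area scaling with search radius, the assumed 50,000-light-year extent of the galactic disc) is an assumption chosen for illustration, so the printed number will not reproduce the paper’s figures; the point is only how a null result over a small radius still leaves substantial posterior probability for many signals galaxy-wide.

```python
import numpy as np

# lam = mean number of extraterrestrial signals crossing Earth from anywhere in the galaxy
lam = np.linspace(0.01, 2000, 100000)

# Assumed log-uniform prior over lam (a common "uninformative" choice, not the paper's prior)
prior = 1.0 / lam

# Assumed fraction of those signals whose sources lie within a 1,000-light-year
# search radius, using simple area scaling over a ~50,000-light-year disc
frac = (1000.0 / 50000.0) ** 2

# Likelihood of detecting zero signals within that radius (Poisson zero count)
likelihood = np.exp(-lam * frac)

post = prior * likelihood
post /= post.sum()

# Posterior probability that Earth is nevertheless crossed by hundreds of signals galaxy-wide
mask = lam > 100
print(f"P(>100 signals | none within 1,000 ly) = {post[mask].sum():.2f}")
```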

After factoring in other parameters like the size of the galaxy and how closely packed its stars are, Grimaldi estimates that the probability of detecting a signal becomes very slight only at a radius of 40,000 light years. In other words, if no signals are detected at this distance from Earth, we could reasonably conclude that no other civilization at the same level of technological development as ours is detectable in the galaxy. But so far, scientists have been able to search for signals within a radius of “just” 40 light years.

So there’s still a ways to go. Especially since these search methods can’t detect alien civilizations that may be in primordial stages or that are highly advanced but haven’t followed the same technological trajectory as ours.

Story Source:

Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Sarah Perrin. Note: Content may be edited for style and length.

Understanding deep-sea images with artificial intelligence

More and more data and images are generated during ocean research. In order to be able to evaluate the image data scientifically, automated procedures are necessary. Researchers have now developed a standardized workflow for sustainable marine image analysis for the first time.

The evaluation of very large amounts of data is becoming increasingly relevant in ocean research. Diving robots or autonomous underwater vehicles, which carry out measurements independently in the deep sea, can now record large quantities of high-resolution images. To evaluate these images scientifically in a sustainable manner, a number of prerequisites have to be fulfilled in data acquisition, curation and data management. “Over the past three years, we have developed a standardized workflow that makes it possible to scientifically evaluate large amounts of image data systematically and sustainably,” explains Dr. Timm Schoening from the “Deep Sea Monitoring” working group headed by Prof. Dr. Jens Greinert at GEOMAR. The background to this was the project JPIOceans “Mining Impact.” The ABYSS autonomous underwater vehicle was equipped with a new digital camera system to study the ecosystem around manganese nodules in the Pacific Ocean. With the data collected in this way, the workflow was designed and tested for the first time. The results have now been published in the international journal Scientific Data.

The procedure is divided into three steps: data acquisition, data curation and data management, in each of which defined intermediate steps should be completed. For example, it is important to specify how the camera is to be set up, which data is to be captured, or which lighting is useful in order to be able to answer a specific scientific question. In particular, the metadata of the diving robot must also be recorded. “For data processing, it is essential to link the camera’s image data with the diving robot’s metadata,” says Schoening. The AUV ABYSS, for example, automatically recorded its position, the depth of the dive and the properties of the surrounding water. “All this information has to be linked to the respective image because it provides important information for subsequent evaluation,” says Schoening. An enormous task: ABYSS collected over 500,000 images of the seafloor in around 30 dives. Various programs, which the team developed especially for this purpose, ensured that the data was brought together. Here, unusable image material, such as images with motion blur, was removed.
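
As an illustration of the coupling and curation steps, the sketch below links an image index to a navigation log by nearest timestamp and drops blurred frames using a simple sharpness score. The file names, column names and blur heuristic are all assumptions made for this example; GEOMAR’s workflow relies on its own purpose-built programs.

```python
import cv2
import pandas as pd

# Hypothetical inputs: an image index and the AUV navigation log, both timestamped.
images = pd.read_csv("image_index.csv", parse_dates=["timestamp"])  # columns: path, timestamp
nav = pd.read_csv("abyss_nav.csv", parse_dates=["timestamp"])       # columns: lat, lon, depth, ...

# Link every image to the nearest navigation record (the metadata coupling step)
images = images.sort_values("timestamp")
nav = nav.sort_values("timestamp")
linked = pd.merge_asof(images, nav, on="timestamp",
                       direction="nearest", tolerance=pd.Timedelta("2s"))

# Curation step: drop images that are too blurry, scoring sharpness with the
# variance of the Laplacian (a common heuristic, not GEOMAR's actual method)
def is_sharp(path: str, threshold: float = 100.0) -> bool:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return gray is not None and cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

curated = linked[linked["path"].apply(is_sharp)]
curated.to_csv("curated_images.csv", index=False)
```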

All these processes are now automated. “Until then, however, a large number of time-consuming steps had been necessary,” says Schoening. “Now the method can be transferred to any project, even with other AUVs or camera systems.” The material processed in this way was then made permanently available to the general public.

Finally, artificial intelligence in the form of the specially developed algorithm “CoMoNoD” was used for evaluation at GEOMAR. It automatically records whether manganese nodules are present in a photo, as well as their size and position. Subsequently, the individual images could be combined, for example, to form larger maps of the seafloor. The next use of the workflow and the newly developed programs is already planned: on the next expedition to the manganese nodule fields next spring, the image material will be evaluated directly on board. “Therefore we will take some particularly powerful computers with us on board,” says Timm Schoening.
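
The kind of per-image output CoMoNoD delivers (presence, size and position of nodule candidates) can be mimicked with a very crude blob-detection sketch such as the one below. This is emphatically not the CoMoNoD algorithm, just a generic OpenCV illustration with an assumed input file.

```python
import cv2

# Hypothetical seafloor photo; nodules appear as dark, roundish blobs on lighter sediment
img = cv2.imread("seafloor_photo.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.medianBlur(img, 5)

# Adaptive thresholding separates dark blobs from the brighter background
mask = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY_INV, 51, 5)

# Connected components give one entry per candidate nodule
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                      # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]      # size in pixels
    if area > 50:                          # ignore speckle
        cx, cy = centroids[i]
        print(f"nodule candidate at ({cx:.0f}, {cy:.0f}), area {area} px")
```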

Story Source:

Materials provided by Helmholtz Centre for Ocean Research Kiel (GEOMAR). Note: Content may be edited for style and length.

A milestone for forecasting earthquake hazards

Earthquakes pose a profound danger to people and cities worldwide, but with the right hazard-mitigation efforts, from stricter building requirements to careful zoning, the potential for catastrophic collapses of roads and buildings and loss of human lives can be limited.

All of these measures depend on science delivering high-quality seismic hazard models. And yet, current models depend on a list of uncertain assumptions, with predictions that are difficult to test in the real world due to the long intervals between big earthquakes.

Now, a team of researchers from Columbia University’s Lamont-Doherty Earth Observatory, University of Southern California, University of California at Riverside and the U.S. Geological Survey has come up with a physics-based model that marks a turning point in earthquake forecasting. Their results appear in the new issue of Science Advances.

“Whether a big earthquake happens next week or 10 years from now, engineers need to build for the long run,” says the study’s lead author, Bruce Shaw, a geophysicist at Lamont-Doherty. “We now have a physical model that tells us what the long-term hazards are.”

Simulating nearly 500,000 years of California earthquakes on a supercomputer, researchers were able to match hazard estimates from the state’s leading statistical model based on a hundred years of instrumental data. The mutually validating results add support for California’s current hazard projections, which help to set insurance rates and building design standards across the state. The results also suggest a growing role for physics-based models in forecasting earthquake hazard and evaluating competing models in California and other earthquake-prone regions.

The earthquake simulator used in the study, RSQSim, simplifies California’s statistical model by eliminating many of the assumptions that go into estimating the likelihood of an earthquake of a certain size hitting a specific region. The researchers, in fact, were surprised when the simulator, programmed with relatively basic physics, was able to reproduce estimates from a model that has improved steadily for decades. “This shows our simulator is ready for prime time,” says Shaw.

Seismologists can now use RSQSim to test the statistical model’s region-specific predictions. Accurate hazard estimates are especially important to government regulators in high-risk cities like Los Angeles and San Francisco, who write and revise building codes based on the latest science. In a state with a severe housing shortage, regulators are under pressure to make buildings strong enough to withstand heavy shaking while keeping construction costs down. A second tool to confirm hazard estimates gives the numbers added credibility.

“If you can get similar results with different techniques, that builds confidence you’re doing something right,” says study coauthor Tom Jordan, a geophysicist at USC.

A hallmark of the simulator is its use of rate- and state-dependent friction to approximate how real-world faults break and transfer stress to other faults, sometimes setting off even bigger quakes. Developed at UC Riverside more than a decade ago, and refined further in the current study, RSQSim is the first physics-based model to replicate California’s most recent rupture forecast, UCERF3. When results from both models were fed into California’s statistical ground-shaking model, they came up with similar hazard profiles.
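
For readers unfamiliar with the friction law, the sketch below steps a textbook Dieterich-Ruina rate- and state-dependent model (with the aging law for the state variable) through a sudden jump in slip rate; because b > a, the fault weakens as it speeds up. The parameter values are generic laboratory-scale numbers and the time stepping is deliberately simple; this is not RSQSim’s implementation.

```python
import numpy as np

# Textbook rate- and state-dependent friction parameters (generic, not RSQSim's)
mu0, a, b = 0.6, 0.010, 0.015   # reference friction and rate/state sensitivities
V0, Dc = 1e-6, 1e-5             # reference slip rate (m/s) and critical slip distance (m)

def friction(V, theta):
    """Friction coefficient at slip rate V (m/s) and state variable theta (s)."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

# Evolve the state variable through a step increase in slip rate (aging law)
dt, theta, V = 1e-3, Dc / V0, 1e-6
for step in range(20000):
    if step == 10000:
        V = 1e-5                          # sudden 10x jump in slip rate
    theta += dt * (1.0 - V * theta / Dc)  # aging-law state evolution

print(f"friction after velocity step: {friction(V, theta):.4f}")  # lower than mu0: weakening
```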

John Vidale, director of the Southern California Earthquake Center, which helped fund the study, says the new model has created a realistic 500,000-year history of earthquakes along California’s faults for researchers to explore. Vidale predicted the model would improve as computing power grows and more physics are added to the software. “Details such as earthquakes in unexpected places, the evolution of earthquake faults over geological time, and the viscous flow deep under the tectonic plates are not yet built in,” he said.

The researchers plan to use the model to learn more about aftershocks, and how they unfold on California’s faults, and to study other fault systems globally. They are also working on incorporating the simulator into a physics-based ground-motion model, called CyberShake, to see if it can reproduce shaking estimates from the current statistical model.

“As we improve the physics in our simulations and computers become more powerful, we will better understand where and when the really destructive earthquakes are likely to strike,” says study coauthor Kevin Milner, a researcher at USC.

Story Source:

Materials provided by Columbia University. Original written by Kim Martineau. Note: Content may be edited for style and length.

Evaluation method for the impact of wind power fluctuation on power system quality

Abrupt changes in wind power generation output are a source of severe damage to power systems. Researchers at Kyoto University developed a stochastic modeling method that makes it possible to evaluate the impact of such phenomena. The key features of the method are its computational efficiency compared with standard Monte Carlo simulation and its applicability to the analysis and synthesis of various systems subject to extreme outliers.

The introduction of wind power generation into electric power systems is proceeding actively, mainly in the United States and Europe, and is expected to continue in Japan. To implement it, however, it is crucial to deal with the prediction uncertainty of output fluctuation. The fluctuation of wind power generation is usually small, but it becomes extremely large, at a non-negligible frequency, due to gusts and turbulence. Such extreme outliers have been regarded as a source of severe damage to power systems.

To cope with such fluctuations in wind power generation, a deterministic goal such as “keep the frequency fluctuation within 0.2 Hz at all times” would be unattainable or would result in an overly conservative design. Therefore, a probabilistic goal such as “keep the frequency fluctuation within 0.2 Hz with 99.7% probability or more” is indispensable.

Probabilistic uncertainty is usually evaluated statistically by assuming that it follows a normal distribution, for mathematical tractability. The output outliers in wind power generation are, however, more frequent than a normal distribution can represent. Even if a complicated simulator can be constructed without assuming normality, it is not realistic to investigate the statistical properties by Monte Carlo simulation, because the required number of samples explodes before sufficiently many extreme outliers occur.

The researchers developed an evaluation method for the impact of wind power fluctuation on power system quality. The method first builds probabilistic models that assume a stable distribution (an extension of the normal distribution) for the uncertainty. Then, instead of using the model as a simulator to generate data samples, the statistical properties are computed directly from the model’s parameters. The important features are that (1) the influence of extreme outliers can be properly considered, (2) the model can be determined easily from actual data, and (3) the computational cost is very low. The method was shown to be valid through its application to frequency deviation estimation based on actual power system data.
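
The “compute directly from parameters” step can be illustrated with SciPy’s stable distribution. The parameter values and threshold below are assumptions, not quantities fitted to actual Kyoto data; the point is that tail probabilities come straight from the model, with no Monte Carlo sampling, and that a heavy-tailed stable law assigns far more probability to extreme outliers than a Gaussian of comparable spread.

```python
from scipy.stats import levy_stable, norm

# Illustrative parameters only: a heavy-tailed stable law standing in for fitted
# short-term wind-power fluctuations, versus a Gaussian with a comparable spread
alpha, beta, loc, scale = 1.7, 0.0, 0.0, 0.05   # alpha < 2 gives heavy tails
gauss_sigma = 0.08

threshold = 0.4   # fluctuation size (per unit) considered damaging, assumed value

# Tail probabilities computed directly from the model parameters, no sampling needed
p_stable = (levy_stable.sf(threshold, alpha, beta, loc=loc, scale=scale)
            + levy_stable.cdf(-threshold, alpha, beta, loc=loc, scale=scale))
p_normal = 2 * norm.sf(threshold, loc=0.0, scale=gauss_sigma)

print(f"P(|fluctuation| > {threshold}) under the stable model: {p_stable:.2e}")
print(f"P(|fluctuation| > {threshold}) under the normal model: {p_normal:.2e}")
```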

This newly proposed probabilistic evaluation method makes it possible to quantitatively evaluate the power system risk caused by extremely abrupt changes in wind power generation. Countermeasures based on the evaluation would contribute to improving the reliability and economic efficiency of the electric power system. It should also be noted that the proposed method is applicable to the analysis and synthesis of various systems that exhibit extreme outliers.

Story Source:

Materials provided by Japan Science and Technology Agency. Note: Content may be edited for style and length.

Everything big data claims to know about you could be wrong

When it comes to understanding what makes people tick — and get sick — medical science has long assumed that the bigger the sample of human subjects, the better. But new research led by the University of California, Berkeley, suggests this big-data approach may be wildly off the mark.

That’s largely because emotions, behavior and physiology vary markedly from one person to the next and one moment to the next. So averaging out data collected from a large group of human subjects at a given instant offers only a snapshot, and a fuzzy one at that, researchers said.

The findings, published this week in the journal Proceedings of the National Academy of Sciences, have implications for everything from mining social media data to customizing health therapies, and could change the way researchers and clinicians analyze, diagnose and treat mental and physical disorders.

“If you want to know what individuals feel or how they become sick, you have to conduct research on individuals, not on groups,” said study lead author Aaron Fisher, an assistant professor of psychology at UC Berkeley. “Diseases, mental disorders, emotions, and behaviors are expressed within individual people, over time. A snapshot of many people at one moment in time can’t capture these phenomena.”

Moreover, the consequences of continuing to rely on group data in the medical, social and behavioral sciences include misdiagnoses, prescribing the wrong treatments and generally perpetuating scientific theory and experimentation that is not properly calibrated to the differences between individuals, Fisher said.

That said, a fix is within reach: “People shouldn’t necessarily lose faith in medical or social science,” he said. “Instead, they should see the potential to conduct scientific studies as a part of routine care. This is how we can truly personalize medicine.”

Plus, he noted, “modern technologies allow us to collect many observations per person relatively easily, and modern computing makes the analysis of these data possible in ways that were not possible in the past.”

Fisher and fellow researchers at Drexel University in Philadelphia and the University of Groningen in the Netherlands used statistical models to compare data collected on hundreds of people, including healthy individuals and those with disorders ranging from depression and anxiety to post-traumatic stress disorder and panic disorder.

In six separate studies they analyzed data via online and smartphone self-report surveys, as well as electrocardiogram tests to measure heart rates. The results consistently showed that what’s true for the group is not necessarily true for the individual.

For example, a group analysis of people with depression found that they worry a great deal. But when the same analysis was applied to each individual in that group, researchers discovered wide variations that ranged from zero worrying to agonizing well above the group average.

Moreover, in looking at the correlation between fear and avoidance — a common association in group research — they found that for many individuals, fear did not cause them to avoid certain activities, or vice versa.
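
The group-versus-individual mismatch is easy to reproduce in simulation. The sketch below invents a population in which each person has their own fear-avoidance correlation: a cross-sectional “group” snapshot can show almost no relationship even though the individual correlations range from negative to strongly positive. It is a synthetic illustration, not the study’s data or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 people, 100 time points each. Each person has their OWN
# fear-avoidance correlation, drawn to vary widely across people.
n_people, n_time = 200, 100
true_r = rng.uniform(-0.5, 0.9, n_people)        # within-person correlations
person_mean = rng.normal(0, 2, (n_people, 2))    # stable between-person differences

records = []
for i in range(n_people):
    cov = np.array([[1.0, true_r[i]], [true_r[i], 1.0]])
    records.append(rng.multivariate_normal(person_mean[i], cov, size=n_time))
data = np.stack(records)                         # shape: (person, time, variable)

# "Group" analysis: one snapshot per person, correlated across people
snapshot = data[:, 0, :]
group_r = np.corrcoef(snapshot[:, 0], snapshot[:, 1])[0, 1]

# Individual analyses: one correlation per person, across that person's time series
indiv_r = [np.corrcoef(data[i, :, 0], data[i, :, 1])[0, 1] for i in range(n_people)]

print(f"group-level r: {group_r:.2f}")
print(f"individual r:  min {min(indiv_r):.2f}, max {max(indiv_r):.2f}")
```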

“Fisher’s findings clearly imply that capturing a person’s own processes as they fluctuate over time may get us far closer to individualized treatment,” said UC Berkeley psychologist Stephen Hinshaw, an expert in psychopathology and faculty member of the department’s clinical science program.

In addition to Fisher, co-authors of the study are John Medaglia at Drexel University and Bertus Jeronimus at the University of Groningen.

Video of moving discs reconstructed from rat retinal neuron signals

Using machine-learning techniques, a research team has reconstructed a short movie of small, randomly moving discs from signals produced by rat retinal neurons. Vicente Botella-Soler of the Institute of Science and Technology Austria and colleagues present this work in PLOS Computational Biology.

Neurons in the mammalian retina transform light patterns into electrical signals that are transmitted to the brain. Reconstructing light patterns from neuron signals, a process known as decoding, can help reveal what kind of information these signals carry. However, most decoding efforts to date have used simple stimuli and have relied on small numbers (fewer than 50) of retinal neurons.

In the new study, Botella-Soler and colleagues examined a small patch of about 100 neurons taken from the retina of a rat. They recorded the electrical signals produced by each neuron in response to short movies of small discs moving in a complex, random pattern. The researchers used various regression methods to compare their ability to reconstruct a movie one frame at a time, pixel by pixel.

The research team found that a mathematically simple linear decoder produced an accurate reconstruction of the movie. However, nonlinear methods reconstructed the movie more accurately, and two very different nonlinear methods, neural nets and kernelized decoders, performed similarly well.

The researchers demonstrated that, unlike linear decoders, nonlinear methods were sensitive to each neuron’s signal in the context of previous signals from the same neuron. They hypothesized that this history dependence enabled the nonlinear decoders to ignore spontaneous neuron signals that do not correspond to an actual stimulus, while a linear decoder might “hallucinate” stimuli in response to such spontaneously generated neural activity.
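
The comparison of decoder families can be sketched with off-the-shelf regressors. The example below decodes a synthetic one-dimensional stimulus from simulated, saturating “neural” responses using lagged features, contrasting a linear ridge decoder with a kernelized (RBF) one. The data generator, lag length and hyperparameters are all assumptions, not the paper’s recordings or models, so the relative scores only indicate the kind of comparison involved.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in for the experiment: a smoothed 1-D stimulus drives 20
# "neurons" through a saturating nonlinearity plus noise. Not real retinal data.
T, n_neurons = 3000, 20
stimulus = np.convolve(rng.normal(size=T), np.ones(10) / 10, mode="same")
weights = rng.normal(size=n_neurons)
rates = np.tanh(np.outer(stimulus, weights)) + 0.3 * rng.normal(size=(T, n_neurons))

# Decoding features: each neuron's current response plus its recent history (lags)
lags = 5
X = np.hstack([np.roll(rates, k, axis=0) for k in range(lags)])[lags:]
y = stimulus[lags:]
split = 2000
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

linear = Ridge(alpha=1.0).fit(Xtr, ytr)
nonlinear = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01).fit(Xtr, ytr)

print("linear decoder R^2:    ", round(r2_score(yte, linear.predict(Xte)), 3))
print("kernelized decoder R^2:", round(r2_score(yte, nonlinear.predict(Xte)), 3))
```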

These findings could pave the way to improved decoding methods and better understanding of what different types of retinal neurons do and why they are needed. As a next step, Botella-Soler and colleagues will investigate how well decoders trained on a new class of synthetic stimuli might generalize to both simpler as well as naturally complex stimuli.

“I hope that our work showcases that with sufficient attention to experimental design and computational exploration, it is possible to open the box of modern statistical and machine learning methods and actually interpret which features in the data give rise to their extra predictive power,” says study senior author Gasper Tkacik. “This is the path to not only reporting better quantitative performance, but also extracting new insights and testable hypotheses about biological systems.”

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

What is a species? Bird expert develops a math formula to solve the problem

Nature is replete with examples of identifiable populations known from different continents, mountain ranges, islands or lowland regions. While, traditionally, many of these have been treated as subspecies of widely-ranging species, recent studies relying on molecular biology have shown that many former “subspecies” have in fact been isolated for millions of years, which is long enough for them to have evolved into separate species.

Telling species apart from subspecies across faunal groups is a controversial matter in taxonomy — the science of classification — but a crucial one. Given limited resources for conservation, relevant authorities tend to concern themselves only with threatened species, and their efforts rarely extend to subspecies.

Figuring out whether co-occurring populations belong to the same species is only as hard as testing whether they can interbreed or produce fertile offspring. Whenever distinct populations are geographically separated, however, taxonomists often struggle to determine whether they represent different species or merely subspecies of a more widely ranging species.

British bird expert Thomas Donegan has dedicated much of his life to studying birds in South America, primarily Colombia. To address the age-old question of “what is a species?,” he applied a variety of statistical tests, based on data derived from bird specimens and sound recordings, to measure differences across more than 3,000 pairwise comparisons of different variables between populations.

Having analyzed the outcomes of these tests, he developed a new universal formula for determining what can be considered as a species. His study is published in the open-access journal ZooKeys.

Essentially, the equation works by measuring differences across multiple variables between two non-co-occurring populations, and then comparing them with the same measurements for two related populations which do occur together and evidently belong to different “good” species. If the non-co-occurring pair’s differences exceed those of the good-species pair, then the former can be ranked as separate species. If not, they are treated as subspecies of the same species.
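
The benchmark logic (score the divergence of a non-co-occurring pair against the divergence of a co-occurring pair of accepted species) can be sketched as below. The invented measurements and the simple summed effect-size score are stand-ins chosen for illustration; they are not Donegan’s published formula or variables.

```python
import numpy as np

def effect_sizes(pop_a, pop_b):
    """Cohen's d for each measured variable (columns) between two populations."""
    mean_diff = pop_a.mean(axis=0) - pop_b.mean(axis=0)
    pooled_sd = np.sqrt((pop_a.var(axis=0, ddof=1) + pop_b.var(axis=0, ddof=1)) / 2)
    return np.abs(mean_diff / pooled_sd)

rng = np.random.default_rng(7)

# Invented measurements (wing length, bill length, song pitch) for illustration
benchmark_sp1 = rng.normal([60, 15, 4.0], [2, 1, 0.3], size=(30, 3))  # good species A
benchmark_sp2 = rng.normal([66, 17, 5.0], [2, 1, 0.3], size=(30, 3))  # good species B, sympatric with A
candidate_x   = rng.normal([58, 14, 3.8], [2, 1, 0.3], size=(30, 3))  # allopatric population X
candidate_y   = rng.normal([65, 16, 4.9], [2, 1, 0.3], size=(30, 3))  # allopatric population Y

benchmark = effect_sizes(benchmark_sp1, benchmark_sp2)
candidate = effect_sizes(candidate_x, candidate_y)

# Rank the allopatric pair as species only if its combined divergence
# meets or exceeds that of the co-occurring good-species pair
if candidate.sum() >= benchmark.sum():
    print("divergence at or above the good-species benchmark: rank as species")
else:
    print("divergence below the good-species benchmark: treat as subspecies")
```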

The formula builds on existing good taxonomic practice and borrows the best aspects of mathematical models previously proposed for assessing species in particular groups, bringing them together into a single coherent structure and formula that can be applied to any taxonomic group. It is, however, presented as a benchmark rather than a hard test, to be used together with other data, such as analyses of molecular data.

Thomas hopes that his mathematical formula for species rank assessments will help eliminate some of the subjectivity, regional bias and lumper-splitter conflicts which currently pervade the discipline of taxonomy.

“If this new approach is used, then it should introduce more objectivity to taxonomic science and ultimately mean that limited conservation resources are addressed towards threatened populations which are truly distinct and most deserving of our concern,” he says.

The problem with ranking populations that do not co-occur was first identified back in 1904. Since then, most approaches to such issues have been subjective or arbitrary, or have relied heavily upon expert opinion or historical momentum, rather than any objectively defensible or consistent framework.

For example, the American Herring Gull and the European Herring Gull are lumped by some current taxonomic committees into the same species (Herring Gull), or are split into two species by other committees dealing with different regions, simply because relevant experts at those committees have taken different views on the issue.

“For tropical faunas, there are thousands of distinctive populations currently treated as subspecies and which are broadly ignored in conservation activities,” explains Thomas. “Yet, some of these may be of conservation concern. This new framework should help us better to identify and prioritize those situations.”

Story Source:

Materials provided by Pensoft Publishers. Note: Content may be edited for style and length.

Plan for quantum supremacy

Things are getting real for researchers in the UC Santa Barbara John Martinis/Google group. They are making good on their intentions to declare supremacy in a tight global race to build the first quantum machine to outperform the world’s best classical supercomputers.

But what is quantum supremacy in a field where horizons are being widened on a regular basis, in which teams of the brightest quantum computing minds in the world routinely up the ante on the number and type of quantum bits (“qubits”) they can build, each with their own range of qualities?

“Let’s define that, because it’s kind of vague,” said Google researcher Charles Neill. Simply put, he continued, “we would like to perform an algorithm or computation that couldn’t be done otherwise. That’s what we actually mean.”

Neill is lead author of the group’s new paper, “A blueprint for demonstrating quantum supremacy with superconducting qubits,” now published in the journal Science.

Fortunately, nature offers up many such complex situations, in which the variables are so numerous and interdependent that classical computers can’t hold all the values and perform the operations. Think chemical reactions, fluid interactions, even quantum phase changes in solids and a host of other problems that have daunted researchers in the past. Something on the order of at least 49 qubits — roughly equivalent to a petabyte (one million gigabytes) of classical random access memory — could put a quantum computer on equal footing with the world’s supercomputers. Just recently, Neill’s Google/Martinis colleagues announced an effort toward quantum supremacy with a 72-qubit chip possessing a “bristlecone” architecture that has yet to be put through its paces.

But according to Neill, it’s more than the number of qubits on hand.

“You have to generate some sort of evolution in the system which leads you to use every state that has a name associated with it,” he said. The power of quantum computing lies in, among other things, the superposition of states. In classical computers, each bit can exist in one of two states — zero or one, off or on, true or false — but qubits can exist in a third state that is a superposition of both zero and one, raising exponentially the number of possible states a quantum system can explore.

Additionally, say the researchers, fidelity is important, because massive processing power is not worth much if it’s not accurate. Decoherence is a major challenge for anyone building a quantum computer — perturb the system, the information changes. Wait a few hundredths of a second too long, the information changes again.

“People might build 50 qubit systems, but you have to ask how well it computed what you wanted it to compute,” Neill said. “That’s a critical question. It’s the hardest part of the field.” Experiments with their superconducting qubits have demonstrated an error rate of one percent per qubit with three- and nine-qubit systems, which, they say, can be reduced as they scale up, via improvements in hardware, calibration, materials, architecture and machine learning.

Building a qubit system complete with error correction components — the researchers estimate a range of 100,000 to a million qubits — is doable and part of the plan. And still years away. But that doesn’t mean their system isn’t already capable of doing some heavy lifting. Just recently it was deployed, with spectroscopy, on the issue of many-body localization in a quantum phase change — a quantum computer solving a quantum statistical mechanics problem. In that experiment, the nine-qubit system became a quantum simulator, using photons bouncing around in their array to map the evolution of electrons in a system of increasing, yet highly controlled, disorder.

“A good reason why our fidelity was so high is because we’re able to reach complex states in very little time,” Neill explained. The more quickly a system can explore all possible states, the better the prediction of how the system will evolve, he said.

If all goes smoothly, the world should be seeing a practicable UCSB/Google quantum computer soon. The researchers are eager to put it through its paces, gaining answers to questions that were once accessible only through theory, extrapolation and highly educated guessing — and opening up a whole new level of experiments and research.

“It’s definitely very exciting,” said Google researcher Pedram Roushan, who led the many-body quantum simulation work published in Science in 2017. They expect their early work to stay close to home, such as research in condensed matter physics and quantum statistical mechanics, but they plan to branch out to other areas, including chemistry and materials, as the technology becomes more refined and accessible.

“For instance, knowing whether or not a molecule would form a bond or react in some other way with another molecule for some new technology… there are some important problems that you can’t roughly estimate; they really depend on details and very strong computational power,” Roushan said, hinting that a few years down the line they may be able to provide wider access to this computing power. “So you can get an account, log in and explore the quantum world.”