Coincidence helps with quantum measurements


Quantum phenomena are experimentally difficult to deal with, and the effort increases dramatically with the size of the system. For some years now, scientists have been able to control small quantum systems and investigate their quantum properties. Such quantum simulations are considered promising early applications of quantum technologies that could solve problems where simulations on conventional computers fail. However, the quantum systems used as quantum simulators must continue to grow, and the entanglement of many particles is still a phenomenon that is difficult to understand. “In order to operate a quantum simulator consisting of ten or more particles in the laboratory, we must characterize the states of the system as accurately as possible,” explains Christian Roos from the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences.

So far, quantum state tomography has been used to characterize quantum states, allowing the system to be completely described. This method, however, involves a very high measurement and computational effort and is not suitable for systems of more than half a dozen particles. Two years ago, researchers led by Christian Roos, together with colleagues from Germany and Great Britain, presented a very efficient method for the characterization of complex quantum states. However, only weakly entangled states can be described with it. This limitation is circumvented by a new method presented last year by theorists led by Peter Zoller, which can be used to characterize any entangled state. Together with experimental physicists Rainer Blatt and Christian Roos and their team, they have now demonstrated this method in the laboratory.

Quantum simulations on larger systems

“The new method is based on the repeated measurement of randomly selected transformations of individual particles. The statistical evaluation of the measurement results then provides information about the degree of entanglement of the system,” explains Andreas Elben from Peter Zoller’s team. The Austrian physicists demonstrated the process in a quantum simulator consisting of several ions arranged in a row in a vacuum chamber. Starting from a simple state, the researchers let the individual particles interact with the help of laser pulses and thus generate entanglement in the system. “We perform 500 local transformations on each ion and repeat the measurements a total of 150 times in order to then be able to use statistical methods to determine information about the entanglement state from the measurement results,” explains PhD student Tiff Brydges from the Institute of Quantum Optics and Quantum Information.
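The statistical idea behind such randomized measurements can be sketched in a few lines. The following is a toy illustration in the spirit of the protocol described above, not the team's actual analysis code: it assumes a tiny two-qubit register and perfectly known outcome probabilities (no shot noise), and all function names are invented for this sketch. Applying independent Haar-random single-qubit rotations and correlating the resulting outcome distributions recovers the state's purity Tr(ρ²), a basic measure of how mixed or entangled a subsystem is.

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix,
    # with the phases of R's diagonal absorbed to make the distribution uniform
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def estimate_purity(rho, n_qubits, n_unitaries, rng):
    """Estimate Tr(rho^2) from randomized local measurements:
    average 2^N * sum_{s,s'} (-2)^(-Hamming(s,s')) P(s) P(s') over random
    products of single-qubit Haar unitaries."""
    dim = 2 ** n_qubits
    basis = np.arange(dim)
    hamming = np.array([[bin(a ^ b).count("1") for b in basis] for a in basis])
    weights = (-2.0) ** (-hamming)
    total = 0.0
    for _ in range(n_unitaries):
        u = haar_unitary(2, rng)
        for _ in range(n_qubits - 1):
            u = np.kron(u, haar_unitary(2, rng))     # independent rotation per qubit
        p = np.real(np.diag(u @ rho @ u.conj().T))   # exact outcome probabilities
        total += dim * (p @ weights @ p)
    return total / n_unitaries
```

For a pure state the estimate converges to 1 as the number of random unitaries grows; for the maximally mixed two-qubit state it returns exactly 1/4.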

In the work now published in Science, the Innsbruck physicists characterize the dynamical evolution of a system of ten ions, as well as a ten-ion subsystem of a 20-ion chain. “In the laboratory, this new method helps us a lot because it enables us to understand our quantum simulator even better and, for example, to assess the purity of the entanglement more precisely,” says Christian Roos, who assumes that the new method can be successfully applied to quantum systems with up to several dozen particles.

The scientific work was published in the journal Science and financially supported by the European Research Council ERC and the Austrian Science Fund FWF. “This publication shows once again the fruitful cooperation between the theoretical physicists and the experimental physicists here in Innsbruck,” emphasizes Peter Zoller. “At the University of Innsbruck and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, young researchers from both fields find very good conditions for research work that is competitive worldwide.”

Story Source:

Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

Study finds natural variation in sex ratios at birth and sex ratio inflation in 12 countries

An international team of researchers, led by UMass Amherst biostatistician Leontine Alkema and her former Ph.D. student Fengqing Chao, developed a new estimation method for assessing natural variations in the sex ratio at birth (SRB) for all countries in the world.

In the study published Monday, April 15 in Proceedings of the National Academy of Sciences (PNAS), the researchers found natural variation in regional baseline SRBs that differ from the previously held standard baseline male-to-female ratio of 1.05 for most regions.

They also identified 12 countries with strong evidence of sex ratio at birth imbalances, or sex ratio inflation, due to sex-selective abortions and a preference for sons.

“Given that sex ratios at birth are still inflated in some countries and could increase in the future in other countries, the monitoring of the sex ratio at birth and how it compares with expected baseline levels is incredibly important to inform policy and programs when sex ratio inflations are detected,” says Alkema, associate professor in the School of Public Health and Health Sciences.

Alkema adds, “While prior studies have shown differences in the sex ratio at birth, for example based on ethnicity in population subgroups, there is no prior study that we know of that has assessed regional baseline levels of the SRB. When we did the estimation exercise, after excluding data that may have been affected by masculinization of the sex ratio at birth, we found that regional levels differed from the commonly assumed 1.05 in several regions.”

Estimated regional reference levels ranged from 1.031 in sub-Saharan Africa to 1.063 in southeastern Asia and eastern Asia, and 1.067 in Oceania.

Alkema regularly collaborates with United Nations agencies to develop statistical models to assess and interpret demographic and population-level health trends globally. For this study, along with Alkema at UMass Amherst, researchers at the National University of Singapore and the United Nations Population Division compiled a database from vital registries, censuses and surveys comprising 10,835 observations, representing 16,602 country-years of information from 202 countries. They developed Bayesian statistical methods to estimate the sex ratio at birth for all countries from 1950 to 2017.

“We found that the SRB imbalance in 12 countries since 1970 is associated with 23.1 million fewer female births than expected,” Alkema says.

The majority of the missing female births were in China, with 11.9 million, and India, with 10.6 million. The other countries identified with SRB imbalance were Albania, Armenia, Azerbaijan, Georgia, Hong Kong, Republic of Korea, Montenegro, Taiwan, Tunisia and Vietnam.
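The arithmetic behind "missing female births" is simple: given the observed male births and a baseline sex ratio at birth, the expected number of female births is males divided by the SRB, and the deficit is expected minus observed. A minimal sketch (the function name and the figures in the example are illustrative, not the study's):

```python
def missing_female_births(male_births, female_births, baseline_srb=1.05):
    """Deficit of female births relative to a baseline sex ratio at birth.

    baseline_srb is expressed as male births per female birth, so the
    expected female count is male_births / baseline_srb.
    """
    expected_females = male_births / baseline_srb
    return expected_females - female_births
```

Note that the study's key refinement is to estimate region-specific baselines (e.g. 1.031 for sub-Saharan Africa, 1.067 for Oceania) rather than assuming 1.05 everywhere, so the appropriate `baseline_srb` varies by region.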

Story Source:

Materials provided by University of Massachusetts at Amherst. Note: Content may be edited for style and length.

Scientists build a machine to see all possible futures


In the 2018 movie Avengers: Infinity War, a scene features Dr. Strange looking into 14 million possible futures to search for a single timeline in which the heroes would be victorious. Perhaps he would have had an easier time with help from a quantum computer. A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) and Griffith University in Australia has constructed a prototype quantum device that can generate all possible futures in a simultaneous quantum superposition.

“When we think about the future, we are confronted by a vast array of possibilities,” explains Assistant Professor Mile Gu of NTU Singapore, who led development of the quantum algorithm that underpins the prototype. “These possibilities grow exponentially as we go deeper into the future. For instance, even if we have only two possibilities to choose from each minute, in less than half an hour there are 14 million possible futures. In less than a day, the number exceeds the number of atoms in the universe.” What he and his research group realised, however, was that a quantum computer can examine all possible futures by placing them in a quantum superposition — similar to Schrödinger’s famous cat, which is simultaneously alive and dead.
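The counting in the quote is plain exponential growth: with two possibilities per minute, t minutes yield 2^t branches. A quick sanity check of the figures (the atoms-in-the-universe number is the common order-of-magnitude estimate, not from the article):

```python
def branches(minutes, choices_per_minute=2):
    # Number of distinct futures after the given number of binary choices
    return choices_per_minute ** minutes

# "in less than half an hour there are 14 million possible futures"
assert branches(24) > 14_000_000          # 2**24 = 16,777,216 within 24 minutes

# "in less than a day, the number exceeds the number of atoms in the universe"
atoms_in_universe = 10 ** 80              # common order-of-magnitude estimate
assert branches(24 * 60) > atoms_in_universe   # 2**1440 is roughly 10**433
```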

To realise this scheme, they joined forces with the experimental group led by Professor Geoff Pryde at Griffith University. Together, the team implemented a specially devised photonic quantum information processor in which the potential future outcomes of a decision process are represented by the locations of photons — quantum particles of light. They then demonstrated that the state of the quantum device was a superposition of multiple potential futures, weighted by their probability of occurrence.

“The functioning of this device is inspired by the Nobel Laureate Richard Feynman,” says Dr Jayne Thompson, a member of the Singapore team. “When Feynman started studying quantum physics, he realized that when a particle travels from point A to point B, it does not necessarily follow a single path. Instead, it simultaneously traverses all possible paths connecting the points. Our work extends this phenomenon and harnesses it for modelling statistical futures.”

The machine has already demonstrated one application — measuring how much our bias towards a specific choice in the present impacts the future. “Our approach is to synthesise a quantum superposition of all possible futures for each bias,” explains Farzad Ghafari, a member of the experimental team. “By interfering these superpositions with each other, we can completely avoid looking at each possible future individually. In fact, many current artificial intelligence (AI) algorithms learn by seeing how small changes in their behaviour can lead to different future outcomes, so our techniques may enable quantum-enhanced AIs to learn the effect of their actions much more efficiently.”

The team notes that while their present prototype simulates at most 16 futures simultaneously, the underlying quantum algorithm can in principle scale without bound. “This is what makes the field so exciting,” says Pryde. “It is very much reminiscent of classical computers in the 1960s. Just as few could imagine the many uses of classical computers then, we are still very much in the dark about what quantum computers can do. Each discovery of a new application provides further impetus for their technological development.”

Story Source:

Materials provided by Nanyang Technological University, College of Science. Note: Content may be edited for style and length.

The cost of computation

For decades, physicists have wrestled with understanding the thermodynamic cost of manipulating information, what we would now call computing. How much energy does it take, for example, to erase a single bit from a computer? What about more complicated operations? These are pressing, practical questions, as artificial computers are energy hogs, claiming an estimated four percent of total energy consumed in the United States.
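The textbook answer to the bit-erasure question is Landauer's principle: erasing one bit must dissipate at least kT ln 2 of heat, where k is Boltzmann's constant and T the temperature. The article doesn't quote the number, but it is easy to work out at room temperature:

```python
from math import log

k_B = 1.380649e-23                    # Boltzmann constant, J/K (exact SI value)
T = 300.0                             # room temperature, K
landauer_joules = k_B * T * log(2)    # minimum heat dissipated per erased bit
# roughly 2.9e-21 J per bit
```

Practical hardware dissipates many orders of magnitude more than this floor, which is part of what makes the efficiency comparisons with biological computers below so striking.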

These questions are not limited to the digital machines constructed by us. The human brain can be seen as a computer — one that gobbles an estimated 10 to 20 percent of all the calories a person consumes. Living cells, too, can be viewed as computers, but computers that “are many orders of magnitude more efficient” than any laptop or smartphone humans have constructed, says David Wolpert of the Santa Fe Institute.

Wolpert, a mathematician, physicist, and computer scientist, has been on the frontlines of a rapid resurgence of interest in a deep understanding of the energy cost of computing. That research is now hitting its stride, thanks to revolutionary tools recently developed in statistical physics for understanding the thermodynamic behavior of nonequilibrium systems. These tools matter because computers are decidedly nonequilibrium systems. (Unplug your laptop, wait for it to reach equilibrium, and then see if it still works.) Although Wolpert primarily approaches these issues using tools from computer science and physics, there is also sharp interest from researchers in other areas, including those who study chemical reactions, cellular biology, and neurobiology.

However, research in nonequilibrium statistical physics largely happens in silos, says Wolpert. In a review published today in the Journal of Physics A, Wolpert collects recent advances in understanding the thermodynamics of computation that are grounded in computer science and physics. The review functions as a sort of state-of-the-science report for a burgeoning interdisciplinary investigation.

“It is basically a snapshot of the current state of the fields, where these ideas are starting to explode, in all directions,” says Wolpert.

In the paper, Wolpert first summarizes the relevant theoretical ideas from physics and computer science. He then discusses what’s known about the entropic cost of a range of computations, from erasing a single bit to running a Turing machine. He goes on to show how breakthroughs in nonequilibrium statistical physics have enabled researchers to more formally probe those cases — moving far beyond simple bit erasure.

Wolpert also touches on questions raised by this recent research that suggest real-world challenges, like how to design algorithms with energy conservation in mind. Can biological systems, for example, serve as inspiration for designing computers with minimal thermodynamic cost?

“We are being surprised and astonished in many ways,” Wolpert says. In putting together the review, and coediting a book on the topic due out later this year, “we’ve uncovered phenomena that no one has analyzed before that were very natural to us, as we pursue this modern version of the thermodynamics of computation.”

Story Source:

Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

Artificial intelligence can predict premature death, study finds


Computers which are capable of teaching themselves to predict premature death could greatly improve preventative healthcare in the future, suggests a new study by experts at the University of Nottingham.

The team of healthcare data scientists and doctors have developed and tested a system of computer-based ‘machine learning’ algorithms to predict the risk of early death due to chronic disease in a large middle-aged population.

They found this AI system was very accurate in its predictions and performed better than the current standard approach developed by human experts. The study is published by PLOS ONE in a special collection, “Machine Learning in Health and Biomedicine.”

The team used health data from just over half a million people aged between 40 and 69 recruited to the UK Biobank between 2006 and 2010 and followed up until 2016.

Leading the work, Assistant Professor of Epidemiology and Data Science, Dr Stephen Weng, said: “Preventative healthcare is a growing priority in the fight against serious diseases so we have been working for a number of years to improve the accuracy of computerised health risk assessment in the general population. Most applications focus on a single disease area but predicting death due to several different disease outcomes is highly complex, especially given environmental and individual factors that may affect them.

“We have taken a major step forward in this field by developing a unique and holistic approach to predicting a person’s risk of premature death by machine-learning. This uses computers to build new risk prediction models that take into account a wide range of demographic, biometric, clinical and lifestyle factors for each individual assessed, even their dietary consumption of fruit, vegetables and meat per day.

“We mapped the resulting predictions to mortality data from the cohort, using Office for National Statistics death records, the UK cancer registry and ‘hospital episodes’ statistics. We found machine-learned algorithms were significantly more accurate in predicting death than the standard prediction models developed by a human expert.”

The AI machine learning models used in the new study are known as ‘random forest’ and ‘deep learning’. These were pitted against the traditionally used ‘Cox regression’ prediction model based on age and gender — found to be the least accurate at predicting mortality — and also a multivariate Cox model which worked better but tended to over-predict risk.
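Models like these are typically compared on discrimination: how reliably the model ranks the people who died ahead of those who survived, summarized by the C-statistic (equivalent to the AUC in the binary case). A minimal self-contained sketch of that metric follows; it is illustrative only, with invented names and toy data, and it ignores the censoring that real survival analyses such as this study must handle.

```python
import numpy as np

def c_statistic(risk_scores, outcomes):
    """Probability that a randomly chosen case (outcome=1) is assigned a
    higher risk score than a randomly chosen non-case, with ties counted
    as half. Equivalent to the area under the ROC curve."""
    scores = np.asarray(risk_scores, dtype=float)
    y = np.asarray(outcomes, dtype=bool)
    pos, neg = scores[y], scores[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A value of 0.5 means the model ranks no better than chance; 1.0 means every case outranks every non-case.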

Professor Joe Kai, one of the clinical academics working on the project, said: “There is currently intense interest in the potential to use ‘AI’ or ‘machine-learning’ to better predict health outcomes. In some situations we may find it helps, in others it may not. In this particular case, we have shown that with careful tuning, these algorithms can usefully improve prediction.

“These techniques can be new to many in health research, and difficult to follow. We believe that by clearly reporting these methods in a transparent way, this could help with scientific verification and future development of this exciting field for health care.”

This new study builds on previous work by the Nottingham team which showed that four different AI algorithms, ‘random forest’, ‘logistic regression’, ‘gradient boosting’ and ‘neural networks’, were significantly better at predicting cardiovascular disease than an established algorithm used in current cardiology guidelines.

The Nottingham researchers predict that AI will play a vital part in the development of future tools capable of delivering personalised medicine, tailoring risk management to individual patients. Further research will involve verifying and validating these algorithms in other population groups and exploring ways to implement the systems into routine healthcare.

Story Source:

Materials provided by University of Nottingham. Note: Content may be edited for style and length.

New computational tool could change how we study pathogens


A sophisticated new analysis tool developed by Florida State University scientists may signal a new era in the study of population genetics.

Their model, which incorporates advanced mathematical strategies, could help revolutionize the way researchers investigate the spread and distribution of dangerous, fast-evolving disease vectors.

The breakthrough research was an interdisciplinary collaboration between postdoctoral mathematician Somayeh Mashayekhi and computational biologist Peter Beerli, both in FSU’s Department of Scientific Computing. Their findings were published in the journal Proceedings of the National Academy of Sciences.

“Ours is the first application of fractional calculus to population genetics,” Beerli said. “This will help us to give better estimates of quantities that may be important to combat pathogens.”

The team’s model, called the f-coalescent for its novel use of fractional calculus, follows in the lineage of a similar but more limited model called the n-coalescent. Proposed by the British mathematician John Kingman in 1982, the n-coalescent allowed scientists to make statistical statements about a population’s past using data collected in the present.

“The n-coalescent introduced a retrospective view of relationships among individuals,” Beerli said.

It allowed researchers to use genomic samples from a population to make probabilistic statements about the origins of different gene variants within that population. This gave scientists unprecedentedly rigorous insight into the scenarios and interactions that helped shape variability in a species over time.
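Kingman's n-coalescent is simple to simulate: looking backward in time with k lineages remaining, the waiting time until the next pair finds a common ancestor is exponentially distributed with rate k(k-1)/2 in coalescent time units. The sketch below is a minimal illustration of that classic model, not the FSU code, which generalizes it with fractional calculus:

```python
import random

def kingman_waiting_times(n, rng=None):
    """Waiting times between coalescent events for a sample of n lineages.

    While k lineages remain, the next coalescence occurs after an
    Exponential(k*(k-1)/2) waiting time; returns the n-1 intervals from
    the present back to the most recent common ancestor.
    """
    rng = rng or random.Random()
    return [rng.expovariate(k * (k - 1) / 2) for k in range(n, 1, -1)]
```

One classical consequence: the expected depth of the tree (time back to the common ancestor) is 2(1 - 1/n), so it approaches 2 no matter how large the sample grows.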

But for all its groundbreaking theoretical advantages, the n-coalescent had one major hindrance: The model operated under the assumption that populations are homogeneous. That is, it assumed each individual shared identical experiences, with the same adversities that threaten their survival and the same benefits that give them a competitive leg up.

This is where the FSU team’s new f-coalescent advances on its predecessor. Their model allows for increased environmental heterogeneity, specifically in location and time intervals. These allowances help yield clearer pictures of when different genetic variations emerge — information that is critical in the analysis of pathogens that evolve rapidly in response to different environments.

In their study, Beerli and Mashayekhi applied the f-coalescent to three real datasets: mitochondria sequence data of humpback whales, mitochondrial data of a malaria parasite and the complete genome data of an H1N1 influenza virus strain.

They found that while environmental heterogeneity seemed to have little effect in the humpback whale dataset, the influenza and malaria data suggested that heterogeneity should be considered when evaluating pathogens that evolve quickly due to changing selection pressures.

“Heterogeneity has effects on the timing in the genealogy,” Mashayekhi said. “The f-coalescent will result in better estimates of this timing, which will lead to important changes in the analysis of pathogens.”

While the f-coalescent offers a promising new method for improving our understanding of the variable and dynamic development of these pathogens, researchers said the model needs to be broadened even more in order to account for the many factors that can influence shifting populations.

“We need to expand our theory beyond a single population and include immigration in the model,” Beerli said. “Only then can we attack problems such as changes in the distribution of influenza or other fast-evolving pathogens.”

The research was funded by the National Science Foundation.

Story Source:

Materials provided by Florida State University. Original written by Zachary Boehm. Note: Content may be edited for style and length.

How measurable is online advertising?

Researchers from Northwestern University and Facebook in March published new research in the INFORMS journal Marketing Science that sheds light on whether common approaches for online advertising measurement are as reliable and accurate as the “gold standard” of large-scale, randomized experiments.

The study to be published in the March edition of the INFORMS journal Marketing Science is titled “A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook,” and is authored by Brett Gordon of Northwestern University; Florian Zettelmeyer of Northwestern University and the National Bureau of Economic Research; and Neha Bhargava and Dan Chapsky of Facebook.

“Our findings suggest that commonly used observational approaches that rely on data usually available to advertisers often fail to accurately measure the true effect of advertising,” said Brett Gordon.

Observational approaches are those that encompass a broad class of statistical models that rely on the data “as they are,” generated without explicit manipulation through a randomized experiment.

“We found a significant difference in the ad effectiveness obtained from randomized control trials and those observational methods that are frequently used by advertisers to evaluate their campaigns,” added Zettelmeyer. “Generally, the current and more common methods overestimate ad effectiveness relative to what we found in our randomized tests. Though in some cases, they significantly underestimate effectiveness.”

Measuring the effectiveness of advertising remains an important problem for many firms. A key question is whether an advertising campaign produced incremental outcomes: did more consumers purchase because they saw an ad, or would many of those consumers have purchased even in the absence of the ad? Obtaining an accurate measure of incremental outcomes (“conversions”) helps an advertiser calculate the return on investment (ROI) of the campaign.

“Digital platforms that carry advertising, such as Facebook, have created comprehensive means to assess ad effectiveness, using granular data that link ad exposures, clicks, page visits, online purchases and even offline purchases,” said Gordon. “Still, even with these data, measuring the causal effect of advertising requires the proper experimentation platform.”

The study authors used data from 15 U.S. advertising experiments at Facebook comprising 500 million user-experiment observations and 1.6 billion ad impressions.

Facebook’s “conversion lift” experimentation platform provides advertisers with the ability to run randomized controlled experiments to measure the causal effect of an ad campaign on consumer outcomes.

These experiments randomly allocate users to a control group, who are never exposed to the ad, and to a test group, who are eligible to see the ad. Comparing outcomes between the groups provides the causal effect of the ad because randomization ensures the two groups are, on average, equivalent except for advertising exposures in the test group. The experimental results from each ad campaign served as a baseline with which to evaluate common observational methods.

Observational methods compare outcomes between users who were exposed to the ad and users who were not. These two groups tend to differ systematically in many ways, such as age and gender. Such differences in characteristics may be observable, because the advertiser (or its advertising platform) often has access to data on them: in addition to knowing the gender and age of an online user, it is possible to observe the type of device being used, the location of the user, how long it has been since the user last visited, and so on. The tricky part is that the exposed and unexposed groups may also differ in ways that are very difficult to measure, such as a user’s underlying affinity for the brand. To say that the ad “caused” an effect requires the researcher to account for both observed and unobserved differences between the two groups. Observational methods use data on the observed characteristics of users in an attempt to adjust for both.
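The confounding problem is easy to reproduce in a toy simulation. In the sketch below, every number is invented for illustration: an unobserved "brand affinity" drives both ad exposure and purchase, so the naive exposed-versus-unexposed comparison substantially overstates a true lift of 2 percentage points, while coin-flip randomization recovers it.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
sigmoid = lambda z: 1 / (1 + np.exp(-z))

affinity = rng.normal(size=n)     # unobserved brand affinity (the confounder)
true_lift = 0.02                  # causal effect of seeing the ad

def conversions(exposed):
    # Baseline purchase probability rises with affinity; the ad adds true_lift
    p = sigmoid(affinity - 3) + true_lift * exposed
    return rng.random(n) < p

# Observational world: high-affinity users are more likely to be exposed
exposed_obs = rng.random(n) < sigmoid(affinity)
conv_obs = conversions(exposed_obs)
naive_lift = conv_obs[exposed_obs].mean() - conv_obs[~exposed_obs].mean()

# Randomized experiment: exposure assigned by a fair coin flip
exposed_rct = rng.random(n) < 0.5
conv_rct = conversions(exposed_rct)
rct_lift = conv_rct[exposed_rct].mean() - conv_rct[~exposed_rct].mean()
```

Here `naive_lift` comes out several times larger than `rct_lift`, purely because the exposed group was predisposed to buy, which is exactly the failure mode the study documents for observational methods.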

“We set out to determine whether, as commonly believed, current observational methods using comprehensive individual-level data are ‘good enough’ for ad measurement,” said Zettelmeyer. “What we found was that even fairly comprehensive data prove inadequate to yield reliable estimates of advertising effects.”

“In principle, we believe that using large-scale randomized controlled trials to evaluate advertising effectiveness should be the preferred method for advertisers whenever possible.”

Calling time on ‘statistical significance’ in science research

Scientists should stop using the term ‘statistically significant’ in their research, urges this editorial in a special issue of The American Statistician published today.

The issue, Statistical Inference in the 21st Century: A World Beyond P<0.05, calls for an end to the practice of using a probability value (p-value) of less than 0.05 as strong evidence against a null hypothesis or a value greater than 0.05 as strong evidence favoring a null hypothesis. Instead, p-values should be reported as continuous quantities and described in language stating what the value means in the scientific context.
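Reporting a p-value as a continuous quantity is mechanically trivial; the point is what gets communicated. A small sketch for a normal test statistic (the z-values are hypothetical) shows why thresholding at 0.05 is arbitrary: two nearly identical results land on opposite sides of the line.

```python
from math import erfc, sqrt

def two_sided_p(z):
    # Two-sided p-value for a standard-normal test statistic:
    # P(|Z| >= |z|) = erfc(|z| / sqrt(2))
    return erfc(abs(z) / sqrt(2))

p_a = two_sided_p(1.95)   # about 0.051 — "not significant" under the threshold
p_b = two_sided_p(1.97)   # about 0.049 — "significant", yet nearly identical evidence
```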

Containing 43 papers by statisticians from around the world, the special issue is expected to lead to a major rethinking of statistical inference by initiating a process that ultimately moves statistical science — and science itself — into a new age.

In the issue’s editorial, Dr. Ronald Wasserstein, Executive Director of the ASA, Dr. Allen Schirm, retired from Mathematica Policy Research, and Professor Nicole Lazar of the University of Georgia said: “Based on our review of the articles in this special issue and the broader literature, we conclude that it is time to stop using the term ‘statistically significant’ entirely.

“No p-value can reveal the plausibility, presence, truth, or importance of an association or effect. Therefore, a label of statistical significance does not mean or imply that an association or effect is highly probable, real, true, or important. Nor does a label of statistical non-significance lead to the association or effect being improbable, absent, false, or unimportant.

“For the integrity of scientific publishing and research dissemination, therefore, whether a p-value passes any arbitrary threshold should not be considered at all when deciding which results to present or highlight.”

Articles in the special issue suggest alternatives and complements to p-values, and highlight the need for widespread reform of editorial, educational and institutional practices.

While there is no single solution to replacing the outsized role that statistical significance has come to play in science, solid principles for the use of statistics do exist, say the editorial’s authors.

“The statistical community has not yet converged on a simple paradigm for the use of statistical inference in scientific research — and in fact it may never do so,” they acknowledge. “A one-size-fits-all approach to statistical inference is an inappropriate expectation. Instead, we recommend scientists conducting statistical analysis of their results should adopt what we call the ATOM model: Accept uncertainty, be Thoughtful, be Open, be Modest.”

This ASA special issue builds on the highly influential ASA Statement on P-Values and Statistical Significance which has had over 293,000 downloads and 1,700 citations, an average of over 10 per week since its release in 2016.

Story Source:

Materials provided by Taylor & Francis Group. Note: Content may be edited for style and length.

Big stats, human stories change attitudes about global issues

New research from Cornell University sheds light on the types of statistical and narrative evidence that are most effective at persuading people to pay attention to global issues.

Study co-authors Adam Levine, associate professor of government, and Yanna Krupnikov of Stony Brook University wanted to understand what makes people care about social and economic problems they may not necessarily face in their daily lives, and whether that concern is a function of how the problems are described.

The team looked at several types of evidence showing that a problem exists. For example, statistics can describe the magnitude of the problem or they can be phrased in percentage terms — such as the percentage of people facing a problem. They designed a series of studies to test which type of evidence increased people’s engagement, either by making a donation, paying attention to an email or stating a concern.

The research was conducted in collaboration with a nonprofit in Ithaca, New York, that strives to increase access to affordable health care, including funding for a free clinic.

In the studies, likely new donors received solicitations by mail, members of the organization received a solicitation email, and study participants unaffiliated with the organization took a survey gauging their interest in access to affordable health care.

The messaging used in solicitations included combinations of high percentages, low percentages, case studies and raw numbers to describe the magnitude of the uninsured who can’t afford health care.

For example, the potential donors received either a standard letter, one saying 57 percent of uninsured people couldn’t afford the care they need, or one describing how a real uninsured person benefited from the nonprofit’s services.

Across the board, the percentage-based evidence and the human-interest evidence tended to drive engagement, but describing the overall magnitude of the problem did not.

“When you talk about the millions of children who are starving, or the millions of refugees who are seeking out a better life, it fails to have this emotional connection that tends to then motivate people to pay more attention and to become engaged,” Levine said.

The study offers a model of what a meaningful collaboration between researchers and practitioners can look like.

“Pull at people’s emotional heartstrings,” Levine said. “You can do it with certain forms of statistical evidence. You can do it with sympathetic case studies. And that will move behavior and move attitudes.”

Story Source:

Materials provided by Cornell University. Note: Content may be edited for style and length.

New gene hunt reveals potential breast cancer treatment target

Australian and US researchers have developed a way to discover elusive cancer-promoting genes, and have already identified one that appears to promote aggressive breast cancers.

The University of Queensland and Albert Einstein College of Medicine team has developed a statistical approach to reveal many previously hard-to-find genes that contribute to cancer.

Associate Professor Jess Mar, of the University of Queensland’s Australian Institute for Bioengineering and Nanotechnology, said the majority of ‘oncogenes’ identified to date showed up in most patients with a particular cancer type.

“When you average the data across those patients, those common oncogenes tend to stand out, but they don’t paint the full picture,” Dr Mar said.

“Even if a group of people all have the same type or even subtype of cancer, the molecular makeup of that cancer is different from person to person because the activity of genes varies between people,” she said.

“If an oncogene is over-active in one group of patients but inactive in another group, that’s statistically harder to see using the tools that we had available.

“If you only look at the average activity of a gene across the two groups, you’d never see the high activity in the first group.”

The Oncomix method enables researchers to ‘zoom in’ on genetic information from cancer patients and identify genes with two distinct ‘bumps’ of data — low activity in one group of patients but high activity in another.
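The two-'bump' pattern described here is a bimodal expression distribution, which can be detected by fitting a two-component Gaussian mixture to each gene's expression values across patients. Oncomix itself is a published software tool with its own statistics; the following is only a minimal illustrative sketch of the underlying idea, using a hand-rolled EM fit in pure Python (the function name and synthetic data are assumptions, not part of Oncomix):

```python
import math
import random

def fit_two_gaussians(values, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Returns (weights, means, stds) sorted by mean. A wide gap between
    the two means relative to their spreads indicates the two-'bump'
    (bimodal) pattern: low activity in one patient group, high in another.
    """
    lo, hi = min(values), max(values)
    # Initialise the components at the 25% and 75% points of the data range
    means = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]
    stds = [max((hi - lo) / 4.0, 1e-6)] * 2
    weights = [0.5, 0.5]

    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in values:
            p = [w * pdf(x, m, s) for w, m, s in zip(weights, means, stds)]
            total = sum(p) or 1e-300
            resp.append([pi / total for pi in p])
        # M-step: re-estimate weights, means and spreads
        for k in range(2):
            rk = sum(r[k] for r in resp)
            weights[k] = rk / len(values)
            means[k] = sum(r[k] * x for r, x in zip(resp, values)) / rk
            var = sum(r[k] * (x - means[k]) ** 2 for r, x in zip(resp, values)) / rk
            stds[k] = max(math.sqrt(var), 1e-6)

    order = sorted(range(2), key=lambda k: means[k])
    return ([weights[k] for k in order],
            [means[k] for k in order],
            [stds[k] for k in order])

# Synthetic 'expression' values: low in one patient group, high in another
random.seed(0)
expr = [random.gauss(2.0, 0.5) for _ in range(100)] + \
       [random.gauss(8.0, 0.5) for _ in range(100)]
w, m, s = fit_two_gaussians(expr)
print(m)  # two component means, roughly 2 and 8
```

A gene whose fitted component means are well separated, with each component holding a substantial share of patients, is a candidate for the kind of subgroup-specific over-activity the researchers were hunting for.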

“We’re acknowledging that there is diversity among cancer patients, but we’re still looking for trends in the data that pertain to groups of people,” Dr Mar said.

Dr Mar and her colleagues used Oncomix to examine breast cancer data from The Cancer Genome Atlas patient database.

They identified five genes that were over-active in a subset of breast cancer patients and followed up on the most promising target, known as CBX2.

“Previous studies have shown that most healthy female tissue has low levels of CBX2 activity, while an aggressive subtype of breast cancer has been shown to have high levels of CBX2 activity,” Dr Mar said.

“This suggested a possible link between CBX2 activity and breast cancer, but the nature of that link hadn’t been investigated.

“So we switched off the gene in a human breast cancer cell line and this slowed down the growth of those cancer cells, suggesting that CBX2 might promote tumour growth.”

Dr Mar said if further tests confirmed that CBX2 was an oncogene, it could be a potential therapeutic drug target for aggressive types of breast cancer.

“This discovery highlights the potential value of the Oncomix approach,” Dr Mar said.

“Identifying ‘hidden’ oncogenes that are unique to smaller groups of cancer patients will open up new therapeutic avenues and move us closer to personalised medicine.”

Oncomix is now a publicly available, open-source software tool, and the study is published in the British Journal of Cancer.

Story Source:

Materials provided by University of Queensland. Note: Content may be edited for style and length.