How human social structures emerge

What rules shaped humanity’s original social networks? Researchers in Japan developed new mathematical models to understand what conditions produced traditional community structures and conventions around the world, including taboos about incest.

“We think this is the first time cultural anthropology and computer simulations have met in a single research study,” said Professor Kunihiko Kaneko, an expert in theoretical biology and physics from the University of Tokyo Research Center for Complex Systems Biology.

Researchers used statistical physics and computer models common in evolutionary biology to explain the origin of common community structures documented by cultural anthropologists around the world.

The earliest social networks were tightly knit cultural groups made of multiple biologically related families. That single group would then develop relationships with other cultural groups in their local area.

In the 1960s, cultural anthropologists documented social networks of indigenous communities and identified two kinship structures common around the world. In areas with hunter-gatherer communities, anthropologists documented direct-exchange kinship structures where women from two communities change places when they marry. In areas with agrarian farming communities, kinship structures of generalized exchange developed where women move between multiple communities to marry.

“Anthropologists have documented kinship structures all over the world, but it still remains unclear how those structures emerged and why they have common properties,” said Kenji Itao, a first-year master’s degree student in Kaneko’s laboratory, whose interdisciplinary interests in physics, math and anthropology motivated this research study.

Experts in anthropology consider the incest taboo to be an extremely common social rule affecting kinship structures. The ancient incest taboo focused on social closeness, rather than genetic or blood relationships, meaning it was taboo to marry anyone born into the same cultural group.

Itao and Kaneko designed a mathematical model and computer simulation to test what external factors might cause generations of biologically related families to organize into communities with incest taboos and direct or generalized exchange of brides.

“Traditionally, it is more common for women to move to a new community when they marry, but we did not include any gender differences in this computer simulation,” explained Itao.

Simulated family groups with shared traits and desires naturally grouped together into distinct cultural groups. However, the traits the group possessed were different from the traits they desired in marriage partners, meaning they did not desire spouses similar to themselves. This is the underlying cause of the traditional community-based incest taboo suggested by the study.

When the computer simulation pushed communities to cooperate, generalized exchange kinship structures arose. The simulation demonstrated that different kinship structures, including the basic direct-exchange structure, emerge depending on the strength of conflict over finding brides and on the necessity of cooperating with specific other communities.
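As a purely illustrative toy (not Itao and Kaneko’s actual model; the clustering rule and all parameters are invented), the sketch below gives each family a cultural trait vector and a separate mate-preference vector, groups families by trait similarity, and only counts a marriage link as acceptable when the prospective partner belongs to a different group and matches the seeker’s preferences.

```python
# Toy illustration (not the authors' model): families carry a cultural
# "trait" vector and a separate "preference" vector describing the mates
# they seek. Families with similar traits form one cultural group, and
# partners are sought outside that group, mirroring the community-based
# incest taboo described above.
import numpy as np

rng = np.random.default_rng(0)
n_families, dim = 60, 2
traits = rng.normal(size=(n_families, dim))
prefs = rng.normal(size=(n_families, dim))

# Group families whose traits are similar (crude clustering by rounding).
groups = [tuple(np.round(t).astype(int)) for t in traits]

def acceptable_partner(i, j, tol=1.0):
    """Family j is acceptable for family i if j's traits match i's
    preferences and the two families belong to different groups."""
    different_group = groups[i] != groups[j]
    matches_pref = np.linalg.norm(traits[j] - prefs[i]) < tol
    return different_group and matches_pref

pairs = [(i, j) for i in range(n_families) for j in range(n_families)
         if i != j and acceptable_partner(i, j)]
print(f"{len(pairs)} acceptable marriage links between distinct groups")
```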

“It is rewarding to see that the combination of statistical physics and evolution theory, together with computer simulations, will be relevant to identify universal properties that affect human societies,” said Kaneko.

The current computer model is simple and only included factors of conflict and cooperation affecting marriage, but researchers hope to continue developing the model to also consider economic factors that might cause communities to separate into different classes. With these additions, the theory can hopefully be extended to explore different communities in the modern, global society.

“I would be glad if perhaps our results can give field anthropologists a hint about universal structures that might explain what they observe in new studies,” said Itao.

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.

Mathematicians put famous Battle of Britain ‘what if’ scenarios to the test

Mathematicians have used a statistical technique to interrogate some of the big “what if” questions in the Second World War battle for Britain’s skies.

What if the switch to bombing London had not occurred? What if a more eager Hitler had pushed for an earlier beginning to the campaign? What if Göring had focused on targeting British airfields throughout the entire period of the Battle?

These are just some of the alternative scenarios that have fuelled a long-running debate among Second World War historians and enthusiasts over what might have affected the outcome of the battle, which took place between May and October 1940.

Mathematicians from the University of York have developed a new model to explore what the impact of changes to Luftwaffe tactics would actually have been. Their approach uses statistical modelling to calculate how the Battle might have played out if history had followed one of several alternative courses.

The researchers say that the method could now be used as a tool to investigate other historical controversies and unrealised possibilities, giving us a deeper understanding of events such as the Battle of the Atlantic (the longest continuous military campaign of the Second World War).

The statistical technique is called “weighted bootstrapping,” and the computer simulation is a bit like writing the events of each day of the Battle of Britain on its own ball and placing the balls in a lotto machine. Balls are then drawn, read and replaced to create thousands of alternative sets of days’ fighting, in a different order and perhaps with some days appearing more than once or not at all.

The researchers then repeated the process to test out the Battle “what ifs,” making some days more or less likely to be chosen, depending on how a protagonist (such as Hitler) would have changed their decisions had they been using different tactics.
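As a rough sketch of the technique (the daily loss figures and weighting scheme below are invented for illustration and are not the York team’s data or code), a weighted bootstrap draws whole days at random, with replacement and with unequal probabilities, and tallies the outcome of each resampled campaign:

```python
# Minimal sketch of weighted bootstrapping over daily battle outcomes.
# The daily loss figures and weights are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

# One record per day of the campaign: (RAF losses, Luftwaffe losses).
daily_losses = np.array([(3, 5), (10, 18), (25, 40), (12, 9), (30, 24),
                         (8, 15), (20, 31), (5, 4), (16, 22), (27, 19)])

# Weights encode a counterfactual: here the even-indexed days (say, days
# of airfield attacks) are made twice as likely to be drawn.
weights = np.where(np.arange(len(daily_losses)) % 2 == 0, 2.0, 1.0)
weights /= weights.sum()

def simulate_campaign(n_days=10):
    """Resample days with replacement according to the weights and
    return total (RAF, Luftwaffe) losses for one alternative campaign."""
    idx = rng.choice(len(daily_losses), size=n_days, replace=True, p=weights)
    return daily_losses[idx].sum(axis=0)

campaigns = np.array([simulate_campaign() for _ in range(10_000)])
raf_worse = np.mean(campaigns[:, 0] > campaigns[:, 1])
print(f"Share of resampled campaigns with RAF losses above Luftwaffe losses: {raf_worse:.2f}")
```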

Co-author of the paper, Dr Jamie Wood from the Department of Mathematics at the University of York, said: “The weighted bootstrap technique allowed us to model alternative campaigns in which the Luftwaffe prolongs or contracts the different phases of the battle and varies its targets.

“The Luftwaffe would only have been able to make the necessary bases in France available to launch an air attack on Britain in June at the earliest, so our alternative campaign brings forward the air campaign by three weeks. We tested the impact of this and the other counterfactuals by varying the probabilities with which we choose individual days.”

The results provide statistical backing for a change in tactics that several historians have argued could have brought the Luftwaffe victory in the summer of 1940: the simulations suggest that if the Luftwaffe had started the campaign earlier and focused on bombing airfields, the RAF might have been defeated, paving the way for a German land invasion.

According to the mathematical model, the impact of these two changes would have been dramatic. Although it is impossible to estimate what the real statistical chances of an RAF victory were in July 1940, the study suggests that whatever Britain’s prospects, an earlier start and a focused targeting of airfields would have shifted the battle significantly in the Germans’ favour.

For example, had the likelihood of a British victory in the actual battle been 50%, these two tactical changes would have reduced it to less than 10%. If the real probability of British victory was 98%, the same changes would have reduced this to just 34%.

Co-author of the paper, Professor Niall Mackay from the Department of Mathematics at the University of York, said: “Weighted bootstrapping can provide a natural and intuitive tool for historians to investigate unrealised possibilities, informing historical controversies and debates.

“It demonstrates just how finely-balanced the outcomes of some of the biggest moments of history were. Even when we use the actual days’ events of the battle, make a small change of timing or emphasis to the arrangement of those days and things might have turned out very differently.

“This technique can be used to give us a more complete understanding of just how differently events might have played out.”

Story Source:

Materials provided by University of York. Note: Content may be edited for style and length.

An algorithm for large-scale genomic analysis

Haplotypes are sets of genetic variations that, located side by side on the same chromosome, are transmitted as a single group to the next generation. Examining them makes it possible to understand the heritability of certain complex traits, such as the risk of developing a disease. However, this analysis usually requires the genomes of family members (parents and their child), a tedious and expensive process. To overcome this problem, researchers from the Universities of Geneva (UNIGE) and Lausanne (UNIL) and the SIB Swiss Institute of Bioinformatics have developed SHAPEIT4, a powerful computer algorithm that allows the haplotypes of hundreds of thousands of unrelated individuals to be identified very quickly. Its results are as detailed as those obtained from family analysis, which cannot be conducted on such a large scale. The tool is now available online under an open source license, freely accessible to the entire research community. The details are published in Nature Communications.

Nowadays, the analysis of genetic data is becoming increasingly important, particularly in the field of personalized medicine. The number of human genomes sequenced each year is growing exponentially, and the largest databases now include more than one million individuals. This wealth of data is extremely valuable for better understanding the genetic destiny of humanity, whether to determine the genetic contribution to a particular disease or to better understand the history of human migration. To be meaningful, however, these big data must be processed computationally. “The processing power of computers remains relatively stable, unlike the ultra-fast growth of genomic Big Data,” says Olivier Delaneau, SNSF professor in the Department of Computational Biology at the UNIL Faculty of Biology and Medicine and at SIB, who led this work. “Our algorithm thus aims to optimize the processing of genetic data in order to absorb this amount of information and make it usable by scientists, despite the gap between its quantity and the comparatively limited power of computers.”

Better understanding the role of haplotypes

Genotyping makes it possible to know an individual’s alleles, i.e. the genetic variations received from his or her parents. However, without knowing the parental genomes, we do not know which alleles are transmitted together to children, and in which combinations. “This information — haplotypes — is crucial if we really want to understand the genetic basis of human variation,” explains Emmanouil Dermitzakis, a professor in the Department of Genetic Medicine and Development at the UNIGE Faculty of Medicine and at SIB, who co-supervised this work. “This is true both for population genetics and for precision medicine.”
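A tiny example with hypothetical genotypes shows why phase cannot be read off an individual’s genotype alone: at two neighbouring heterozygous sites, more than one pair of haplotypes is consistent with the same allele counts.

```python
# Tiny illustration of the phasing problem (hypothetical genotypes).
# A genotype only says how many copies of the alternate allele a person
# carries at each site; it does not say which alleles sit together on
# the same chromosome (the haplotype).
from itertools import product

genotype = [1, 1]  # heterozygous at two neighbouring sites

def consistent_haplotype_pairs(genotype):
    """Enumerate all (maternal, paternal) haplotype pairs whose allele
    counts add up to the observed genotype."""
    n = len(genotype)
    pairs = []
    for h1 in product((0, 1), repeat=n):
        for h2 in product((0, 1), repeat=n):
            if all(a + b == g for a, b, g in zip(h1, h2, genotype)):
                pairs.append((h1, h2))
    return pairs

for h1, h2 in consistent_haplotype_pairs(genotype):
    print(h1, h2)
# Up to ordering, two distinct phasings remain: {01, 10} and {00, 11}.
# Parental genotypes, or statistical sharing across many unrelated
# individuals as in SHAPEIT4, are needed to resolve the ambiguity.
```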

To determine the genetic risk of disease, for example, scientists assess whether a genetic variation is more or less present in individuals who have developed the disease in order to determine the role of this variation in the disease being studied. “By knowing the haplotypes, we conduct the same type of analysis,” says Emmanouil Dermitzakis. “However, we are moving from a single variant to a combination of many variants, which allows us to determine which allelic combinations on the same chromosome have the greatest impact on disease risk. It is much more accurate!”

The method developed by the researchers makes it possible to process an extremely large number of genomes, about 500,000 to 1,000,000 individuals, and to determine their haplotypes without knowing their ancestry or progeny, while using standard computing power. The SHAPEIT4 tool has been successfully tested on the 500,000 individual genomes in the UK Biobank, a scientific database developed in the United Kingdom. “We have here a typical example of what Big Data is,” says Olivier Delaneau. “Such a large amount of data makes it possible to build very high-precision statistical models, as long as they can be interpreted without drowning in them.”

An open source license for transparency

The researchers have decided to make their tool accessible to all under an open source MIT license: the entire code is available and can be modified at will, according to the needs of researchers. This decision was made mainly for the sake of transparency and reproducibility, as well as to stimulate research all over the world. “But we only give access to the analysis tool, under no circumstances to a corpus of data,” Olivier Delaneau explains. “It is then up to each individual to use it on the data he or she has.”

This tool is much more efficient than older tools, as well as faster and cheaper, and it also helps to limit the environmental impact of computing. The very powerful computers used to process Big Data are highly energy-intensive; reducing their use also helps to minimize their negative impact.

Story Source:

Materials provided by Université de Genève. Note: Content may be edited for style and length.

Smaller class sizes not always better for pupils, multinational study shows

A new statistical analysis of data from a long-term study on the teaching of mathematics and science has found that smaller class sizes are not always associated with better pupil performance and achievement.

The precise effect of smaller class sizes can vary between countries, academic subjects, years, and different cognitive and non-cognitive skills, with many other factors likely playing a role. These findings are reported in a paper in Research Papers in Education.

Smaller class sizes in schools are generally seen as highly desirable, especially by parents. With smaller class sizes, teachers can more easily maintain control and give more attention to each pupil. As such, many countries limit the maximum size of a class, often at around 30 pupils.

But research into the effects of class size has generally proved inconclusive, with some studies finding benefits and some not. Furthermore, these studies have often been rather small scale, have tended to focus purely on reading and mathematics, and have not considered the effect of class size on non-cognitive skills such as interest and attentiveness.

To try to get a clearer picture, Professor Spyros Konstantopoulos and Ting Shen at Michigan State University, US, decided to analyze data produced by the Trends in International Mathematics and Science Study (TIMSS). Every four years since 1995, TIMSS has monitored the performance and achievement of fourth grade (age 9-10) and eighth grade (age 13-14) pupils from around 50 countries in mathematics and science. It records pupils’ academic ability in these subjects and their self-reported attitude and interest in them, and also contains information on class sizes.

To make the analysis more manageable, the researchers limited it to data from eighth grade pupils in four European countries — Hungary, Lithuania, Romania and Slovenia — collected in 2003, 2007 and 2011. They chose these four countries because they all mandate maximum class sizes, which would help to make the statistical analysis more reliable. Despite these limitations, the data still encompassed 4,277 pupils from 231 classes in 151 schools, making it much larger than most previous studies on class size. It was also the first study to investigate the effects of class size on both specific science subjects, comprising biology, chemistry, physics and earth science, and non-cognitive skills.
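The broad kind of model used in this literature treats pupils as nested within schools; the sketch below (with simulated data and hypothetical variable names, not the authors’ exact specification) shows a minimal two-level regression of achievement on class size.

```python
# Generic sketch of a two-level (pupils within schools) regression of
# achievement on class size. The data are simulated and the variable
# names hypothetical; this is not the authors' exact specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, pupils_per_school = 40, 25
school = np.repeat(np.arange(n_schools), pupils_per_school)
class_size = np.repeat(rng.integers(18, 31, size=n_schools), pupils_per_school)
school_effect = np.repeat(rng.normal(0, 5, size=n_schools), pupils_per_school)
achievement = 500 - 0.5 * class_size + school_effect + rng.normal(0, 10, size=len(school))

data = pd.DataFrame({"achievement": achievement,
                     "class_size": class_size,
                     "school": school})

# A random intercept for each school captures school-level differences.
model = smf.mixedlm("achievement ~ class_size", data, groups=data["school"])
result = model.fit()
print(result.summary())
```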

The analysis revealed that smaller class sizes were associated with benefits in Romania and Lithuania, but not in Hungary and Slovenia. The beneficial effects were most marked in Romania, where smaller classes were associated with greater academic achievement in mathematics, physics, chemistry and earth science, as well as greater enjoyment of learning mathematics. In Lithuania, however, smaller class sizes were mainly associated with improvements in non-cognitive skills such as greater enjoyment in learning biology and chemistry, rather than higher academic achievement in these subjects. The beneficial effects were also only seen in certain years.

“Most class size effects were not different than zero, which suggests that reducing class size does not automatically guarantee improvements in student performance,” said Professor Konstantopoulos. “Many other classroom processes and dynamics factor in and have to work well together to achieve successful outcomes in student learning.”

The researchers think smaller class sizes may have had greater beneficial effects on pupils in Romania and Lithuania than in Hungary and Slovenia because schools in Romania and Lithuania have fewer resources. “This finding is perhaps due to the fact that class size effects are more likely to be detected in countries with limited school resources where teacher quality is lower on average,” said Professor Konstantopoulos.

Story Source:

Materials provided by Taylor & Francis Group. Note: Content may be edited for style and length.

Computer game may help to predict reuse of opioids

A computer betting game can help predict the likelihood that someone recovering from opioid addiction will reuse the pain-relieving drugs, a new study shows.

The game, now being developed as an app, tests each patient’s comfort with risk-taking, producing mathematical scores, called betas, that economists have long used to measure consumers’ willingness to try new products. The team then used a statistical test to see whether changes in risk-taking comfort tracked with opioid reuse, and found that people who placed higher-risk bets had higher beta scores.

When combined with other test scores that quiz a patient about recent drug use and desire to use drugs, or cravings, the study found that patients who showed sharp increases in their total beta scores were as much as 85 percent likely to reuse within the next week. By contrast, those whose beta scores did not undergo a spike were much less likely to reuse during treatment, usually a combination of talk therapy and drugs to wean patients off their opioid addiction.

Researchers say the findings, published in the Journal of the American Medical Association (JAMA) Psychiatry online Dec. 8, could lead to the design of clinical tools for tracking and reducing the number of patients who reuse opiates during treatment. More than 2 million Americans are estimated to have some form of opioid-use disorder.

According to the NYU Grossman School of Medicine researchers who led the new study, which is also being presented at the annual meeting of the American College of Neuropsychopharmacology in Orlando, Fla., a majority of patients reuse at some point during treatment, and more than half relapse within a year of undergoing therapy.

And while drug treatments with methadone, buprenorphine, and naloxone are highly effective in weaning patients off opioids, researchers say their impact has been constrained by a lack of good tools for measuring how well patients are responding to any treatment and for determining when treatment should be tailored (e.g., by increasing or decreasing drug doses) to prevent reuse.

Researchers say current techniques are insufficient, relying too heavily on patients’ reported cravings and on urine testing, which reveals reuse only after it has already happened.

“Our study shows that computer-based diagnostic tests may offer a useful new option,” says study senior investigator and neuroeconomist Paul Glimcher, PhD. “Ideally, clinicians would have several tools available for real-time monitoring of how well our patients are managing to free themselves from their addiction,” says Glimcher, a professor in the Neuroscience Institute and in the Department of Psychiatry at NYU Langone Health.

Glimcher notes that while many patients experience reuse “slip-ups” during therapy, they are not considered to have relapsed unless they fail to return and complete a standard, six-month treatment plan.

For the study, researchers recruited 70 men and women undergoing opioid addiction therapy at NYC Health + Hospitals/Bellevue. Each played the betting game regularly for seven months when they came in for weekly or monthly clinic visits. Their results were compared to the results for 50 other Bellevue patients of similar race, gender, and age who also played the game weekly but were never addicted to opiate drugs.

As part of the game, patients had the option of accepting a known reward, such as an immediate chip worth $5, or gambling on a “riskier” bag of chips with the possibility of either a greater reward, as high as $66, or nothing. Some bags contained only two chips, leaving players with a 50 percent chance of winning, while others contained more chips, with players not always knowing their chances of winning. Risk scores were then plotted on a graph to track each patient’s willingness to take known or unknown risks. The game takes only a few minutes to complete.
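To illustrate how a risk-attitude score can be derived from such choices, the sketch below fits a utility-curvature parameter to a handful of hypothetical decisions by maximum likelihood; it is a generic textbook-style model, not the study’s actual analysis, which also has to handle bags with unknown odds.

```python
# Generic sketch of fitting a risk-attitude parameter from choices between
# a sure $5 and a 50/50 gamble, in the spirit of the betting game above.
# The utility model, choice rule, and data are illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

sure_amount = 5.0
gamble_amounts = np.array([8.0, 12.0, 20.0, 40.0, 66.0])
chose_gamble = np.array([0, 0, 1, 1, 1])  # hypothetical observed choices

def neg_log_likelihood(alpha):
    """Power utility u(x) = x**alpha with a logistic (softmax) choice rule."""
    u_sure = sure_amount ** alpha
    u_gamble = 0.5 * gamble_amounts ** alpha  # 50% chance of the prize, else 0
    p_gamble = 1.0 / (1.0 + np.exp(-(u_gamble - u_sure)))
    p_choice = np.where(chose_gamble == 1, p_gamble, 1.0 - p_gamble)
    return -np.sum(np.log(p_choice))

fit = minimize_scalar(neg_log_likelihood, bounds=(0.1, 1.5), method="bounded")
print(f"Estimated risk-attitude parameter: {fit.x:.2f}")
```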

Glimcher says patients typically demonstrate a pattern of “ups and downs” throughout treatment, with low beta scores when they feel in control of or even overconfident in their ability to resist any urge to reuse, but these scores then rise immediately before patients reuse, when they start “feeling lucky” and are willing to place higher-risk bets.

Glimcher says that, once completed, his smartphone app based on the betting game could be used to provide daily monitoring of patients’ progress. Test results could be “networked” to a patient’s medical team and mental health support group, including close friends and family, to alert them when a patient is vulnerable and at greater risk of reuse.

Deep learning to analyze neurological problems

Getting to the doctor’s office for a check-up can be challenging for someone with a neurological disorder that impairs their movement, such as a stroke. But what if the patient could just take a video clip of their movements with a smartphone and forward the results to their doctor? Work by Dr Hardeep Ryait and colleagues at CCBN-University of Lethbridge in Alberta, Canada, published November 21 in the open-access journal PLOS Biology, shows how this might one day be possible.

Using rats that had experienced a stroke affecting the movement of their forelimbs, the scientists first asked experts to score the rats’ degree of impairment based on how they reached for food. Then they fed this information into a state-of-the-art deep neural network so that it could learn to score the rats’ reaching movements with human-expert accuracy. When the network was subsequently given video footage from a new set of rats reaching for food, it was able to score their impairments with similar human-like accuracy. The same program proved able to score other tests given to rats and mice, including tests of their ability to walk across a narrow beam and to pull a string to obtain a food reward.
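The sketch below illustrates the general recipe in miniature: a small convolutional network trained to regress an expert impairment score from a video frame. The architecture, image size and data are placeholders, not the network used in the study.

```python
# Generic sketch of the idea: a small convolutional network that regresses
# an expert impairment score from a video frame. Architecture, image size,
# and training data are placeholders only.
import numpy as np
import tensorflow as tf

frames = np.random.rand(200, 128, 128, 3).astype("float32")  # placeholder frames
expert_scores = np.random.rand(200, 1).astype("float32")     # placeholder scores

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted impairment score
])
model.compile(optimizer="adam", loss="mse")
model.fit(frames, expert_scores, epochs=2, batch_size=16, verbose=0)

# Once trained on expert-scored footage, the network scores new frames.
print(model.predict(frames[:3], verbose=0))
```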

Artificial neural networks are currently used to drive cars, to interpret video surveillance and to monitor and regulate traffic. This revolution in the use of artificial neural networks has encouraged behavioural neuroscientists to use such networks for scoring the complex behaviour of experimental subjects. Similarly, neurological disorders could also be assessed automatically, allowing quantification of behaviour as part of a check-up or to assess the effects of a drug treatment. This could help avoid the delay that can present a major roadblock to patient treatment.

Altogether, this research indicates that deep neural networks of this kind can provide reliable scores for neurological assessment and can assist in designing behavioural metrics to diagnose and monitor neurological disorders. Interestingly, the results revealed that the network can use a wider range of information than experts include in a behavioural scoring system. A further contribution of this research is that the network was able to identify the features of behaviour that are most indicative of motor impairment, which has the potential to improve how the effects of rehabilitation are monitored. The method would aid standardization of the diagnosis and monitoring of neurological disorders, and in the future could be used by patients at home to monitor daily symptoms.

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

In classical and quantum secure communication, practical randomness is incomplete

Random bit sequences are key ingredients of various tasks in modern life and especially in secure communication. In a new study researchers have determined that generating true random bit sequences, classical or quantum, is an impossible mission. Based on these findings, they have demonstrated a new method of classified secure communication.

The mathematical definition of a random bit sequence is so simple that it can be summarized in one sentence: A sequence of bits whose next bit is equal to 0 or 1 with equal probability, independent of previous ones. Although the definition is very simple, the practical certification of a process as random is much more complicated but crucial, for example, in secure communication, where information must be scrambled in order to prevent hackers from predicting a bit stream.

In an article to be published on November 5, 2019 in the journal Europhysics Letters, researchers at Bar-Ilan University demonstrate that long sequences certified as random by the US National Institute of Standards and Technology (NIST) are far from being truly random. Their work demonstrates that a large fraction of non-random bits can be systematically embedded in such bit sequences without negatively affecting their certified randomness. This discovery leads to a new type of classified secure communication between two parties where even the existence of the communication itself is concealed.
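To give a sense of what such certification involves, the snippet below implements one test from the NIST SP 800-22 suite, the frequency (monobit) test, which checks that ones and zeros are balanced. Passing this and the suite’s many other tests is the kind of certification the Bar-Ilan work shows can coexist with deliberately embedded structure.

```python
# The frequency (monobit) test from the NIST SP 800-22 suite: a sequence
# passes if the balance of ones and zeros is statistically consistent
# with fair coin flips.
import math
import random

def monobit_p_value(bits):
    """Return the NIST frequency-test p-value for a sequence of 0/1 bits."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)       # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

bits = [random.getrandbits(1) for _ in range(100_000)]
p = monobit_p_value(bits)
print(f"p-value = {p:.3f}  ->  {'pass' if p >= 0.01 else 'fail'} at the 1% level")
```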

“The current scientific and technological viewpoint is that only non-deterministic physical processes can generate truly random bit sequences, which are conclusively verified by hundreds of very comprehensive statistical tests,” said the study’s lead author, Prof. Ido Kanter, of Bar-Ilan University’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center. Kanter’s research group includes Shira Sardi, Herut Uzan, Shiri Otmazgin, Dr. Yaara Aviad and Prof. Michael Rosenbluh.

“We propose a reverse strategy, which has never been tested before. Our strategy aims to quantify the maximal amount of information that can be systematically embedded in a certified random bit sequence, without harming its certification,” said PhD students Shira Sardi and Herut Uzan, the key contributors to the research.

Using such a strategy, the level of randomness can be quantified beyond the binary certification. In addition, since the information is systematically embedded in the bit sequence, the approach offers a new cryptosystem, similar to steganography, where the existence of any communication is completely concealed.

“According to the fundamental principles of quantum physics, the randomness of quantum random bit generators is expected to be perfect. In practice, however, this perfect quantum randomness may be diminished by many experimental imperfections,” said Prof. Kanter. “Hence, a sequence generated by a quantum number generator ultimately has to be certified by statistical tests which can differentiate between original quantum guaranteed sequences and spurious ones. However, the newly-discovered incompleteness of practical randomness is expected to disrupt even quantum random number generators.”

The new viewpoint presented in this work calls for a reevaluation of the quantified definition of measuring classical and quantum randomness, as well as its application to secure communication.

Story Source:

Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.

Information theory as a forensics tool for investigating climate mysteries

During Earth’s last glacial period, temperatures on the planet periodically spiked dramatically and rapidly. Data in layers of ice of Greenland and Antarctica show that these warming events — called Dansgaard-Oeschger and Antarctic Isotope Maximum events — occurred at least 25 times. Each time, in a matter of decades, temperatures climbed 5-10 degrees Celsius, then cooled again, gradually. While there remain several competing theories for the still-unexplained mechanisms behind these spikes, a new paper in the journal Chaos suggests that mathematics from information theory could offer a powerful tool for analyzing and understanding them.

“In many systems, before an extreme event, information dynamics become disordered,” says Joshua Garland, a postdoctoral fellow at the Santa Fe Institute and lead author on the new paper. For instance, information theoretic tools have been used to anticipate seizure events from disturbances in EEG readings.

Initially, the authors anticipated they would see a signal — a destabilization in the climate record similar to those seen in pre-seizure EEGs — just before the warming events. But those signals never appeared. “Around these events, you have the same amount of information production,” says Garland. And this, suggest the authors, indicates that Dansgaard-Oeschger and Antarctic Isotope Maximum events were most likely regular and predictable patterns of the climate of the last glacial period rather than the results of unexpected events.

In addition, information theory could improve how scientists calculate accumulation — how much snow fell in any given year. “It’s very challenging. Many people are working on this, and they are using sophisticated math, combined with expert knowledge and known features, to figure out the accumulation,” says Garland. Currently, fine pollen signatures are some of the best differentiators between years in ice that is tens of thousands of years old, compressed under the weight of each subsequent snowfall. Information theory, and specifically a statistical approach called permutation entropy, offers a different approach. “This could be a fast and efficient tool for the experts to corroborate their work,” says Garland.
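Permutation entropy itself is straightforward to compute: slide a short window along the series, record the ordinal pattern (the ranking of values) in each window, and take the Shannon entropy of the pattern distribution. The minimal sketch below uses illustrative parameters.

```python
# Minimal permutation entropy: slide a window of length d over the series,
# record the ordinal pattern of each window, and compute the Shannon
# entropy of the pattern distribution, normalised so that 1.0 means the
# patterns are as disordered as possible.
import math
from collections import Counter

import numpy as np

def permutation_entropy(series, d=3, tau=1):
    patterns = Counter()
    for i in range(len(series) - (d - 1) * tau):
        window = series[i:i + d * tau:tau]
        patterns[tuple(np.argsort(window))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(d))

rng = np.random.default_rng(7)
noisy = rng.normal(size=2000)                        # white noise: close to 1.0
regular = np.sin(np.linspace(0, 40 * np.pi, 2000))   # periodic signal: much lower
print(permutation_entropy(noisy), permutation_entropy(regular))
```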

“When you’re dealing with a timeseries, you want to know what meaningful information is present. You want to extract it and use it, and to not use information that isn’t useful,” says Garland. “We hope this tool can help scientists do this with ancient climate records.”

Information theory is already being used to identify anomalies in the climate record — particularly, to flag anomalies introduced during the collection and observation of the ice cores.

This paper follows on the heels of two related studies published in Entropy and Advances in Intelligent Data Analysis XV.

“These information-theoretic calculations are not only useful for revealing hidden problems with the data, but also potentially powerful in suggesting new and sometimes surprising geoscience,” write the authors in the new paper.

Story Source:

Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

Combination of techniques could improve security for IoT devices

A team of Penn State World Campus students pursuing master of professional studies degrees in information sciences has created a multi-pronged data analysis approach that can strengthen the security of Internet of Things (IoT) devices — such as smart TVs, home video cameras and baby monitors — against current risks and threats.

“By 2020, more than 20 billion IoT devices will be in operation, and these devices can leave people vulnerable to security breaches that can put their personal data at risk or worse, affect their safety,” said Beulah Samuel, a student in the Penn State World Campus information sciences and technology program. “Yet no strategy exists to identify when and where a network security attack on these devices is taking place and what such an attack even looks like.”

The team applied a combination of approaches often used in traditional network security management to an IoT network simulated by the University of New South Wales Canberra. Specifically, they showed how statistical data, machine learning and other data analysis methods could be applied to assure the security of IoT systems across their lifecycle. They then used intrusion detection and a visualization tool to determine whether or not an attack had already occurred or was in progress within that network.

The researchers describe their approach and findings in a paper to be presented today (Oct. 10) at the 2019 IEEE Ubiquitous Computing, Electronics and Mobile Communication Conference. The team received the “Best Paper” award for their work.

One of the data analysis techniques the team applied was the freely available, open-source R statistical suite, which they used to characterize the IoT systems in use on the Canberra network. In addition, they used machine learning solutions to search for patterns in the data that were not apparent using R.

“One of the challenges in maintaining security for IoT networks is simply identifying all the devices that are operating on the network,” said John Haller, a student in the Penn State World Campus information sciences and technology program. “Statistical programs, like R, can characterize and identify the user agents.”

The researchers used the widely available Splunk intrusion detection tool, which comprises software for searching, monitoring and analyzing network traffic via a Web-style interface.

“Splunk is an analytical tool that is often used in traditional network traffic monitoring, but had only seen limited application to IoT traffic, until now,” said Melanie Seekins.

Using these tools, and others, the team identified three IP addresses that were actively trying to break into the Canberra network’s devices.

“We observed three IP addresses attempting to attach to the IoT devices multiple times over a period of time using different protocols,” said Andrew Brandon. “This clearly indicates a Distributed Denial of Service attack, which aims to disrupt and/or render devices unavailable to the owners.”
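The snippet below sketches the kind of check this implies, counting connection attempts per source address and flagging heavy repeaters; the log format, addresses and threshold are hypothetical, and this is not the team’s actual R and Splunk pipeline.

```python
# Illustrative sketch only: count how often each source IP tries to reach
# the IoT devices and flag heavy repeaters. The log format, addresses,
# and threshold are hypothetical.
from collections import Counter

log_records = [
    {"src_ip": "203.0.113.5",  "dst_port": 23,   "protocol": "telnet"},
    {"src_ip": "203.0.113.5",  "dst_port": 80,   "protocol": "http"},
    {"src_ip": "198.51.100.7", "dst_port": 1883, "protocol": "mqtt"},
    {"src_ip": "203.0.113.5",  "dst_port": 22,   "protocol": "ssh"},
    # ... thousands more records in a real capture
]

attempts_per_ip = Counter(r["src_ip"] for r in log_records)
SUSPICION_THRESHOLD = 3  # hypothetical cutoff for repeated attempts

for ip, count in attempts_per_ip.most_common():
    if count >= SUSPICION_THRESHOLD:
        print(f"{ip}: {count} connection attempts across multiple protocols")
```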

To ground their approach, the researchers compared it to a common framework used to help manage risk: the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF).

“The NIST RMF was not created for IoT systems, but it provides a framework that organizations can use to tailor, test, and monitor implemented security controls. This lends credibility to our approach,” said Brandon.

Ultimately, Seekins said, the ability to analyze IoT data using the team’s approach may enable security professionals to identify and manage controls to mitigate risk and analyze incidents as they occur.

“Knowing what has taken place in an actual attack helps us write scripts and monitors to look for those patterns,” she said. “These predictive patterns and the use of machine learning and artificial intelligence can help us anticipate and prepare for major attacks using IoT devices.”

The team hopes their approach will contribute to the creation of a standard protocol for IoT network security.

“There is no standardization for IoT security,” said Seekins. “Each manufacturer or vendor creates their own idea of what security looks like, and this can become proprietary and may or may not work with other devices. Our strategy is a good first step toward alleviating this problem.”

Story Source:

Materials provided by Penn State. Note: Content may be edited for style and length.

New method improves measurement of animal behavior using deep learning

A new toolkit goes beyond existing machine learning methods by measuring body posture in animals with high speed and accuracy. Developed by researchers from the Centre for the Advanced Study of Collective Behaviour at the University of Konstanz and the Max Planck Institute of Animal Behavior, this deep learning toolkit, called DeepPoseKit, combines previous methods for pose estimation with state-of-the-art developments in computer science. These newly-developed deep learning methods can correctly measure body posture from previously-unseen images after being trained with only 100 examples and can be applied to study wild animals in challenging field settings. Published today in the open access journal eLife, the study is advancing the field of animal behaviour with next-generation tools while at the same time providing an accessible system for non-experts to easily apply machine learning to their behavioural research.

Animals must interact with the physical world in order to survive and reproduce, and studying their behaviour can reveal the solutions that have evolved for achieving these ultimate goals. Yet behaviour is hard to define just by observing it directly: the biases and limited processing power of human observers inhibit the quality and resolution of behavioural data that can be collected from animals.

Machine learning has changed that. Various tools have been developed in recent years that allow researchers to automatically track the locations of animals’ body parts directly from images or videos — without the need for applying intrusive markers on animals or manually scoring behaviour. These methods, however, have shortcomings that limit performance. “Existing tools for measuring body posture with deep learning were either slower and more accurate or faster and less accurate — but we wanted to achieve the best of both worlds,” says lead author Jake Graving, a graduate student at the Max Planck Institute of Animal Behavior.

In the new study, researchers present an approach that overcomes this speed-accuracy trade-off. These new methods use an efficient, state-of-the-art deep learning model to detect body parts in images, and a fast algorithm for calculating the location of these detected body parts with high accuracy. Results from this study also demonstrate that these new methods can be applied across species and experimental conditions — from flies, locusts, and mice in controlled laboratory settings to herds of zebras interacting in the wild. Dr. Blair Costelloe, a co-author of the paper who studies zebras in Kenya, says: “The posture data we can now collect for the zebras using DeepPoseKit allows us to know exactly what each individual is doing in the group and how they interact with the surrounding environment. In contrast, existing technologies like GPS will reduce this complexity down to a single point in space, which limits the types of questions you can answer.”
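As a generic illustration of that second step (not DeepPoseKit’s actual API or exact algorithm), a detected confidence map can be refined to a precise keypoint location by taking a weighted centroid around its peak:

```python
# Generic sketch of refining a body-part location from a model's confidence
# map with a local weighted centroid around the peak. This is a common
# subpixel trick, not DeepPoseKit's actual API or exact algorithm.
import numpy as np

def subpixel_keypoint(confidence_map, radius=2):
    """Return (row, col) of the peak, refined by a local weighted centroid."""
    r0, c0 = np.unravel_index(np.argmax(confidence_map), confidence_map.shape)
    r_lo, r_hi = max(r0 - radius, 0), min(r0 + radius + 1, confidence_map.shape[0])
    c_lo, c_hi = max(c0 - radius, 0), min(c0 + radius + 1, confidence_map.shape[1])
    patch = confidence_map[r_lo:r_hi, c_lo:c_hi]
    rows, cols = np.mgrid[r_lo:r_hi, c_lo:c_hi]
    weight = patch.sum()
    return (rows * patch).sum() / weight, (cols * patch).sum() / weight

# A fake confidence map with a blurred peak near (10.4, 20.7).
yy, xx = np.mgrid[0:64, 0:64]
fake_map = np.exp(-((yy - 10.4) ** 2 + (xx - 20.7) ** 2) / 4.0)
print(subpixel_keypoint(fake_map))  # close to (10.4, 20.7)
```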

Due to its high performance and easy-to-use software interface (the code is publicly available on Github, https://github.com/jgraving/deepposekit), the researchers say that DeepPoseKit can immediately benefit scientists across a variety of fields — such as neuroscience, psychology, and ecology — and levels of expertise. Work on this topic can also have applications that affect our daily lives, such as improving similar algorithms for gesture recognition used on smartphones or diagnosing and monitoring movement-related diseases in humans and animals.

“In just a few short years deep learning has gone from being a sort of niche, hard-to-use method to one of the most democratized and widely-used software tools in the world,” says Iain Couzin, senior author on the paper who leads the Centre for the Advanced Study of Collective Behaviour at the University of Konstanz and the Department of Collective Behaviour at the Max Planck Institute of Animal Behavior. “Our hope is that we can contribute to behavioural research by developing easy-to-use, high-performance tools that anybody can use.” Tools like these are important for studying behaviour because, as Graving puts it: “They allow us to start with first principles, or ‘how is the animal moving its body through space?’, rather than subjective definitions of what constitutes a behaviour. From there we can begin to apply mathematical models to the data and develop general theories that help us to better understand how individuals and groups of animals adaptively organize their behaviour.”

Story Source:

Materials provided by University of Konstanz. Note: Content may be edited for style and length.