Neural networks blaze through wild animal identification and animation tasks

Described as a significant advancement in the study and conservation of wildlife, the artificial intelligence technique, developed at the University of Wyoming (UW), has been made available as a software package for the R programming language.

“The ability to rapidly identify millions of images from camera traps can fundamentally change the way ecologists design and implement wildlife studies,” says the paper on the breakthrough, whose lead authors, Michael Tabak and Ryan Miller, are PhD graduates of UW’s Department of Zoology and Physiology.

The UW study builds on research from earlier this year, in which a computer model analysed 3.2 million camera-trap images collected on the plains of Africa by a citizen-science project known as Snapshot Serengeti.

Using the deep-learning technique, that model categorised animal images with 96.6 per cent accuracy, matching teams of human volunteers but at a far faster pace, according to the study.

In the university’s latest study, the researchers trained a deep neural network on Mount Moran, UW’s high-performance computer cluster, to classify wildlife species using 3.37 million camera-trap images, gathered in five US states and covering 27 species of animals.
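
As a rough illustration of what such a classifier involves, the sketch below fine-tunes an ImageNet-pretrained convolutional network for 27 wildlife classes in PyTorch. Only the class count comes from the study; the ResNet backbone, directory layout and hyperparameters are assumptions made here for illustration, not the UW team’s actual code.

```python
# Minimal sketch of fine-tuning a pretrained CNN to classify 27 wildlife
# species from camera-trap images. Illustrative only; backbone, paths and
# hyperparameters are assumptions, not the study's actual configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_SPECIES = 27  # per the study: 27 species of animals

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes images are organised as camera_traps/train/<species_name>/<image>.jpg
train_set = datasets.ImageFolder("camera_traps/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)  # new classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Replacing only the final layer lets the network reuse generic visual features learned from ImageNet, a common starting point for camera-trap classifiers of this kind.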

The model was then tested on approximately 375,000 animal images, processing them at a rate of about 2,000 images per minute on a laptop and achieving 97.6 per cent accuracy, reported as the highest to date for machine learning applied to wildlife image classification.
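
A throughput figure like the 2,000 images per minute quoted above can be estimated with a simple timed inference loop. In the hedged sketch below, the checkpoint name, dataset path and batch size are illustrative assumptions rather than details from the study.

```python
# Sketch of timed batch inference to estimate classification throughput.
import time
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

test_set = datasets.ImageFolder("camera_traps/test", transform=transform)
loader = torch.utils.data.DataLoader(test_set, batch_size=128)

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 27)          # 27 species, per the study
model.load_state_dict(torch.load("wildlife_model.pt"))  # hypothetical checkpoint
model.eval()

correct = total = 0
start = time.perf_counter()
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
elapsed = time.perf_counter() - start

print(f"accuracy: {correct / total:.1%}")
print(f"throughput: {total / elapsed * 60:.0f} images per minute")
```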

The model was also tested on an independent subset of 5,900 images of moose, cattle, elk and wild pigs from Canada, achieving an accuracy rate of 81.8 per cent, and the technique was 94 per cent successful in removing “empty” images (photos without any animals) from a set of photographs taken in Tanzania.
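
Filtering “empty” images can be pictured as a threshold on the model’s confidence that no animal is present. The sketch below assumes the classifier has a dedicated “empty” output class and keeps an image only when that probability is low; the class index and threshold are hypothetical, as the paper’s exact criterion is not described here.

```python
# Sketch of filtering out "empty" camera-trap images (no animal present),
# assuming the classifier has a dedicated "empty" output class.
import torch

EMPTY_CLASS = 0   # assumed index of the "empty" class in the model's output
THRESHOLD = 0.5   # assumed cut-off for declaring an image empty

def split_empties(model: torch.nn.Module, images: torch.Tensor):
    """Return boolean masks (keep, discard) for a batch of image tensors."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(images), dim=1)
    is_empty = probs[:, EMPTY_CLASS] >= THRESHOLD
    return ~is_empty, is_empty
```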

The study was published in the scientific journal Methods in Ecology and Evolution.

Work on developing faster and more intelligent neural networks is ongoing around the world, for a wide variety of uses. The Japanese post-production company Imagica Group, animation and film studio OLM Digital and researchers from the Nara Institute of Science and Technology (NAIST) have jointly developed a technique for automatic colourisation in anime production.

To promote efficiency and automation in anime production, the research team focused on automating the colourisation of trace images in the finishing stage of the production pipeline.

By integrating production technology and anime expertise from Imagica and OLM Digital respectively with machine learning, computer graphics and computer vision research from NAIST, the research team developed what they describe as the world’s first technique for automated colourisation in Japanese anime production.

After the trace images are cleaned in a pre-processing step, automatic colourisation is performed according to the character’s colour script using a deep-learning-based image segmentation algorithm. The result is then refined in a post-processing step that applies a voting technique to each closed region of the drawing.
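
The closed-region voting step can be pictured as follows: the line drawing partitions the frame into enclosed regions, and each region takes the most common label the segmentation network predicted inside it. Below is a minimal sketch of that idea using SciPy’s connected-component labelling; it illustrates the voting concept only and is not the Imagica/OLM Digital/NAIST implementation.

```python
# Sketch of refining a per-pixel colour-label map by majority vote within
# each closed region of the line drawing. Illustrative only.
import numpy as np
from scipy import ndimage

def refine_by_region_voting(label_map: np.ndarray,
                            line_art: np.ndarray) -> np.ndarray:
    """label_map: integer class labels from the segmentation network.
    line_art: boolean image, True where trace lines are drawn."""
    # Closed regions are the connected components of the non-line pixels.
    regions, n_regions = ndimage.label(~line_art)
    refined = label_map.copy()
    for region_id in range(1, n_regions + 1):
        mask = regions == region_id
        # Majority vote: assign the region's most common predicted label.
        votes = np.bincount(label_map[mask])
        refined[mask] = votes.argmax()
    return refined
```

Voting per region suppresses stray per-pixel misclassifications, since a closed region in a trace image is expected to receive a single flat colour.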

The collaborative team aim to present this technique at Siggraph Asia 2018, an international conference on computer graphics and interactive techniques, which takes place in Tokyo, Japan, at the start of December.

Last Friday, it emerged that Cornell University researchers have been studying how aquatic animals leap out of the water and have used their niche knowledge to build simple robots inspired by these animals and their antics.

In June, meanwhile, Nvidia researchers created a deep-learning system that can convert standard video footage into smooth slow-motion by generating the intermediate frames.

Siobhan Doyle, E&T News

https://eandt.theiet.org/content/articles/2018/11/neural-networks-blaze-through-wild-animal-identification-and-animation-tasks/