Google’s Jigsaw tool helps journalists flag doctored images

The platform, called Assembler, blends multiple image detection models into a single tool that can identify various forms of manipulation.

These models were provided by academics from the University of Maryland; the University Federico II of Naples in Italy; and the University of California, Berkeley. Jigsaw also collaborated with Google Research to create the tool.

Work on the platform began in 2016, according to a blog post by Jared Cohen, CEO and founder of Jigsaw. “Together with Google Research and academic partners, we developed an experimental platform called Assembler to test how technology can help fact-checkers and journalists identify and analyse manipulated media,” he wrote.

According to Cohen, Assembler’s detectors can spot specific types of manipulation, such as copy-paste edits and image brightness adjustments. The tool scans an image for manipulations, shows where they may have been applied, and indicates the probability that each manipulation occurred.
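
To make that output concrete, here is a minimal sketch of what a detector that reports both an image-level probability and a per-pixel localisation map might look like. The detection logic (flagging pixels whose brightness deviates from the image mean) is a hypothetical stand-in for Assembler's learned models, chosen only to illustrate the shape of the output.

```python
import numpy as np

def run_detector(image: np.ndarray) -> tuple[float, np.ndarray]:
    """Hypothetical detector: returns an overall manipulation probability
    and a per-pixel heatmap of where edits may have been applied.
    Pixels whose brightness deviates strongly from the image mean are
    flagged, standing in for a real trained model."""
    gray = image.mean(axis=2)                       # collapse RGB to intensity
    deviation = np.abs(gray - gray.mean())          # per-pixel deviation
    heatmap = deviation / (deviation.max() + 1e-9)  # normalise to [0, 1]
    probability = float(heatmap.mean())             # crude image-level score
    return probability, heatmap

# A toy image with one artificially brightened patch (a simulated edit)
img = np.full((8, 8, 3), 100.0)
img[2:4, 2:4] = 250.0
p, h = run_detector(img)  # the heatmap peaks over the edited patch
```

A real detector would replace the brightness heuristic with a trained network, but the interface (score plus heatmap) is the part journalists interact with.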

In the blog post, Cohen described Assembler as an “early-stage experimental platform”, meaning it’s not yet available to everyone. However, the firm has worked with various fact-checking platforms – including Agence France-Presse; Animal Politico; Code for Africa; Les Décodeurs du Monde; and Rappler – to test how the tool can be used by journalists and in newsrooms.

Jigsaw has also built two of its own detectors to use in Assembler. The first is a synthetic media detector that uses machine learning to identify deepfakes – images that look real but have actually been manipulated by artificial intelligence (AI).

Assembler has seven detectors which work on an image to look for specific types of manipulation, from Photoshop edits to the output of generative adversarial networks (GANs), which are used in creating deepfakes.

Cohen described the first tool in more detail in the post: “The first is the StyleGAN detector to specifically address deepfakes. This detector uses machine learning to differentiate between images of real people from deepfake images produced by the StyleGAN deepfake architecture.”
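
The StyleGAN detector Cohen describes is, at its core, a binary classifier over image features. The sketch below illustrates that idea under stated assumptions: the feature vectors, cluster parameters and logistic-regression choice are all hypothetical simplifications, not Assembler's actual architecture, which uses deep networks trained on real and StyleGAN-generated images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: each row stands in for a feature vector
# extracted from an image. Real photos and StyleGAN outputs are simulated
# here as two shifted Gaussian clusters, purely for illustration.
real_feats = rng.normal(0.0, 1.0, size=(200, 16))
fake_feats = rng.normal(0.8, 1.0, size=(200, 16))

X = np.vstack([real_feats, fake_feats])
y = np.array([0] * 200 + [1] * 200)   # 0 = real person, 1 = deepfake

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score an unseen feature vector: probability it is StyleGAN-generated
p_fake = clf.predict_proba(rng.normal(0.8, 1.0, size=(1, 16)))[0, 1]
```

The production system differs in scale and model class, but the task is the same: map an image to a probability that it came from a generative model rather than a camera.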

The second creation is an ensemble model, which was trained using signals from multiple detectors that simultaneously search for various forms of manipulation. Cohen claimed: “Because the ensemble model can identify multiple image manipulation types, the results are, on average, more accurate than any individual detector.”
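
One simple way to realise the ensemble idea Cohen describes is to combine the individual detectors' probability scores into a single weighted score. The detector names and weights below are illustrative assumptions, not Assembler's real configuration; its ensemble is a trained model rather than a fixed weighted average.

```python
def ensemble_score(scores: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Weighted average of per-detector manipulation probabilities.
    A trained ensemble would learn how to combine these signals;
    this fixed weighting only illustrates the aggregation step."""
    total_w = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_w

# Hypothetical outputs from three detectors on one image
detections = {"copy_move": 0.9, "brightness": 0.2, "splice": 0.7}
weights = {"copy_move": 2.0, "brightness": 1.0, "splice": 1.5}
combined = ensemble_score(detections, weights)
```

Because each detector specialises in one manipulation type, pooling their signals lets the combined score catch cases any single detector would miss, which is the intuition behind Cohen's accuracy claim.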

In September 2019, Google made a library of thousands of AI-manipulated videos publicly accessible, in the hope that researchers would use it to develop tools for detecting deceitful content.

In 2018, Adobe – the company whose Photoshop software is synonymous with image manipulation – developed a neural network capable of identifying regions of images that have been altered.


E&T editorial staff, E&T News

https://eandt.theiet.org/content/articles/2020/02/google-s-jigsaw-tool-helps-journalists-flag-doctored-images/