DCMS committee calls for action on social media’s propagation of disinformation

It’s time to stop treating social-media products as entirely neutral platforms and to make them act more in line with the role of a publisher. That’s one of the main conclusions of the House of Commons Digital, Culture, Media and Sport (DCMS) Committee.

The committee published its report into disinformation and fake news on Monday (18 February 2019), in the wake of multiple scandals in both the UK and the US where state actors and other well-funded interests skewed the information people see in their social-media applications. Some of the problems stem from the companies putting growth above all other considerations, both in determining when and how to use personal data and in choosing the kinds of content they promote. Although the companies like to argue they are simply vehicles for others’ content, they have to a large extent compromised their own defence by working against their users’ interests.

Although they do not see the social-media companies as being the same as publishers, the committee’s members argue that a new category of technology company needs to be created, one that sits halfway between the provider of a neutral platform, such as a telecoms network, and a traditional publisher. Such a company would be forced to remove “harmful” information once it has been identified.

However, a key question remains: what constitutes “harmful”? This is likely to be central to social-media companies’ defence of the status quo. We can expect these highly profitable organisations to plead poverty when confronted with the problem of choking off disinformation, as they have done over the EU’s moves to tighten up copyright protection. Both are potentially costly exercises for these companies.

Some aspects of disinformation are relatively straightforward, and it seems almost bizarre that social-media companies have proven unable to deal with them, though much of that failure appears to stem from unwillingness. One example is the type of fake ad that plagues the likes of Martin Lewis and the judges on Dragons’ Den. Their faces and names are used aggressively by a variety of scams, ranging from food supplements to cryptocurrency cash-ins, in ads dressed up to look like news stories.

Magazine publishers mostly follow a code of practice that works reasonably well. Online advertising companies do not bother, often arguing that their algorithms are unable to spot the problem upfront, and instead rely on after-the-fact reporting. Even Facebook’s upcoming tool, created only after Lewis took the company to court, will not identify problematic ads upfront. Given that most of these ads exploit celebrities, and social-media companies have access to reasonable face-recognition AI software, a tool that flags ads containing those faces for special attention might be a good idea. However, it is currently unclear whether the tool will be entirely complaint-driven or will have some recognition functions built in. Google and other advertising platforms, which distribute these ads to many websites, have taken no action other than pointing to existing complaint forms that assume the subjects of the ad will have seen them. That’s not easy in a world of micro-targeted campaigns.
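
The committee does not prescribe how such a flagging tool would work, and none of the platforms has described one, but the idea is technically straightforward. The sketch below is purely illustrative: it assumes the open-source Python face_recognition library, and the reference photo paths, ad file names and match tolerance are all hypothetical rather than anything Facebook or Google has confirmed.

```python
# Illustrative sketch only: flag ad creatives that appear to contain the face of a
# frequently impersonated public figure, so they can be routed to manual review.
# Uses the open-source face_recognition library; all file paths are hypothetical.
import face_recognition

# Reference photos of people whose likenesses are commonly abused in scam ads.
KNOWN_FACES = {
    "Martin Lewis": "reference_photos/martin_lewis.jpg",
}

def load_known_encodings(known_faces):
    """Compute one face encoding per reference photo."""
    encodings = {}
    for name, path in known_faces.items():
        image = face_recognition.load_image_file(path)
        faces = face_recognition.face_encodings(image)
        if faces:
            encodings[name] = faces[0]
    return encodings

def flag_ad_image(ad_image_path, known_encodings, tolerance=0.6):
    """Return the names of known faces that appear to match faces in an ad creative."""
    ad_image = face_recognition.load_image_file(ad_image_path)
    matches = []
    for ad_face in face_recognition.face_encodings(ad_image):
        for name, known in known_encodings.items():
            if face_recognition.compare_faces([known], ad_face, tolerance=tolerance)[0]:
                matches.append(name)
    return matches

if __name__ == "__main__":
    known = load_known_encodings(KNOWN_FACES)
    hits = flag_ad_image("submitted_ads/ad_001.jpg", known)
    if hits:
        print("Ad flagged for manual review: possible use of " + ", ".join(hits))
```

Nothing of this sort would catch every scam, and false positives would still need human review, but it suggests that flagging celebrity likenesses upfront is well within reach of companies that already deploy face recognition at scale.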

For other forms of disinformation – beyond obvious hate speech and encouragement to suicide – automated identification is likely to be more problematic. This will doubtless lead to even greater foot-dragging from the social-media companies. Much of the problem is that judging harm is not just beyond the power of algorithms; it is not all that easy for humans either. Text, video and images are often difficult to classify. As US Supreme Court justice Potter Stewart remarked in a landmark case over a Louis Malle movie in 1964: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”] and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.”

The level of harm in a piece of content will continue to be extremely difficult to adjudicate. As a result, social-media companies are likely to find their headcount growing significantly. Their cost of doing business will increase in line with their own success in embracing Metcalfe’s Law of social scale (the value of a network growing roughly with the square of its user numbers), and they may find the days of easy profit through automation receding into the past.

Also under the committee’s scrutiny was a less obvious aspect of disinformation campaigns: their use of micro-targeting. Again, this will not be easy for organisations such as the Information Commissioner’s Office (ICO) to police, because it will rely on access to algorithms and an understanding of what they achieve. In an era of machine learning, even the creators may not know what their algorithms really do.

However, the ICO asked for this protection and the DCMS committee agreed that “protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual”. This would crack down on the triangulation of seemingly anonymous data to identify users’ preferences and target campaigns at them. Potentially, political campaigns would, for once, be subject to greater scrutiny than their commercial counterparts.
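
To see why inference from seemingly anonymous data worries the committee, consider the toy sketch below. It uses entirely made-up data, not anything from the report, to show how joining two datasets on shared quasi-identifiers such as postcode, birth date and gender re-attaches names, and therefore inferred preferences, to records that carry no names at all.

```python
# Illustrative sketch with made-up data: re-identifying "anonymous" records by
# triangulating quasi-identifiers against a second dataset that carries names.
import pandas as pd

# An "anonymised" dataset of inferred interests: no names, but quasi-identifiers remain.
anonymous = pd.DataFrame({
    "postcode":   ["SW1A 1AA", "M1 2AB"],
    "birth_date": ["1980-05-01", "1992-11-23"],
    "gender":     ["F", "M"],
    "interest":   ["immigration policy", "cryptocurrency"],
})

# A separate dataset (for example a marketing list) that does include names.
named = pd.DataFrame({
    "name":       ["Jane Doe", "John Smith"],
    "postcode":   ["SW1A 1AA", "M1 2AB"],
    "birth_date": ["1980-05-01", "1992-11-23"],
    "gender":     ["F", "M"],
})

# Joining on the shared quasi-identifiers re-attaches names to the "anonymous" interests,
# which is the kind of inference the committee wants privacy law to cover.
reidentified = anonymous.merge(named, on=["postcode", "birth_date", "gender"])
print(reidentified[["name", "interest"]])
```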

Funding for an expanded ICO remit could come from a levy on the social-media companies themselves – something that will doubtless lead to intense lobbying activity. Despite the difficulties, a crackdown on social-media companies is now inevitable, although effective legislation is going to be exceedingly difficult to develop.

Chris Edwards, E&T News
https://eandt.theiet.org/content/articles/2019/02/dcms-committee-calls-for-action-on-social-media-s-propagation-of-disinformation/
