Government bodies not ‘sufficiently transparent’ about AI

The review [PDF] focused on how public bodies could continue to uphold the “Nolan Principles” of public life as AI becomes increasingly integrated into public services. It was chaired by crossbench peer Lord Jonathan Evans.
“On the issues of transparency and data bias in particular, there is an urgent need for practical guidance and enforceable regulation,” Evans wrote in a letter to the Prime Minister. “We conclude that the UK does not need a specific AI regulator, but all regulators must adapt to the challenges that AI poses to their specific sectors.”
According to the review, AI – when responsibly implemented – promises improved public standards, although it poses a challenge to the principles of accountability, objectivity, and openness.
The authors wrote that AI risks obscuring the chain of accountability in organisations by undermining attribution of responsibility for decisions made by officials. However, they said fears that AI is a black box may be overstated, and that “explainable” AI is a realistic goal.
They added that the government is already failing to be fully open about its use of AI: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”
They also said the prevalence of data bias threatens objectivity by embedding and amplifying discrimination in the public sector; this is an “issue of serious concern” with more work needed to mitigate the impact of bias.
The review states there are “significant deficiencies” in ensuring that AI is being used ethically in government and other public organisations, noting that public AI institutions have only recently been established and that there is no core set of ethical principles.
Despite these concerns, it does not recommend establishing a new regulator for AI, arguing that the Centre for Data Ethics and Innovation (CDEI) and other existing bodies should be sufficient to inform and enforce a regulatory framework setting clear legal boundaries on the use of AI in the public sector, particularly with regard to transparency and data bias.
Evans and his peers also recommend that the government establish an authoritative set of ethical principles for AI and that public bodies receive assistance when procuring technologies to ensure compliance with public standards. They said the application of anti-discrimination law to AI must be clarified and that the CDEI should advise existing regulators on how to adapt to new technologies.
“The Nolan Principles remain a valid guide for public sector practice in the age of AI. However, this new technology is a fast-moving field, so government and regulators will need to act swiftly to keep up with the pace of innovation,” they wrote.
According to Iain Brown, SAS regional head of data science, the government should prioritise responsible and ethical AI in order to build public trust in applications of the technology.
“Making decisions with the help of intelligent machines is one of the most progressive steps the public sector has ever taken. Yet, on this mission to reach a new level of quality in public services, responsible and ethical use of AI should be the priority,” he commented. “SAS research suggests that, whilst those working with AI are enthusiastic about its potential, the greatest barrier to this potential comes from concerns over trust. Maintaining the trust of the public, through complying with the new recommendations and being proactive in offering visibility over how AI is used, is essential if the technology is to become mainstream in the public sector. This includes informing the public on how data is collected and ensuring that decisions made can be justified.”
