When it comes to artificial intelligence (AI), the Cybersecurity and Infrastructure Security Agency (CISA) has spoken:
These systems need to be an open book.
What does CISA say about AI?
AI technologies have advanced rapidly in recent years, easing human work through algorithms and machine learning.
But that speed also brings questions, particularly concerning AI's relationship with cybersecurity.
According to CISA, artificial intelligence should not be shielded by intellectual property claims; developers will need to disclose design elements in order to foster accountability for these systems and their security.
Here's what Martin Stanley, a senior technical advisor who leads the development of CISA's AI strategy, told Nextgov about the stance:
"I don’t know how you can have a black-box algorithm that's proprietary and then be able to deploy it and be able to go off and explain what's going on.
I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what’s happening."
According to CISA, a lack of accountability can have serious consequences for these systems. While AI originates from code written by humans, these systems gradually evolve and develop as they consume data.
If the data is manipulated, or "poisoned," the outcomes can be disastrous.
And that "poisoned" data can lead to security vulnerabilities when left unchecked.
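To make the idea concrete, here is a minimal, hypothetical sketch of data poisoning: a toy nearest-mean classifier whose verdict on the same input flips after an attacker slips a few mislabeled points into its training set. All names and values are illustrative, not drawn from any real system.

```python
# Hypothetical sketch: a nearest-mean classifier "learns" from its training
# data, so mislabeled (poisoned) examples quietly shift its decisions.

def train(samples):
    """Compute the mean feature value for each label."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(value, means):
    """Assign the label whose training mean is closest to the input."""
    return min(means, key=lambda label: abs(value - means[label]))

# Clean training data: low scores are "safe", high scores are "malicious".
clean = [(0.1, "safe"), (0.2, "safe"), (0.8, "malicious"), (0.9, "malicious")]

# Poisoned data: two mid-range points deliberately mislabeled "malicious",
# dragging that class's mean toward the "safe" region.
poisoned = clean + [(0.45, "malicious"), (0.40, "malicious")]

print(predict(0.40, train(clean)))     # -> safe
print(predict(0.40, train(poisoned)))  # -> malicious (same input, shifted model)
```

The point is that nothing in the model's code changed; only the data did, which is exactly why unchecked training data is a security concern.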
"We've seen... trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell," said Josephine Wolff, a Tufts University cybersecurity professor.
"With AI, there's much more potential for vulnerabilities to stay covert than with other threat vectors. As models become increasingly complex it can take longer to realize that something is wrong before there’s a dramatic outcome."
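Wolff's "few pixels" point can be sketched with a toy example. The linear "classifier" and pixel values below are hypothetical stand-ins for a real image model, but they show the mechanism: when a decision sits near a threshold, an imperceptibly small nudge to a couple of inputs flips the output.

```python
# Hypothetical sketch: a tiny linear classifier over four "pixels".
# A 0.02 nudge to two pixels -- invisible to a person -- flips the label.

def classify(pixels, weights, bias=0.0):
    """Return 'cat' if the weighted pixel sum is positive, else 'dog'."""
    score = sum(p * w for p, w in zip(pixels, weights)) + bias
    return "cat" if score > 0 else "dog"

image = [0.60, 0.52, 0.48, 0.55]      # the "clean" image
weights = [1.0, -1.0, 1.0, -1.0]      # illustrative learned weights

print(classify(image, weights))       # -> cat

# Perturb two pixels by 0.02 each, in the directions that hurt the score.
perturbed = [image[0] - 0.02, image[1] + 0.02, image[2], image[3]]
print(classify(perturbed, weights))   # -> dog
```

Real adversarial attacks do the same thing at scale, using the model's gradients to pick which pixels to nudge and by how much.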
Mistakes in AI: a common trend
Such mistakes are more common than we realize, though we often fail to connect them to security vulnerabilities.
An AI trained on "poisoned" or biased data can easily make mistakes and reveal problems like racial bias.
Only recently, an AI bot journalist mistakenly used the wrong photo of a mixed-race musician in a news article. SecureWorld covered the story.
It is also worth acknowledging that these incidents are not entirely the fault of the artificial intelligence systems that produce them. A computer is only as good as the data it receives.
But when a machine receives biased data, it learns from that bias and compounds it. Rooting out prejudice that deeply ingrained is not unlike confronting the history of racism in America itself.
And remember, the data that AI receives comes from this society: from IT employees, from the internet, and from culture.
Be it security vulnerabilities or racial bias, AI systems are far from perfect.
And CISA is seeking accountability for that imperfection.