The White House has just issued an executive order on the “Use of Trustworthy Artificial Intelligence in Government”. Setting aside the unexamined assumption of the government's own trustworthiness, and the fact that trust in software is precisely the issue, the order is almost entirely hot air.
The EO, like others, is limited to what a president can compel federal agencies to do – and in practice that isn't much. This one “directs federal agencies to be guided by nine principles”, a phrasing that reveals just how much force it carries. Please, agencies – let yourselves be guided!
And then of course all military and national security activities are excluded – exactly where AI systems are most dangerous and oversight is most important. Nobody is worried about what NOAA is doing with AI, but plenty of people are very concerned about what the three-letter agencies and the Pentagon are up to. (They have their own, self-imposed rules.)
The principles read like a wish list. AI used by the government must be:
lawful; purposeful and performance-driven; accurate, reliable, and effective; safe, secure, and resilient; understandable; responsible and traceable; regularly monitored; transparent; and accountable.
I would challenge anyone to find any significant deployment of AI anywhere in the world that is all of these things. Any agency's claim that an AI or machine learning system it uses conforms to all the principles outlined in the EO should be treated with the utmost skepticism.
It's not that the principles themselves are bad or pointless – it is certainly important that an agency be able to quantify the risks when considering using AI for something, and that a process be in place to monitor its effects. But an executive order doesn't accomplish this. Strong laws, likely starting at the city and state levels, have already shown what it means to demand AI accountability, and while federal legislation is unlikely to arrive anytime soon, this EO is no substitute for a full-blown bill. It's just too hand-wavy on just about everything. Besides, many agencies adopted “principles” like these years ago.
The only thing the EO actually does is require each agency to produce an inventory of all the ways it uses AI, however defined. And of course, it will be more than a year before we see any of these.
Agencies will settle on the format for these AI inventories within 60 days of the order. They then have 180 days to complete the inventory; 120 days after that, the inventory must be reviewed for consistency with the principles. Agencies must “strive for” plans to bring non-compliant systems into line within 180 more days. Meanwhile, inventories must be shared with other agencies within 60 days of their completion, and then made available to the public (minus anything sensitive to law enforcement, national security and so on) within 120 days of completion.
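For those keeping score, here is a back-of-the-envelope tally of the order's stacked deadlines, assuming each clock starts when the previous stage ends (my reading of the sequence, not the order's explicit wording; the stage labels are informal):

```python
# Rough tally of the EO's stacked deadlines, in days.
# Assumes stages run back to back; labels are informal, not the order's terms.
stages = [
    ("settle on inventory format", 60),
    ("complete inventory", 180),
    ("review for consistency with principles", 120),
    ("plans to bring systems into line", 180),
]

elapsed = 0
for name, days in stages:
    elapsed += days
    print(f"{name}: day {elapsed}")

print(f"total: {elapsed} days, roughly {elapsed / 365:.1f} years")
```

Run back to back, the stages land around day 540 – which is where the “year and a half” estimate comes from.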
In theory we could have these inventories in about a year, but in practice we're looking at more like a year and a half. At that point, we'll have a snapshot of the previous administration's AI tools, with all the juicy bits removed at their discretion. Still, it could make for an interesting read, depending on what exactly is in it.
This executive order, like others of its kind, is an attempt by this White House to look like an active leader on something that is almost entirely out of its hands. AI should absolutely be developed and deployed according to common principles, but even if such principles could be handed down from the top, this loose, barely binding gesture will at best nudge a few agencies into fixing small problems. Swearing to think really hard about it is not the right way to do this.