
By John P. Desmond, AI Trends Editor
Two experiences of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event, held virtually and in-person this week in Alexandria, Va.

Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency and plans to make available to others.
And Bryce Goodman, chief strategist for AI and machine learning at the Defense Innovation Unit (DIU), a unit of the Department of Defense founded to help the US military make use of emerging commercial technologies, described work in his unit to apply principles of AI development to terminology that an engineer can apply.
Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of GAO’s Innovation Lab, discussed an AI accountability framework he helped to develop by convening a forum of experts from government, industry, and nonprofits, as well as federal inspector general officials and AI experts.
“We are adopting an auditor’s perspective on the AI accountability framework,” Ariga said. “GAO is in the business of verification.”
The effort to produce a formal framework began in September 2020 and brought together participants who were 60% women and 40% underrepresented minorities for discussions over two days. The effort was spurred by a desire to ground the AI accountability framework in the reality of an engineer’s day-to-day work. The resulting framework was first published in June, described by Ariga as “version 1.0.”
Seeking to Bring a “High-Altitude Posture” Down to Earth
“We found the AI accountability framework had a very high-altitude posture,” Ariga said. “These are laudable ideals and aspirations, but what do they mean to the day-to-day AI practitioner? There is a gap, while we see AI proliferating across the government.”
“We landed on a lifecycle approach,” which steps through the stages of design, development, deployment, and continuous monitoring. The effort stands on four “pillars” of governance, data, monitoring, and performance.
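To make that structure concrete, here is a minimal illustrative sketch, not GAO tooling, that lays the four pillars across the lifecycle stages as a matrix of review questions; the stage and pillar names come from the framework as described, while the sample questions are paraphrased assumptions:
```python
# Illustrative sketch: the framework as a stages-by-pillars matrix.
# Stage and pillar names are from the article; the sample questions
# are paraphrased assumptions, not official GAO audit language.
STAGES = ["design", "development", "deployment", "continuous monitoring"]
PILLARS = ["governance", "data", "monitoring", "performance"]

AUDIT_QUESTIONS = {
    ("design", "governance"): "Is a chief AI officer in place, with authority to make changes?",
    ("development", "data"): "How was the training data evaluated, and how representative is it?",
    ("deployment", "performance"): "What societal impact will the system have in deployment?",
    ("continuous monitoring", "monitoring"): "Is the model drifting? Should the system be sunset?",
}

def audit_plan():
    """Walk every stage/pillar cell, flagging cells without a defined question."""
    for stage in STAGES:
        for pillar in PILLARS:
            question = AUDIT_QUESTIONS.get((stage, pillar), "TODO: define review question")
            print(f"[{stage} / {pillar}] {question}")

if __name__ == "__main__":
    audit_plan()
```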
Governance reviews what the organization has put in place to oversee its AI efforts. “The chief AI officer might be in place, but what does that mean? Can the person make changes? Is it multidisciplinary?” At a system level within this pillar, the team will review individual AI models to see whether they were “purposefully deliberated.”
For the data pillar, his team will examine how the training data was evaluated, how representative it is, and whether it is functioning as intended.
For the performance pillar, the team will consider the “societal impact” the AI system will have in deployment, including whether it risks a violation of the Civil Rights Act. “Auditors have a long-standing track record of evaluating equity,” Ariga said. “We grounded the evaluation of AI to a proven system.”
Emphasizing the importance of continuous monitoring, he said, “AI is not a technology you deploy and forget. We are preparing to continually monitor for model drift and the fragility of algorithms, and we are scaling the AI appropriately.” The evaluations will determine whether the AI system continues to meet the need “or whether a sunset is more appropriate,” Ariga said.
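The article does not describe GAO’s monitoring tooling, but as a hedged sketch of the kind of drift check Ariga alludes to, one common approach compares live feature values against the training distribution with a two-sample Kolmogorov-Smirnov test; the alert threshold and the synthetic data below are assumptions:
```python
# Minimal drift-monitoring sketch (illustrative only; not GAO's system).
# Compares live feature values against the training distribution with a
# two-sample Kolmogorov-Smirnov test; a small p-value suggests drift.
import numpy as np
from scipy.stats import ks_2samp

P_VALUE_THRESHOLD = 0.01  # assumed alert threshold; tune per deployment

def check_drift(training_values: np.ndarray, live_values: np.ndarray) -> bool:
    """Return True if the live distribution appears to have drifted."""
    statistic, p_value = ks_2samp(training_values, live_values)
    drifted = p_value < P_VALUE_THRESHOLD
    print(f"KS statistic={statistic:.3f}, p={p_value:.4f}, drifted={drifted}")
    return drifted

# Example: simulated training data vs. a mean-shifted live distribution.
rng = np.random.default_rng(seed=0)
training = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=1_000)  # shifted mean -> drift
check_drift(training, live)
```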
Ariga is also part of a discussion with NIST on an overall government AI accountability framework. “We don’t want an ecosystem of confusion,” Ariga said. “We want a whole-government approach. We feel this is a useful first step in pushing high-level ideas down to an altitude meaningful to the practitioners of AI.”
DIU Assesses Whether Proposed Projects Meet Ethical AI Guidelines

At the DIU, Goodman is involved in a similar effort to develop guidelines for developers of AI projects within the government.
Projects Goodman has been involved with include the implementation of AI for humanitarian assistance and disaster response, predictive maintenance, counter-disinformation, and predictive health. He heads the Responsible AI Working Group. He is a faculty member of Singularity University, has a wide range of consulting clients inside and outside the government, and holds a PhD in AI and Philosophy from the University of Oxford.
In February 2020, the DOD adopted five areas of Ethical Principles for AI after 15 months of consultation with AI experts in commercial industry, government, academia, and the American public. These areas are: responsible, equitable, traceable, reliable, and governable.
“They are well-conceived, but it’s not obvious to an engineer how to translate them into a specific project requirement,” Goodman said in a presentation on Responsible AI Guidelines at the AI World Government event. “That’s the gap we are trying to fill.”
Before the DIU even considers a project, it is run through the ethical principles to see whether it passes muster. Not all projects do. “There needs to be an option to say the technology is not there, or the problem is not compatible with AI,” he said.
All project stakeholders, including commercial vendors and those within the government, need to be able to test and validate, and to go beyond minimum legal requirements in meeting the principles. “The law is not moving as fast as AI, which is why these principles are important,” he said.
Also, collaboration is going on across the government to ensure values are being preserved and maintained. “Our intention with these guidelines is not to try to achieve perfection, but to avoid catastrophic consequences,” Goodman said. “It can be difficult to get a group to agree on what the best outcome is, but it’s easier to get the group to agree on what the worst-case outcome is.”
The DIU guidelines, along with case studies and supplemental materials, will be published on the DIU website “soon,” Goodman said, to help others leverage the experience.
Questions the DIU Asks Before Development Starts
The first step in the guidelines is to define the task. “That is the single most important question,” he said. “Only if there is an advantage should you use AI.”
Next is the benchmark, which needs to be set up front so the team will know whether the project has delivered.
Next, he evaluates ownership of the candidate data. “Data is critical to the AI system and is the place where a lot of problems can exist,” Goodman said. “We need a certain contract on who owns the data. If that is ambiguous, it can lead to problems.”
Next, Goodman’s team wants a sample of the data to evaluate. Then, they need to know how and why the information was collected. “If consent was given for one purpose, we cannot use it for another purpose without re-obtaining consent,” he said.
Next, the team asks whether the responsible stakeholders have been identified, such as pilots who could be affected if a component fails.
Next, the responsible mission-holders must be identified. “We need a single individual for this,” Goodman said. “Often we have a tradeoff between the performance of an algorithm and its explainability. We might have to decide between the two. Those kinds of decisions have an ethical component and an operational component. So we need someone who is accountable for those decisions, which is consistent with the chain of command in the DOD.”
Finally, the DIU team requires a process for rolling back if things go wrong. “We need to be cautious about abandoning the previous system,” he said.
Once all these questions are answered in a satisfactory way, the team moves on to the development phase.
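As a hypothetical sketch only (the DIU guidelines themselves had not yet been published at the time of the event), the questions above could be captured as a go/no-go checklist that blocks the development phase until every item is resolved; the field names and structure are assumptions for illustration:
```python
# Hypothetical go/no-go gate modeled on the DIU questions described above.
# Field names and structure are illustrative assumptions, not DIU's tooling.
from dataclasses import dataclass, fields

@dataclass
class PreDevelopmentChecklist:
    task_defined: bool              # Is the task defined, and does AI offer an advantage?
    benchmark_set_up_front: bool    # Can we tell whether the project has delivered?
    data_ownership_clear: bool      # Is there a contract stating who owns the data?
    data_sample_evaluated: bool     # Has a sample of the data been reviewed?
    consent_covers_this_use: bool   # Was consent obtained for this specific purpose?
    stakeholders_identified: bool   # Who is affected if a component fails?
    accountable_owner_named: bool   # One person owns performance/explainability tradeoffs.
    rollback_process_defined: bool  # Can we fall back to the previous system?

    def ready_for_development(self) -> bool:
        """All questions must be answered 'yes' before development starts."""
        unmet = [f.name for f in fields(self) if not getattr(self, f.name)]
        if unmet:
            print("Blocked; unresolved items:", ", ".join(unmet))
            return False
        return True

checklist = PreDevelopmentChecklist(
    task_defined=True, benchmark_set_up_front=True, data_ownership_clear=True,
    data_sample_evaluated=True, consent_covers_this_use=False,
    stakeholders_identified=True, accountable_owner_named=True,
    rollback_process_defined=True,
)
checklist.ready_for_development()  # Blocked; consent_covers_this_use unresolved
```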
Among the lessons learned, Goodman said, “Metrics are key. And simply measuring accuracy may not be adequate. We need to be able to measure success.”
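A small worked example of that point, using scikit-learn and synthetic labels as assumptions: on imbalanced data, a model that always predicts the majority class scores 95% accuracy while catching none of the cases that matter, which is why additional metrics are needed.
```python
# Why accuracy alone can mislead (illustrative, synthetic data).
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Imbalanced ground truth: 95 negatives, 5 positives.
y_true = [0] * 95 + [1] * 5
# A useless model that always predicts the majority class.
y_pred = [0] * 100

print("accuracy :", accuracy_score(y_true, y_pred))                    # 0.95
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall   :", recall_score(y_true, y_pred, zero_division=0))     # 0.0
```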
Also, fit the technology to the task. “High-risk applications require low-risk technology. And when potential harm is significant, we need to have high confidence in the technology,” he said.
Another lesson learned is to set expectations with commercial vendors. “We need vendors to be transparent,” he said. “When someone says they have a proprietary algorithm they cannot tell us about, we are very wary. We view the relationship as a collaboration. It’s the only way we can ensure the AI is developed responsibly.”
Lastly, “AI is not magic. It will not solve everything. It should only be used when necessary, and only when we can prove it will provide an advantage.”
Learn more at AI World Government, at the Government Accountability Office, at the AI Accountability Framework, and at the Defense Innovation Unit site.