AI ethics seen as a challenge by government AI engineers

by SkillAiNest

By John P. Desmond, AI Trends Editor

Engineers tend to see things in unambiguous terms, which some call black-and-white terms, such as a choice between right or wrong and good and bad. The consideration of ethics in AI, however, is highly nuanced, with vast gray areas, making it challenging for AI software engineers to apply it in their work.

That was a takeaway from a session on the future of standards and ethical AI at the AI World Government conference, held in person and virtually in Alexandria, Va. this week.

An overall impression from the conference is that the discussion of AI and ethics is happening in virtually every quarter of the vast federal government enterprise, and that the points being made across all these different and independent efforts are consistent.

Beth-Anne Schuelke-Leech, associate professor, engineering management, University of Windsor

“We engineers often think of ethics as a fuzzy thing that no one has really explained,” said Beth-Anne Schuelke-Leech, an associate professor of engineering management and entrepreneurship at the University of Windsor, Ontario, Canada. “It can be difficult for engineers looking for solid constraints to be told to be ethical. That becomes really complicated, because we do not know what it really means.”

Schuelke-Leech began her career as an engineer, then decided to pursue a PhD in public policy, a background that enables her to see things both as an engineer and as a social scientist. “I got my PhD in social science, and I have been pulled back into the engineering world, where I am involved in AI projects but based in a mechanical engineering faculty,” she said.

An engineering project has a goal, she said, which describes the purpose; a set of needed features and functions; and a set of constraints, such as budget and timeline. “Standards and regulations become part of the constraints. If I know I have to comply with it, I will do that. But if you tell me it is a good thing to do, I may or may not adopt it.”

Schuelke-Leech also serves as chair of the IEEE Society’s committee on the social implications of technology standards. She commented, “It is important for people from industry, such as through the IEEE, to come together and say this is what we think we should do as an industry.”

Some standards, such as those around interoperability, do not have the force of law, but engineers comply with them so their systems will work. Other standards are described as good practices but are not required to be followed. “Whether it helps me achieve my goal or hinders me from getting to the objective is how the engineer looks at it,” she said.

The pursuit of AI ethics described as “messy and difficult”

Sara Jordan, senior counsel, Future of Privacy Forum

Sara Jordan, senior counsel with the Future of Privacy Forum, who spoke in the session with Schuelke-Leech, works on the ethical challenges of AI and machine learning and is an active member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “Ethics is messy and difficult, and is full of context,” she said. “We have a proliferation of theories, frameworks, and constructs.”

Schuelke-Leech offered, “Ethics is not an end outcome; it is the process being followed. But I am also looking for someone to tell me what I need to do to do my job, to tell me how to be ethical, what rules I am supposed to follow, to take away the ambiguity.”

“Engineers shut down when you get into funny words they do not understand, such as ‘ontological.’ They have been taking math and science since they were 13 years old,” she said.

That has made it difficult to engage engineers in efforts to draft standards for ethical AI. “The engineers are missing from the table,” she said.

She concluded, “If their manager tells them to figure it out, they will do so. We need to help engineers cross the bridge halfway. It is essential that social scientists and engineers do not give up on this.”

Leaders’ panel describes the integration of ethics into AI development practices

The topic of ethics in AI also comes up in the curriculum of the US Naval War College of Newport, R.I., which was established to provide advanced study for US Navy officers and now educates leaders from all the services. Ross Coffey, a military professor of national security affairs at the institution, participated in a leaders’ panel on AI, ethics, and smart policy at AI World Government.

“The ethical literacy of students increases over time as they work with these ethical issues, which is why this is an urgent matter, because it will take a long time,” Coffey said.

Panel member Carol Smith, a senior research scientist at Carnegie Mellon University who studies human-machine interaction, has been involved in integrating ethics into AI systems development since 2015. She cited the importance of “demystifying” AI.

She added, “My interest is in understanding what kind of interactions we can create where the human is appropriately trusting the system they are working with, neither over-trusting nor under-trusting it.”

As an example, she cited the Tesla Autopilot features, which implement self-driving capability to a degree but not completely. “People assume the system can carry out a much broader set of activities than it was designed to,” she said.

Taka Ariga, the first chief data scientist appointed at the US Government Accountability Office, director of GAO’s Innovation Lab, and a panel member, sees an AI literacy gap in the young workforce coming into the federal government. “Data scientist training does not always include ethics,” he said. “Accountable AI is a laudable construct, but I am not sure everyone buys into it. We need their responsibility to go beyond the technical aspects and be accountable to the end user we are trying to serve.”

Panel moderator Alison Brooks, PhD, research VP of smart cities and communities at the market research firm IDC, asked whether the principles of ethical AI can be shared across the boundaries of nations.

“We will have a limited ability for every nation to align, but we will have to align in some ways on what we will not allow AI to do,” said Smith of CMU.

The panelists credited the European Commission for its work on these ethics issues, especially in the realm of enforcement.

Coffey of the Naval War College recognized the importance of finding common ground around AI ethics. “From a military point of view, our interoperability needs to go to a whole new level. We need to find common ground with our partners and our allies about what we will allow AI to do and what we will not allow AI to do.” Unfortunately, “I don’t know if that discussion is happening,” he said.

Smith suggested that the discussion of AI ethics could perhaps be pursued as part of certain existing agreements.

The many AI ethics principles, frameworks, and road maps offered across federal agencies can be challenging to follow and to keep consistent. “I am hopeful that over the next year or two we will see a coalescing,” Ariga said.

Learn more and access recorded sessions at AI World Government.
