Meta plans to shift the task of evaluating its products' potential harms away from human reviewers, leaning more heavily on AI to speed up the process. According to internal documents seen by NPR, the company aims to have up to 90 percent of risk assessments handled by AI, and it is reportedly considering using AI reviews even in sensitive areas such as youth risk and "integrity," which covers violent content, misinformation, and more. Unnamed current and former Meta employees who spoke with NPR warned that AI could overlook serious risks that a human team would have been able to identify.
Updates and new features for Meta's platforms, including Instagram and WhatsApp, have largely been subject to human review before reaching the public, but Meta has reportedly doubled down on its use of AI over the past two months. Now, according to NPR, product teams are expected to fill out a questionnaire about their product and submit it to the AI system for review, which usually delivers a "quick decision" identifying areas of risk. The teams then have to address whatever requirements the system lays out before the product can launch.
A former Meta executive told NPR that reducing this scrutiny means creating higher risks: negative externalities of product changes become less likely to be caught before they start causing problems in the world. In a statement to NPR, Meta said it would tap "human expertise" to evaluate "novel and complex issues," leaving "low-risk decisions" to AI. Read the full report at NPR.
The news came a few days after Meta released its latest quarterly integrity report, the first since the company changed its content policies earlier this year. According to the report, the amount of content removed has decreased in the wake of those changes, though bullying and harassment rose slightly, as did violent and graphic content.