The quote below sounds suspiciously like the rationale the fictional Dr. John McKittrick used in the movie WarGames (1983) to let the War Operation Plan Response (WOPR) play World War III but have the decisions made by humans.
…the algorithm should be able to perform extremely efficient image recognition as well as natural language understanding processes. The final word belongs still to people but the intelligent algorithm should be there to avoid a “human error”.
The problem with that approach is that “the people” will probably follow the AI, as they won’t be in a position to dispute its conclusions. This will be especially true if the AI really is extremely efficient at image recognition and natural language processing.
In WarGames, the point was clearly made that in the event of a Soviet attack, there wouldn’t be time for humans to evaluate the courses of action available and they would probably follow the AI’s recommendations.
As we come to rely more and more on machine learning and AI algorithms, we will need to establish trust that they are running correctly for us to follow their suggestions.
Given the detailed mathematics, complex algorithms, and huge datasets involved, it will be a real challenge for these systems to provide the level of transparency required to establish that necessary trust.