Qiao Yu explained, "Take autonomous driving as an example. If we accept that autonomous driving carries some probability of accidents and set a standard for it, then it would presumably be acceptable as long as its accident rate is lower than that of humans driving themselves, right?"
Zheng Xiweng nodded.
Qiao Yu continued, "In human society, however, drivers with good skills and good habits, who are cautious and law-abiding, tend to have accident rates far lower than those of drivers accustomed to recklessness.
"But once the steering wheel is handed over to artificial intelligence, the probability of an accident becomes the same for every type of driver, rule-abiding or not. Doesn't that count as an advanced version of the trolley problem?
"Most crucially, if an individual made no mistake at all and the harm was caused entirely by an AI's error, how much responsibility will society assign to the AI?"
