

The point is the idea that, in general, a system could be applied where… say, universally, the same avatar is applied to everyone while on trial. The fact is, "looking trustworthy" is inherently an unfair advantage that has no real bearing on actual innocence or guilt, and we know these biases have, against better evidence, resulted in innocent people getting convicted and guilty people walking.
Theoretically, a future system in which everyone must use an avatar to prevent these biases would almost certainly lead to more accurate court trials. Of course, the one hurdle in my mind that would make it difficult is how to accurately deal with evidence that requires appearance to assess (IE, most importantly, eyewitness descriptions and video footage). When it comes to DNA, fingerprints, forensics, and hell, the lawyers' arguments themselves, there's no question in my mind that perception with no factual use has serious consequences that harm any attempt to make an appropriately fair system.
Again, I think our problem is the concept of what we're calling "AI". IE, I'm only talking about basically AI-generated art/avatars. If done in a consistent way, I don't think it even quite qualifies as AI; really, it's just glorified puppetry. There's no "trustworthiness" because it doesn't deal in facts. Its job is literally just to take a consistent 3D model and make it move the way the defendant moves. It's old tech that's been used in movies etc… for years, and since it deals only in appearance, any "hacks" etc… would be plainly visible to any observers.