• BackgrndNoize@lemmy.world · 4 hours ago

    The judge was angry that this guy was pretending to have speech issues so he could use her courtroom as free publicity for his AI tool business. Watch the entire video; don’t just read a clickbait headline.

  • hedgehogging_the_bed@lemmy.world · 3 hours ago

    You can either represent yourself, or claim you can’t communicate and need assistance. You don’t get to claim to represent yourself and then demand they let you have an AI representative. Just hire a lawyer to represent you; that’s literally what they’re there for.

    • hedgehogging_the_bed@lemmy.world · 3 hours ago

      See also: people who need an employee to assist with the touchscreen ordering and self-checkout. If you want to talk to a person, please use the line for talking to a cashier.

  • TheFogan@programming.dev · 2 days ago

    I mean, honestly, setting aside the theoretical misdirection, I’d find this one of the better examples of a reasonable use of AI in a courtroom. That is, it sounds like he asked to represent himself. He presented a video in which, to my knowledge, all the arguments were written by the man himself. When the judge asked who it was, he said the avatar was AI, presenting his arguments.

    So in short, the only things he attempted to bypass are biases related to his appearance and speech.

    IMO this concept could be the real future of trials if done right. Imagine if we used, say, extreme facial-tracking AI that hid the defendant’s actual appearance but allowed defendants to use avatars that still map out the facial expressions and body language they make during the trial, while concealing the defendant’s actual race and appearance. We could literally be looking at the one solution to racial bias: the reality that, with the same evidence, race plays a huge part in conviction rates and the harshness of sentences.

    • BrianTheeBiscuiteer@lemmy.world · 2 days ago

      Not that AI is the most effective representation or that it should replace public defenders, but this doesn’t seem far off from scolding a defendant for using Google to research his arguments.

    • madame_gaymes@programming.dev · 2 days ago (edited)

      It’s a really interesting thought, and under ideal circumstances would work IMO. Obviously things are never ideal and there would be all sorts of roadblocks and gotchas as something like this was developed. Things we could think of now, and other things we probably couldn’t. Not to mention the whole problem of, “who develops it and how much trust can you give them?”

      As I was reading the idea, it made me think of the suits the undercover narcs wore in A Scanner Darkly. The suit heavily obfuscated the voice and displayed ever-changing patchwork human features to anyone observing from the outside, even trying to hide body shape. Something like that could get similar results. Obviously a video filter would be much easier to develop than a sci-fi suit, but still.

      [Image: the suit as depicted in the A Scanner Darkly movie]

    • Zwuzelmaus@feddit.org · 2 days ago

      the only thing that’s attempted to be bypassed, are biases related to his appearance and speech. IMO this concept could be the real future of trials if done right.

      How do you know if it is done right or wrong?

      It is fake, and it is a manipulative kind of fake.

      You assume some honorable purpose, but that isn’t the only possible purpose.

      Even “bypassing biases” would be a kind of manipulation, and you can never know what other manipulation is going on at the same time. It could exploit other biases. It could try other tricks that we are not evil enough to imagine, and it would be “better” at it than any real human.

      • TheFogan@programming.dev · 2 days ago

        The point is the idea that, in general, a system could be applied where, say, universally the same avatar is applied to everyone while on trial. The fact is that “looking trustworthy” is an inherently unfair advantage with no real bearing on actual innocence or guilt, and we know these biases have led to innocent people being convicted and guilty people walking, despite better evidence.

        Theoretically, a future system in which everyone must use an avatar to prevent these biases would almost certainly lead to more accurate court trials. Of course, the one hurdle in my mind that would make it difficult is how to handle evidence that requires appearance to assess (most importantly, eyewitness descriptions and video footage). When it comes to DNA, fingerprints, forensics, and hell, the lawyers’ arguments themselves, there’s no question in my mind that perception with no factual use has serious consequences that harm any attempt to make an appropriately fair system.

        • Zwuzelmaus@feddit.org · 2 days ago

          say universally the same avatar is applied to everyone while on trial.

          The one and only “good” AI. Trustworthy for everybody?

          I do not believe in that.

          First you would need to decide on the one and only company to provide that AI. Then someone must prove that it is good and only good. Then it must be unhackable (and remain so while technology evolves).

          All of this is hardly feasible.

          • TheFogan@programming.dev · 1 day ago

            Again, I think our problem is the concept of what we’re calling “AI”. I’m only talking about AI-generated art/avatars. Done in a consistent way, I don’t think it even quite qualifies as AI; it’s really just glorified puppetry. There’s no “trustworthiness” at stake, because it doesn’t deal in facts. Its job is literally just to take a consistent 3D model and make it move the way the defendant moves. It’s old tech that’s been used in movies and the like for years, and since it deals only in appearance, any “hacks” would be plainly visible to any observer.
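
            The “glorified puppetry” described above can be sketched in a few lines: only the motion (the delta from a neutral pose) is transferred from the tracked face onto a shared canonical avatar, so the avatar’s geometry never reveals the speaker’s identity. This is a toy illustration — real systems track hundreds of 3D landmarks, and all names and coordinates below are made up:

```python
# Toy sketch of motion-only retargeting ("puppetry"):
# expression deltas from a tracked face are applied to a canonical
# avatar, discarding the real person's identity-specific geometry.

def retarget(tracked, tracked_neutral, avatar_neutral):
    """Map facial motion onto a shared avatar.

    tracked:         current 2D landmark positions of the real face
    tracked_neutral: the same person's resting (neutral) landmarks
    avatar_neutral:  resting landmarks of the canonical avatar

    Only the delta from neutral is transferred, so every defendant
    drives the same avatar regardless of what they look like.
    """
    return [
        (ax + (x - nx), ay + (y - ny))
        for (x, y), (nx, ny), (ax, ay)
        in zip(tracked, tracked_neutral, avatar_neutral)
    ]

# Hypothetical example with three landmarks (two mouth corners, chin):
neutral_face = [(0.40, 0.70), (0.60, 0.70), (0.50, 0.85)]
smiling_face = [(0.38, 0.68), (0.62, 0.68), (0.50, 0.85)]  # corners widen and rise
avatar_rest  = [(0.45, 0.75), (0.55, 0.75), (0.50, 0.90)]

avatar_pose = retarget(smiling_face, neutral_face, avatar_rest)
# The avatar "smiles" too, but from its own resting geometry,
# not the defendant's.
```

            The design point is that nothing here is generative or fact-bearing: it is a fixed, inspectable mapping, which is why any tampering with it would show up visibly on the rendered avatar.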