• just_change_it@lemmy.world
    7 months ago

    Jesus christ these headlines mislead everything.

    They were using machine learning to try to figure out what people were buying. Machine learning makes lots of errors until you train it. The “hundreds of workers” were training it by telling it what each item was, i.e. they were creating training data for it to learn from.

    The goal was to train ML enough so that humans were rarely necessary, obviously.
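    A minimal sketch of the human-in-the-loop setup described above (every name and data shape here is invented for illustration): the model guesses what was taken, and low-confidence guesses go to a human whose answer both fixes the charge and becomes new training data.

    ```python
    def predict(event):
        """Stand-in model: returns (label, confidence)."""
        return event["model_guess"], event["confidence"]

    def human_label(event):
        """Stand-in reviewer: a person watches the clip and names the item."""
        return event["true_item"]

    def process(events, threshold=0.9):
        charges, training_data = [], []
        for ev in events:
            label, conf = predict(ev)
            if conf < threshold:                   # model unsure: ask a human
                label = human_label(ev)
                training_data.append((ev, label))  # labeled example for retraining
            charges.append(label)
        return charges, training_data

    events = [
        {"model_guess": "cola", "confidence": 0.97, "true_item": "cola"},
        {"model_guess": "cola", "confidence": 0.55, "true_item": "seltzer"},
    ]
    charges, new_labels = process(events)
    # charges == ["cola", "seltzer"]; one new labeled example was produced
    ```

    The point is that the reviewer's answer serves double duty: it corrects the live transaction and feeds the next round of training.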

    • baru@lemmy.world
      7 months ago

      > Jesus christ these headlines mislead everything.

      One article noted how often employees needed to review the camera footage: it was something like 80% of the times people went in to shop.

      > The goal was to train ML enough so that humans were rarely necessary, obviously.

      The headline is pretty accurate. That might have been the goal, but they didn’t come close. And now they are closing down those stores.

      Seems that they utterly failed in the goal.

      > Machine learning has lots of errors until you train it.

      These stores were open for a pretty long time. It’s not a given that it’s just a matter of training.

      • RealFknNito@lemmy.world
        7 months ago

        The people who hate AI always seem to have no fucking idea how it actually works and it’s frustrating.

        People were required to teach the AI how to do its job. A ‘new employee’ is going to make frequent mistakes during training. Yes - it took a long time to train a program to identify a person, match them to their ID, identify a product, match it to the UPC, then make absolutely 110% sure that item remained on the person when they left, with no mistakes. For it to be flawless even when you shove it into your backpack.

        Should the people training it have been paid more for their temporary position? Sure. Should Amazon have been transparent about how they were teaching the AI? Sure. They still did not rely on Indian workers for their stores, they relied on people to teach the AI, like doing a captcha, that the store relied on. Any human being would have done the trick, India just allows its people to be exploited the most apparently. Headlines are meant to make you click, not give you accurate information.

        Experiments in technology don’t always work. This was a bold plan that they gave years to which would have been a really cool thing to have. Just grabbing your shit and leaving? That’s like EZPass for retail. There was definitely money there, they just couldn’t get to it in time.
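        A rough, hypothetical sketch of the verification chain described above (all names and data shapes are invented, not Amazon's actual system): identify the products taken, match them to UPCs, confirm they left with the shopper, and fall back to human review on any mismatch.

        ```python
        UPC_DB = {"cola": "0001", "chips": "0002"}  # hypothetical product catalog

        def identify_products(shelf_events):
            """Items the vision system believes were taken off the shelf."""
            return [e["item"] for e in shelf_events if e["action"] == "take"]

        def checkout(session):
            items = identify_products(session["shelf_events"])
            unknown = [i for i in items if i not in UPC_DB]
            if unknown:                                        # can't match a UPC
                return {"status": "needs_human_review", "items": unknown}
            if sorted(items) != sorted(session["exit_scan"]):  # exit mismatch
                return {"status": "needs_human_review", "items": items}
            return {"status": "charged", "upcs": [UPC_DB[i] for i in items]}

        session = {
            "shelf_events": [{"item": "cola", "action": "take"},
                             {"item": "chips", "action": "take"}],
            "exit_scan": ["cola", "chips"],
        }
        result = checkout(session)
        # result["status"] == "charged"
        ```

        Any step that fails routes the event to a human - which is exactly where the labeling work described above comes from.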

    • BakerBagel@midwest.social
      7 months ago

      Except they still had thousands of employees in India watching the surveillance tapes to see what people bought and charge them for it.

      Amazon can claim this was a stop gap all they want, but the truth is that the technology behind the core concept isn’t there and they just pretended it worked so the project head wouldn’t have to explain why they were behind schedule and over budget. It’s the same as with their drone delivery service 10 years ago. All smoke and mirrors to make moron tech bros cream themselves.

      • Echo Dot@feddit.uk
        7 months ago

        You need training data, though; I don’t understand what the problem is. Hell, it doesn’t even matter if they never actually make the technology work, that’ll be their problem. They can’t lie and tell people it works if it doesn’t, but as far as I’m aware, they’re not actually doing that.

        I don’t like Amazon very much, but they do enough crappy things for you to actually get upset about, so it just seems odd that you would pick this hill to die on.

        • BakerBagel@midwest.social
          7 months ago

          They gave up collecting that data, though; that’s why they are shutting down the department. All they managed to do was outsource grocery store cashiers to India, which seems like an exceptionally shitty thing to me.

    • Zron@lemmy.world
      7 months ago

      My goal is to build a fusion reactor.

      I will hire Indian call center workers to add fuel to my diesel generator until the fusion is up and running.

      This plan makes sense to certain people on the internet.

    • MajorHavoc@programming.dev
      7 months ago

      > The goal was to train ML enough so that humans were rarely necessary, obviously.

      Yes, that’s the goal.

      There’s a long, rich history of AI-like outcomes being mimicked by just hiding the human who does the work. That’s actually the source of the name of Amazon’s own “Mechanical Turk” service.

      Not being actively watched by an army of underpaid workers is effectively still on the “someday…maybe” feature list for this thing, unless Amazon (famous for making delivery workers pee in soda bottles, and allowing warehouse workers to get heat stroke) somehow provides credible proof that they’ve actually grown past that.

      I, as someone with substantial professional ML experience, won’t take Amazon at their word when they claim the ML has alleviated the need for the army of workers watching cameras. That’s a bullshit marketing promise until proven otherwise, particularly coming from Amazon.

      Moving away from the people watching to using pure AI is well within the realm of possibility.

      But good AI maintainers cost more per hour than the entire army of Mechanical Turk “trainers”. So I am skeptical of any claim that Amazon, in particular, has done the right thing here.

      So it’s very fair to assume you’re being watched in one of those stores, until real credible evidence is provided that you’re not.

    • TurtleJoe@lemmy.world
      7 months ago

      > They were using machine learning to try and figure out what people were buying. Machine learning has lots of errors until you train it.

      Machine Learning, no matter how well trained or advanced, is just doing a make-em-up.

      Besides that, in this case the experiment had been going on for years and humans were still doing something like 70% of the work. It was a failure; that’s why Amazon shut it down.

    • steeznson@lemmy.worldOP
      7 months ago

      That’s their excuse, but it is convenient for them that, in order to train the AI, the workers need to follow the exact same steps an AI would take if it were sufficiently trained. We can’t say as outsiders to what extent the actual work is assisted by AI. It seems likely that it is largely a manual process.

      • Railcar8095@lemm.ee
        7 months ago

        I understand the spirit, but that’s how it goes. You have somebody doing the work the way you want the ML to do it, and then you feed in the data. It’s the same when they get oncology scans that have been diagnosed by well-paid doctors: somebody who knows does the diagnosis, and the machine tries to replicate it.

        What very likely happened is that the failure rate plateaued much higher than they expected, and all this time the goal was to lower it. Remember, it’s cheaper to have 0 people in India than 1, especially with AWS in mind.

        Moreover, even if the accuracy were incredibly high, they would still need people reviewing. You have to review random events to ensure the model keeps performing well, and to evaluate the ones with low confidence or that look suspicious.
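        That review policy might be sketched like this (the thresholds and names are assumptions for illustration, not anything Amazon has published): low-confidence events always go to a human, and a small random sample of confident ones is audited anyway to catch model drift.

        ```python
        import random

        def needs_review(confidence, threshold=0.9, audit_rate=0.02, rng=random.random):
            """True if a human should look at this event."""
            if confidence < threshold:   # model unsure: always review
                return True
            return rng() < audit_rate    # random audit of confident predictions

        # A low-confidence event is always flagged; a confident one is only
        # flagged when the random draw falls inside the audit rate.
        assert needs_review(0.5)
        assert needs_review(0.99, rng=lambda: 0.5) is False
        assert needs_review(0.99, rng=lambda: 0.0) is True
        ```

        So even a "fully trained" system keeps some humans in the loop by design; the question is whether it's 2% of events or 80%.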