BMW tests next-gen LiDAR to beat Tesla to Level 3 self-driving cars::Tesla’s autonomous vehicle tech has been perennially stuck at Level 2 self-driving, as BMW and other rivals try to leapfrog to Level 3.

  • nathanjaker@lemmy.dbzer0.com · 1 year ago

    My understanding was that the challenge in making the next leap in self driving was not based in hardware (detecting objects with cameras vs LiDAR), but in software. As in, it isn’t as difficult to detect the presence of objects as it is to make consistent and safe decisions based on that information.

    • RealJoL@feddit.de · 1 year ago

      But using LiDAR, you increase your data’s accuracy and dimensionality, giving you more options to play with. It probably won’t be a game changer, but it may be better than a camera-only system.
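
      The "extra dimension" point can be made concrete: a single LiDAR return already carries depth, so recovering a 3D point is direct trigonometry, whereas a camera pixel only gives a ray of unknown depth. A minimal sketch (illustrative only, not from any actual AV stack; the angle convention is an assumption):

```python
import math

def lidar_return_to_point(r: float, azimuth: float, elevation: float) -> tuple[float, float, float]:
    """Convert one LiDAR range measurement (meters, radians) to an x, y, z point.

    Assumed convention (illustrative): x forward, y left, z up.
    """
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# A return at 10 m, dead ahead and level, lands 10 m forward of the sensor.
point = lidar_return_to_point(10.0, 0.0, 0.0)
```

      A camera would need stereo matching or learned depth estimation to get the same `z`-free range value that LiDAR measures directly.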

      • Valmond@lemmy.mindoki.com · 1 year ago

        Gathering more data, and being able to process it, seems like an obvious way forward. How much better is this “new” LiDAR?

        Edit: it seems Tesla cars don’t even use LiDAR…

        • TenderfootGungi@lemmy.world · 1 year ago

          They did. And every other competitor does. Musk believes that since humans can drive with only two eyes, cars should be able to as well. Maybe someday, but nowhere in the near future. Cameras miss too much and are easily blinded.

          • Patius@lemmy.world · 1 year ago

            It’s also really stupid because the idea is to create a system that’s better than humans. And let me tell you, people miss stuff all the time when driving. Tons of accidents are caused by “negligent” drivers who looked both ways and still missed someone, due to a visual processing error or simply not being able to see them.

      • CmdrShepard@lemmy.one · 1 year ago

        That’s not necessarily true. What you get is two separate sensors feeding raw data into a system, and both streams need to be parsed. Sometimes one won’t agree with the other, which can cause issues with how the car thinks it should respond.

        Nobody has a fully working system at this point, so it’s premature to make claims about what hardware is and isn’t needed. It may very well be that LIDAR is a requirement, but until somebody figures it out, we’re all just speculating.

        • AlotOfReading@lemmy.world · 1 year ago

          You can, today, download an app and go ride in a self-driving car around multiple US cities. All of those cars use LiDAR. Sensor disagreement is not a major issue because sensor fusion is a very well-understood topic.
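
          The standard answer to "what if the sensors disagree" is exactly this kind of fusion. A minimal sketch (illustrative Python, not from any real AV stack; sensor names and noise values are made up): each sensor's estimate is weighted by the inverse of its variance, so disagreement is resolved in favor of the more trustworthy sensor rather than causing a conflict.

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> tuple[float, float]:
    """Combine two noisy estimates of the same quantity.

    Inverse-variance weighting: the fused value leans toward the
    lower-noise sensor, and the fused variance is smaller than
    either sensor's alone.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical readings: the camera thinks an obstacle is 21 m away but is
# noisy (variance 4.0); the LiDAR reads 19.5 m with tight noise (variance 0.25).
distance, variance = fuse(21.0, 4.0, 19.5, 0.25)
# The fused estimate sits close to the LiDAR reading, with lower
# variance than either input.
```

          Full systems use Kalman filters and more elaborate probabilistic models, but the principle is the same: disagreement between sensors is information, not a fault condition.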

          • CmdrShepard@lemmy.one · 1 year ago

            Yes, but they’ve geofenced those cars into areas with the most favorable conditions for autonomous driving. What happens when you take the car onto the freeway, into a suburban neighborhood, or over a mountain pass?