• tal · 10 months ago

    Another quick off-the-cuff improvement: the drone feeds being sent today contain a lot of unnecessary information, more than a human operator needs to guide them in. The less information that has to make it through, and the more you can afford to cut out and spend on redundancy in transmission, the more jam-resistant the thing is. You can fall back to an unreliable-channel mode if need be for the last bit of the approach.

    Here’s a satellite source image. That’s a lossily-compressed JPEG at 510,254 bytes. It’s pretty, but if you already know what you’re looking at and are just trying to ram it, you don’t need anything like that much information.

    Here’s the same image after I’ve run a Laplace edge-detection on it, denoised it, run a threshold on it (you could probably use a simple heuristic to select the threshold, but even if not, it’d be fine for the operator to choose one manually), converted it to 1-bit, and then PNG-compressed it. The resulting frame is still enough to identify the objects in the image, enough that an operator who could see it could hold the drone on-target, and it’s only 30,343 bytes, about 6% of the original size.
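    If you want to reproduce that without clicking through GIMP menus, here’s roughly the same pipeline as a Python sketch. I did it interactively in GIMP above; the OpenCV/Pillow calls and the filenames here are just illustrative, not what I actually ran.

    ```python
    import os

    import cv2
    from PIL import Image

    # Load the satellite frame and collapse it to grayscale.
    frame = cv2.imread("satellite_frame.jpg", cv2.IMREAD_GRAYSCALE)

    # Laplace edge detection, then a median filter to knock down speckle noise.
    edges = cv2.convertScaleAbs(cv2.Laplacian(frame, cv2.CV_16S, ksize=3))
    edges = cv2.medianBlur(edges, 3)

    # Threshold. Otsu's method picks the cutoff automatically, standing in for
    # the "simple heuristic" (or the operator could pick one by hand).
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Convert to a true 1-bit image and let PNG's lossless compression do the rest.
    Image.fromarray(binary).convert("1").save("edges_1bit.png", optimize=True)

    print(os.path.getsize("edges_1bit.png"), "bytes after the 1-bit PNG step")
    ```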

    Then you can use the newly-freed bandwidth to send forward error correction information – some folks here may have used it in the form of PAR2, popular in the piracy scene – so that if any N% of the data makes it through, the frame can be reassembled. Now it’s a lot harder to jam.
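    To make the PAR2 part concrete, here’s a minimal sketch using the par2 command-line tool. The filenames and the 30% redundancy figure are just for illustration, and PAR2 itself is file-oriented, so a real link would want a streaming FEC code, but the principle is the same.

    ```python
    import subprocess

    # Sender side: generate recovery blocks worth roughly 30% of the frame,
    # paid for with the bandwidth the 1-bit frame freed up.
    subprocess.run(["par2", "create", "-r30", "edges_1bit.png"], check=True)
    # ...transmit edges_1bit.png plus the generated *.par2 recovery files...

    # Receiver side: as long as enough blocks made it through in total, the
    # frame can be reassembled even if parts were jammed or never arrived.
    subprocess.run(["par2", "repair", "edges_1bit.png.par2"], check=True)
    ```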

    And that’s an off-the-cuff approach that took me about 2 minutes with the tools I already have on my system (GIMP and PAR2), and zero time spent trying to improve on it. You figure that if you pay someone who actually specializes in this area to bang on it for a bit, you can probably get something rather better.