Microsoft quietly added a new AI feature, called Cocreator, to Paint, the raster graphics editor included in every version of Windows since 1985. Using it requires a Copilot+ PC with an NPU that can deliver 40 TOPS or better. That means shelling out at least $1,099 for one of the recently launched Snapdragon X Windows Copilot+ PCs if you want your copy of Microsoft Paint to come with Cocreator enabled.

However, Microsoft still requires you to sign in with your Microsoft account and be connected to the internet “to ensure safe use of AI.” According to Microsoft’s Privacy Statement, “Cocreator uses Azure online services to help ensure the safe and ethical use of AI. These services do content filtering to prevent the generation of harmful, offensive, or inappropriate content. Microsoft collects attributes such as device and user identifiers, along with the user prompts, to facilitate abuse prevention and monitoring. Microsoft does not store your input images or generated images.”

This is a nightmare for security- and privacy-conscious users, especially since Microsoft recently blocked the last easy workaround for setting up Windows 11 without a Microsoft account. Microsoft is likely doing this to stop unscrupulous users from generating illegal images such as child sexual abuse material and non-consensual deepfake pornography. Still, collecting this information is itself a source of concern: the prompts a user types are tied to their account and could be stolen, and no matter how innocent, they could then be weaponized and used against them.

  • tal · 5 months ago

    The thing is that AI Horde relies on donated hardware. There are only so many people willing to donate relative to the number of people who want to use it.

    Vast.ai lets people rent hardware, but not on a per-operation basis. That’s cheaper than buying hardware and keeping it idle a lot of the time, which reduces costs, but rented hardware is still gonna have idle time.

    I think what would be better is some kind of service that can sell compute time on a per-invocation basis. Most of the “AI generation services” do this, but they also require that you use their software.

    So, it’s expensive to upload models to a card, and you don’t want to have to re-upload a model for each run. But hash the model and remember what the last thing run on the card was. If someone queues a run with the same model again, just use the existing uploaded model.
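    The hash-and-reuse idea could be sketched roughly like this (a minimal sketch; the class name and the `upload_fn` callback are hypothetical, standing in for whatever actually copies weights onto a card):

    ```python
    import hashlib

    def file_sha256(path):
        """Hash a model file in chunks so large weights don't need to fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    class CardModelCache:
        """Track which model is currently loaded on each card by content hash."""

        def __init__(self, upload_fn):
            self.loaded = {}          # card_id -> hash of the model on that card
            self.upload_fn = upload_fn  # the expensive step: copy weights to the card

        def ensure_loaded(self, card_id, model_path):
            digest = file_sha256(model_path)
            if self.loaded.get(card_id) != digest:
                # Only pay the upload cost when the card holds a different model.
                self.upload_fn(card_id, model_path)
                self.loaded[card_id] = digest
            return digest
    ```

    Queuing two runs of the same model back to back would then trigger only one upload; the second run finds the matching hash and reuses what is already on the card.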

    Don’t run the whole Stable Diffusion or whatever package on the cloud machine.

    That makes the service agnostic to the software involved. Like, you can run whatever version of whatever LLM software you want and use whatever models. It makes the admin-side work relatively light. It makes sure that the costs get covered, but people aren’t having to pay to buy hardware that’s idle a lot of the time.

    Might be that some service like that already exists, but if so, I’m not aware of it.