• General_Effort@lemmy.world · 9 months ago

      A Variational AutoEncoder is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI works on the smaller, compressed version (the latent representation), which means it needs a less powerful computer (and uses less energy). This is what makes it possible to run Stable Diffusion at home.
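
      For a concrete sense of the scale, here is a minimal sketch using Hugging Face's diffusers library (assuming torch and diffusers are installed; the model ID is one of the publicly released Stable Diffusion VAEs). The VAE maps a 512×512 RGB image to a 4×64×64 latent, roughly 48× fewer values:

      ```python
      import torch
      from diffusers import AutoencoderKL

      # Public Stable Diffusion VAE from the Hugging Face hub.
      vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

      # Stand-in for a 512x512 RGB image, scaled to [-1, 1].
      image = torch.randn(1, 3, 512, 512)

      with torch.no_grad():
          # Encode to the latent the diffusion model actually works on.
          latents = vae.encode(image).latent_dist.sample()

      print(image.shape)    # torch.Size([1, 3, 512, 512]) -> 786,432 values
      print(latents.shape)  # torch.Size([1, 4, 64, 64])   -> 16,384 values (~48x smaller)
      ```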

      This attack targets the VAE. The image is altered so that its latent representation looks like a very different image, while still looking roughly the same to humans. The actual image AI then works on what is effectively a different image. Obviously, this only works if you have the right VAE, so it only works against open-source AI; basically only Stable Diffusion at this point. Companies that use a closed-source VAE cannot be attacked this way.
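
      In code, such an attack is roughly an optimization over a small pixel perturbation: push the VAE's latent toward that of a decoy image while clamping how much the pixels may change. This is a hypothetical sketch (the function and parameters are illustrative, not taken from any specific tool):

      ```python
      import torch

      def perturb(image, decoy, vae, steps=100, eps=0.03, lr=0.01):
          """Nudge `image` so its latent resembles `decoy`'s latent."""
          vae.requires_grad_(False)  # only the perturbation is optimized
          with torch.no_grad():
              target_latent = vae.encode(decoy).latent_dist.mean
          delta = torch.zeros_like(image, requires_grad=True)
          opt = torch.optim.Adam([delta], lr=lr)
          for _ in range(steps):
              latent = vae.encode((image + delta).clamp(-1, 1)).latent_dist.mean
              loss = torch.nn.functional.mse_loss(latent, target_latent)
              opt.zero_grad()
              loss.backward()
              opt.step()
              with torch.no_grad():
                  delta.clamp_(-eps, eps)  # keep the change invisible to humans
          return (image + delta).clamp(-1, 1).detach()
      ```

      Note that this requires backpropagating through the encoder, which is exactly why the attacker needs the VAE's weights.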

      I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see a cyberpunk dystopia as a desirable future. It doesn't seem to be a very effective attack, but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists who threaten digital vandalism, you may be deterred. Well, my two cents.

      • watersnipje@lemmy.blahaj.zone · 9 months ago

        Thank you for explaining. I work in NLP and am not familiar with all the CV acronyms. It sounds like it kind of defeats the purpose if it only targets open-source models. But yeah, it makes sense that you would need the actual autoencoder in order to learn how to alter your data such that the autoencoder's representation is different enough.