According to Wikipedia:

The goal of the C2PA is to define and establish an open, royalty-free industry standard that allows reliable statements about the provenance of digital content, such as its technical origin, its editing history or the identity of the publisher.

Has anyone explored this standard before? I’m curious about privacy implications, whether it’s a truly open standard, whether this will become mandatory (by law or because browsers refuse to display untagged images), and if they plan on preventing people from reverse engineering their camera to learn how to tag AI-generated photos as if they were real.

    • ZickZack@kbin.social · 1 year ago

      They will make it open source, just tremendously complicated and expensive to comply with.
      In general, if you see a group proposing regulations, it’s usually to cement their own position: e.g. OpenAI is a front-runner in ML for the masses but doesn’t really have a technical edge over anyone else, so they run to Congress with “please regulate us”.
      Regulatory compliance is always expensive and difficult, which means it favors people who already have money and systems running right now.

      There are so many ways this can be broken, intentionally or unintentionally. It’s also a great way to detect, say, government critics and shut them down (if you’re Chinese and everything is uniquely tagged to you, would you write about Tiananmen Square?), or to get monopolies on (dis)information.
      This isn’t literally forcing everyone to get a license to produce creative or factual work, but it’s very close, since you can easily discriminate against any creative or factual sources you find unwanted.

      In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will have catastrophic consequences.

    • warmaster@lemmy.world · 1 year ago

      I bet it won’t. And I bet the implementations of all 3 are out of spec, and send your shit to everyone else that will buy it.

        • warmaster@lemmy.world · 1 year ago

          I got fed up with MS bullshit and moved to Linux. Replaced Illustrator with Inkscape, and Photoshop with Photopea until I can learn how to use GIMP’s unintuitive UI.

          • HousePanther@lemmy.goblackcat.com · 1 year ago

            There is a learning curve with GIMP. Once you get past it, GIMP is great. It does about 90–95% of what Photoshop does, and that’s good enough for me. I’m fully on Linux as well. I run Arch and swear by it. I also like OpenBSD and FreeBSD.

            • VelvetStorm@lemmy.world · 1 year ago

              I’m an amateur macro photographer, and I love taking photos and doing light tweaking to make them more presentable for your average person. But I’m definitely not going to spend hours upon hours learning to do the simplest things in GIMP and darktable when I can learn them in PS and LR from a ten-minute video or less.

              That being said, I also refuse to pay a goddamn subscription fee for something I used to own outright 20 years ago, especially considering it can’t even stack or slab photos even 10% as well as Zerene or Helicon.

    • barryamelton@lemmy.ml · 1 year ago

      It will not matter if it is open source if it is baked into the HW. You will be their removed anyways, with no way to change it.

      • tables@kbin.social · 1 year ago

        You will be their removed anyways with no way to change it.

        Did you type removed or does some system in the fediverse automatically censor words?

    • eth0p@iusearchlinux.fyi · 1 year ago

      I glossed through some of the specifications, and it appears to be voluntary. In a way, it’s similar to signing git commits: you create an image and choose to give provenance to (sign) it. If someone else edits the image, they can choose to keep the record going by signing the change with their identity. Different images can also be combined, and that would be noted down and signed as well.

      So, suppose I see some image that claims to be an advertisement for “the world’s cheapest car”, a literal rectangle of sheet metal and wooden wheels. I could then inspect the image to try and figure out if that’s a legitimate product by BestCars Ltd, or if someone was trolling/memeing. It turns out that the image was signed by LegitimateAdCompany, Inc and combined signed assets from BestCars, Ltd and StockPhotos, LLC. Seeing that all of those are legitimate businesses, that the chain of provenance isn’t broken, and that BestCars is known to work with LegitimateAdCompany, I can be fairly confident that it’s not a meme photo.
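      The chain-of-provenance idea above can be sketched roughly like this. To be clear, this is a toy model: C2PA actually uses X.509 certificates and public-key signatures, while this sketch substitutes HMAC so it runs with only the standard library, and the company names, key material, and record fields are all made up.

```python
import hashlib
import hmac

# Hypothetical shared secrets standing in for the signers' real key pairs.
KEYS = {
    "BestCars, Ltd": b"bestcars-secret",
    "LegitimateAdCompany, Inc": b"adco-secret",
}

def sign(signer: str, payload: bytes) -> dict:
    """Produce one link in the provenance chain for `payload`."""
    tag = hmac.new(KEYS[signer], payload, hashlib.sha256).hexdigest()
    return {"signer": signer,
            "digest": hashlib.sha256(payload).hexdigest(),
            "tag": tag}

def verify(link: dict, payload: bytes) -> bool:
    """Check that `payload` matches the recorded digest and signature."""
    expected = hmac.new(KEYS[link["signer"]], payload, hashlib.sha256).hexdigest()
    return (hashlib.sha256(payload).hexdigest() == link["digest"]
            and hmac.compare_digest(expected, link["tag"]))

source = b"raw car photo bytes"
edited = b"raw car photo bytes + ad text overlay"

# Each party signs the state of the image it produced.
chain = [sign("BestCars, Ltd", source),
         sign("LegitimateAdCompany, Inc", edited)]

assert verify(chain[0], source)
assert verify(chain[1], edited)
assert not verify(chain[1], edited + b"tamper")  # any change breaks the chain
```

      The point is only the shape of the scheme: every edit produces a new signed record, and one bit flipped anywhere invalidates the corresponding link.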

      Now, with that being said…

      It doesn’t preclude scummy camera or phone manufacturers from generating identities unique to their customers and/or hardware and signing photos without the user’s consent. Thankfully, at least, it seems like you can just strip away all the provenance data by copy-pasting the raw pixel data into a new image using a program that doesn’t support it (Paint?).

      All bets are off if you publish or upload the photo first, though—a perceptual hash lookup could just link the image back to the original one that does contain provenance data.
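      A perceptual hash fingerprints image content rather than exact bytes, so stripping metadata and re-encoding doesn’t change the fingerprint. Here’s a toy “average hash” over a 4×4 grayscale grid; real lookup services use more robust variants (pHash, etc.), and the pixel values here are invented:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Toy aHash: one bit per pixel, set when the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

original = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 28, 225],
            [11, 198, 27, 218]]

# Same image after a lossy re-encode: every pixel nudged slightly.
reencoded = [[p + 2 for p in row] for row in original]

# A genuinely different image (inverted) gets a different fingerprint.
inverted = [[255 - p for p in row] for row in original]

assert average_hash(original) == average_hash(reencoded)
assert average_hash(original) != average_hash(inverted)
```

      So copy-pasting pixels into Paint defeats the embedded manifest, but not a service that already indexed the original’s perceptual hash.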

  • ramble81@lemmy.world · 1 year ago

    I know blockchain is always a solution in search of a problem, but is this one place where it may work? Take a hash of the image and store that hash in a chain; that way you can always hash the image and see if it’s been altered.

      • ramble81@lemmy.world · 1 year ago

        Publicly traceable and verifiable hashes of an image’s authenticity. Submitting a hash of the image proves who submitted it and when; any alteration of the image would then yield a different hash, so you’d know you’re not looking at the original.
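        A minimal sketch of that idea, assuming a made-up entry format (not any real blockchain’s): each record commits to the image hash, the submitter, and the previous entry.

```python
import hashlib

def entry_hash(prev_hash: str, image_hash: str, submitter: str) -> str:
    """Hash linking this entry to the previous one."""
    data = f"{prev_hash}|{image_hash}|{submitter}".encode()
    return hashlib.sha256(data).hexdigest()

chain = []

def submit(image_bytes: bytes, submitter: str) -> None:
    """Record who submitted which image, chained to the prior entry."""
    prev = chain[-1]["entry"] if chain else "genesis"
    img = hashlib.sha256(image_bytes).hexdigest()
    chain.append({"image": img, "submitter": submitter,
                  "entry": entry_hash(prev, img, submitter)})

def is_original(image_bytes: bytes) -> bool:
    """True if this exact image was recorded in the chain."""
    img = hashlib.sha256(image_bytes).hexdigest()
    return any(e["image"] == img for e in chain)

submit(b"photo bytes", "alice@example.com")
assert is_original(b"photo bytes")
assert not is_original(b"photo bytes, retouched")  # altered image: new hash
```

        Note this only proves an exact byte-for-byte copy was registered; any edit, even recompression, produces a different hash.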

        • xep@kbin.social · 1 year ago

          How would that functionally differ from having an authority verify these hashes? Certificate authorities already provide a similar service, and C2PA would likely work in a similar way, sans any effort to implement “trustlessness.”

  • Nobilmantis@feddit.it · 1 year ago

    Shit smells like the browser add-on Google tells you to install if you want to opt out of Google’s tracking. Nice.

  • barryamelton@lemmy.ml · 1 year ago

    I typed b i t c h. It sucks that it got censored. It maybe depends on the community mods (I hope) or the instance…

  • Osa-Eris-Xero512@kbin.social · 1 year ago

    What I don’t understand is why having every smartphone or DSLR sign every image captured couldn’t solve this problem better and faster than something like this.

    • eth0p@iusearchlinux.fyi · 1 year ago

      From what I can tell, that’s basically what this is trying to do. Some company can sign a source image, then other companies can sign the changes made to the image. You can see that the image was created by so-and-so and then manipulated by so-and-other-so, and if you trust them both, you can trust the authenticity of the image.

      It’s basically git commit signing for images, but with the exclusionary characteristics of certificate signing (for their proposed trust model, at least; it could be used more like PGP, too).
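      The git analogy can be sketched as a chain of “commits” where each edit points at its parent, so the full edit history is recoverable by walking the parent pointers. The field names and author strings below are illustrative, not the actual C2PA manifest format:

```python
import hashlib

def make_commit(parent, image_bytes: bytes, author: str) -> dict:
    """One provenance 'commit': identity derived from parent + content + author."""
    payload = f"{parent}|{hashlib.sha256(image_bytes).hexdigest()}|{author}"
    return {"parent": parent, "author": author,
            "id": hashlib.sha256(payload.encode()).hexdigest()}

c1 = make_commit(None, b"sensor output", "CameraVendor")
c2 = make_commit(c1["id"], b"sensor output, color graded", "EditorApp")

commits = {c1["id"]: c1, c2["id"]: c2}

# Walking the parent pointers recovers who touched the image, newest first.
history, cur = [], c2
while cur:
    history.append(cur["author"])
    cur = commits.get(cur["parent"])

assert history == ["EditorApp", "CameraVendor"]
```

      Whether you trust the history then depends on whose keys sign the commits — a closed certificate-authority model or a PGP-style web of trust, as the comment above notes.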