-
Adversarial attacks are nothing new when it comes to ML.
-
It's an interesting idea, but the images on page 11 of the paper show that significant noise in the "cloaked" images will currently be a limiting factor for artists posting cloaked versions of their art. I imagine simply adding a little Gaussian noise will remove a lot of this protection, but it will be interesting to see how this develops :)
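A minimal sketch of the kind of post-processing that comment speculates about: adding mild Gaussian noise to a cloaked image with NumPy and Pillow. The filename and sigma are made up for illustration, and whether this actually strips the protection is exactly what the paper disputes.

```python
# Sketch: add mild Gaussian noise to a (hypothetical) cloaked image.
# "cloaked.png" and sigma=4.0 are illustrative placeholders.
import numpy as np
from PIL import Image

def add_gaussian_noise(path_in: str, path_out: str, sigma: float = 4.0) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noisy = img + np.random.normal(0.0, sigma, img.shape)  # per-pixel noise
    Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save(path_out)

add_gaussian_noise("cloaked.png", "noisy.png")
```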
-
So the download link is up now, they just released it. Wonder what all the fuss was about.
-
Can we remove the protection by simply passing the image through the VAE?
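For reference, here is a rough sketch of the VAE round-trip that question is asking about, using the standalone `AutoencoderKL` from the diffusers library. The model id, filename, and 512x512 resize are assumptions for illustration; whether this round-trip actually removes the perturbation is the open question.

```python
# Sketch: encode/decode an image through an SD-style VAE (diffusers).
# Model id, filename, and resolution are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

img = Image.open("cloaked.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 127.5 - 1.0)  # scale to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW

with torch.no_grad():
    latents = vae.encode(x).latent_dist.mode()  # deterministic posterior mean
    recon = vae.decode(latents).sample          # back to pixel space

out = ((recon.clamp(-1, 1) + 1) * 127.5).round().byte()
Image.fromarray(out.squeeze(0).permute(1, 2, 0).numpy()).save("roundtripped.png")
```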
-
The authors test Gaussian noise and JPEG compression, and those don't defeat their approach.
-
Here is a thought. If you take that 4000x2000 picture and shrink it down to a size that is still manageable for training, like 1000x500 px, all that glaze is going to be lost (see the sketch below). Look, I think artists should have the right to somehow opt out or protect themselves, but this isn't the way to achieve it. I have used basic machine vision systems where certain patterns were used to force the system to ignore something and to calibrate it. The system ran primarily within the smart camera units, with the central unit only relaying commands according to their reports and saving the images. To me this seems like something one could simply bypass: with the screencap method they themselves mention, by fiddling with the resolution, or by simply crushing or flattening the image by adjusting its properties. Narrow the range of information it carries so you only get the desired features. However:
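A quick sketch of the downscaling idea from the comment above, using Pillow. The filenames and sizes are illustrative; the point is just that resampling from 4000x2000 down to 1000x500 averages away most fine per-pixel detail.

```python
# Sketch: downscale a large (hypothetical) cloaked image to training resolution.
from PIL import Image

img = Image.open("cloaked_4000x2000.png").convert("RGB")
small = img.resize((1000, 500), resample=Image.LANCZOS)  # 4000x2000 -> 1000x500
small.save("downscaled.png")
```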
-
It's quite ironic that this uses SD to even work. I think the paper's wording tries to mask this; the website certainly seems to, at least. How would artists feel if they found out that the tool they're using is built directly on top of what they're trying to fight against? The application downloads the SD weights directly to a user's machine. Clearly AI is only good if there's a benefit for them. Also, IANAL, but violating the GPL license in this way would require them to release the full source, no? It's all part of one single binary executable. https://twitter.com/ravenben/status/1636439335569375238
-
I never thought there would be a market for artists to actively push for their work to be ignored and to have their influence diminished. I seriously want my ad blocker to have an option to block "art" that incorporates these techniques. If they truly want to have less influence, then I'm more than happy to oblige them.
-
https://github.com/huzpsb/DeTox/
-
So apparently, there is this new paper (not peer-reviewed) from the University of Chicago that aims to disrupt AI training by protecting images with an additional layer that should theoretically confuse the AI learning algorithm.
https://arxiv.org/pdf/2302.04222.pdf
What do you think about it? Is this the same nonsense as before, or can this potentially work?
The original site of the project is here: http://glaze.cs.uchicago.edu/