

Author Topic: Adobe Stock - clarification on generative AI submission guidelines  (Read 6382 times)




« Reply #1 on: November 22, 2023, 17:20 »
+3
Very glad to see that Adobe plans to tag all genAI content in the collection - any timetable for that and will that include retroactively adding the credentials to the 26+ million images already there?

Also glad to see the note that "...we are committed to making it easier for people to identify which images are generative AI before licensing them on Adobe Stock.", but the screenshot included is no different from the current display. Is there something new to be done? If so, will it be an overlay as there is for Editorial images? And when will this be done?

I did a search for Gaza just now and it doesn't appear that any of the pseudo-editorial genAI images have been removed. Will the existing items that no longer comply with the Nov 21 submission rules be removed?

It's not just titles of pseudo-editorial images that are a problem: "Updating our submission policies to prohibit contributors from submitting generative AI content with titles that imply that it is depicting an actual newsworthy event." Lots of content with a general-sounding title has keywords that make the image appear in searches for gaza, hamas israel war, etc. Customers may not realize that you do not offer editorial content (outside of the illustrative editorial) and just do a search. I don't think reviewers monitor keywords (given the massive amount of spam), but this new rule will be completely ineffective if keywords allow the unscrupulous to avoid detection simply by keeping the title clean.

« Reply #2 on: November 22, 2023, 18:49 »
0
Thank you Mat for letting us know!

If I may, I would like to ask what this part of the rules means:
"... from submitting generative AI content with titles that imply that it is depicting an actual newsworthy event. If a contributor mistitles generative AI content, we will review and take appropriate action, including removing the content or terminating the contributor account."

A few questions immediately come to mind here, as I already have an account block behind me:

(Question I) "If a contributor mistitles..." - does this mean that only the title is checked? Or in other words, is it all about the title, and the keywords don't matter?

(Question II) "... with titles that imply that it is depicting an actual newsworthy event" - does this mean only if a title names a specific event, like "35th birthday of ..."?

(Question III) Can files that are already online or have been submitted before the rule change also cause an account block under the new rule? In other words, do we now have to check every single image that was previously accepted?


I have prepared two examples from my account via screenshot (see below) and wonder if these pictures describe a specific event or if the title is general enough.
One of those images has already been accepted; the other was submitted before the new rule came into force. Of course, some of the keywords do refer in some way to actual events in the world, but the title names no specific event.

Perhaps I'm being a little overcautious, but this rule extension has me worried that the next and perhaps final account block is just around the corner.

Have a sunny day@all,
Michael

« Last Edit: November 22, 2023, 18:53 by JustAnImage »

« Reply #3 on: November 23, 2023, 09:03 »
+4
Addendum to the previous post:
I have now checked all my images and deleted everything that even comes close to a possible interpretation of the new directive.

All in all, uploading AI images to AdobeStock now feels like doing something illegal.
The threat of an account block is constantly hovering over you and, at least for me, I feel under massive pressure here - it is not fun anymore to upload...

In the process, another question has arisen:
(Question IV) From when does the new guideline apply, as it is not yet mentioned as an item in the submission window?

« Reply #4 on: November 23, 2023, 09:10 »
+1
Lol... "funny" in a way...

The mainstream "news", owned (in a big way) by blackrock/vanguard (which in turn is owned by a bunch of psychotic sociopaths), already misleads/deliberately misinforms in order to shape public opinion (& policies), and calls anyone who doesn't agree with their version of events "misinformation", hilarious... "war" for them is profitable (obviously very sad/tragic for bystanders, innocents - sadly the ones organizing it don't care), so in many ways it already is actually a big show... that's why they are called news "stories", because many times it's actual misleading fiction... The "news" already did cherry pick/re-use irrelevant content with "regular" stock footage/photos to deceive/mislead the public... lol, they didn't really need "ai" images... (although if people haven't clued in yet - the "mainstream news" actually has been using deep fakes/"ai content" that they've generated, to test public reactions/see if they notice anything, etc, etc the last couple years in particular)...

But... funnier still is if some people deliberately misled, trying to depict an "ai" pic as a "genuine" event. It is important to note the difference between the image being simply a "concept" (which is fine), as opposed to trying to deliberately mislead (i.e., "photo from Nov 15th at the strip"), etc.

I'd say if the image was properly labelled, i.e., "concept of war", "illustration", etc (as opposed to saying "this is a photo taken on xx date, from this building" (which of course would be intentionally misleading)) - if the image is properly labelled as an illustration/concept of certain events... then I think it would be fine - because it is 'storytelling' - and that is what the news stations do - they cherry pick certain things to manipulate people's emotions into taking certain actions... maybe a long time ago the "news" actually "informed" - now it is just (for the most part) one big manipulative machine...

So:

While I think the image (i.e., let's say "war") should just be general purpose (i.e., "concept of war" image), if someone did happen to attach specific "world" events (i.e., ukraine/gaza/whatever the flavor of the month is) - as long as it is clear that it is a "gen ai" image (and not misleadingly presented as a 'real life' photo), I think it should be fine... The onus is on the person using the picture/event whether they accurately convey information, or just use it as part of their storytelling, and whether they properly inform the reader/viewer that it is indeed fiction...
« Last Edit: November 23, 2023, 09:19 by SuperPhoto »

« Reply #5 on: November 23, 2023, 09:14 »
0
Addendum to the previous post:
I have now checked all my images and deleted everything that even comes close to a possible interpretation of the new directive.

All in all, uploading AI images to AdobeStock now feels like doing something illegal.
The threat of an account block is constantly hovering over you and, at least for me, I feel under massive pressure here - it is not fun anymore to upload...

In the process, another question has arisen:
(Question IV) From when does the new guideline apply, as it is not yet mentioned as an item in the submission window?

If you were just illustrating a concept, and clearly labelled the image(s) as "gen ai", then I don't think you should have done that. From the screenshots you posted, it appears you did clearly show it was a "concept", which should have been fine.

The onus is on the person using the images whether they try and deceive, or use it honestly (i.e., "this is a CONCEPT of a certain war/event/going on"), or if they try and portray it as a real-world event (i.e., "Picture taken on Nov 15th by so & so").

Most "news stations" intentionally deceive/mislead to shape public policy/make $$$ through "eyeballs", etc. Last 3 years perfectly illustrated that.

Anyhoo - onus would be on the individual to properly inform their readers/viewers/etc that they are using an image/video/etc to "illustrate" a "concept", and that it is not actual real life footage/images/etc.
« Last Edit: November 23, 2023, 09:20 by SuperPhoto »

« Reply #6 on: November 23, 2023, 09:38 »
+1
If you were just illustrating a concept, and clearly labelled the image(s) as "gen ai", then I don't think you should have done that. From the screenshots you posted, it appears you did clearly show it was a "concept", which should have been fine.
Thank you for your opinion, which I basically share - but since I already have an account block behind me, which was quite expensive, I have become very careful.

Fortunately, my account block was only for 8 days, which was probably only so short thanks to the help of Mat and Mr Gomez, but it still cost me all my top 25 images, which are hardly ever bought anymore; even after 3 months and 4,500 additional assets, I'm still not at the same level as before the account block.

That makes you cautious and now I'd rather wait and see what Mat has to say about the questions - I can always upload the images again if it doesn't conflict with the guidelines.

« Reply #7 on: November 23, 2023, 11:12 »
0
Double post
« Last Edit: November 23, 2023, 11:15 by Big Toe »

« Reply #8 on: November 23, 2023, 11:14 »
+1
I have prepared two examples from my account via screenshot (see below) and wonder if these pictures describe a specific event or if the title is general enough.
One image of those has already been accepted, one image was submitted before the new rule came into force and of course some of the keywords do refer in some kind to actual events in the world, but not a specific event in the title.

Well, if I understand you correctly, you use "Gaza" and "Gaza strip" as keywords for an image that was AI generated and therefore certainly not taken in Gaza, so the keywords are strongly misleading at best. I would not use them, whether it may be a loophole in the current guidelines or not. Also, what does this picture have to do with tourism?

Regarding the second picture: it does not look at all like any picture of actual climate blockades I have seen. Who could have a use for such a fake image, unless someone buys it by mistake, thinking it displays an actual event?

« Reply #9 on: November 23, 2023, 11:30 »
0
Yes, it's hard to see clearly the border between referring to a real place and using a name as a general term.

@MatHayward I have an example: after your update of the AI submission rules I reviewed my portfolio and found and changed a few images whose titles refer to the Amazon forest. In my mind that was a way to describe the place in general terms, as a big forest, because the images are absolutely not related to the real Amazon forest, but... now I'm struggling to catch whether my portfolio has other images with a generic reference to an environment like the "amazon forest".

So I have two questions:
First, can you better define if and what is acceptable as a reference to real places in general terms? I think "Amazon forest" could be a good example.

Second question: is it OK to CHANGE the title and/or keywords of previously accepted images, or is it mandatory to delete them from the portfolio if the title/keywords break the rules?

Thanks

« Reply #10 on: November 23, 2023, 12:07 »
+1
Well, if I understand you correctly, you use "Gaza" and "Gaza strip" as keywords for an image that was AI generated and therefore certainly not taken in Gaza, so the keywords are strongly misleading at best. I would not use them, whether it may be a loophole in the current guidelines or not. Also, what does this picture have to do with tourism?

Regarding the second picture: it does not look at all like any picture of actual climate blockades I have seen. Who could have a use for such a fake image, unless someone buys it by mistake, thinking it displays an actual event?
You're absolutely right, the picture wasn't taken in Gaza, because it wasn't taken at all.
However, certain basic elements of the generative AI (GAN network) have certainly been incorporated from images taken in Gaza - so it is not wrong and it is not right.
(But then we drift into the ideological discussion about whether AI images are really images or not.)

And this is exactly what my question above is about, whether the new rule only affects the title or also the keywords - and because I don't know, the images (as I wrote above) have already been removed.

On the subject of climate activists - well, just because you haven't seen this kind of gathering yourself doesn't mean that it hasn't happened in this form.
The picture was intended as general illustrative material on the subject.

However, on closer inspection (thanks to Ralf at this point) I have to admit to my shame that the AI picture with the climate activists is really bad - that was one of the first pictures I submitted.
There are clearly too many feet and too few hands in the picture - my bad :-)

I'll remove that picture later too...

« Reply #11 on: November 23, 2023, 12:08 »
0
Have a sunny day@all,

Rain is very useful

« Reply #12 on: November 23, 2023, 12:12 »
+1
All in all, uploading AI images to AdobeStock now feels like doing something illegal.
The threat of an account block is constantly hovering over you and, at least for me, I feel under massive pressure here - it is not fun anymore to upload...

 ;D ;D ;D Why don't you move on and do something else?

I use my reflex equipment, and I take pleasure in spending hours to produce beautiful photographs, in REAL LIFE. And I will perhaps continue to upload them to Adobe Stock, to see.
Adobe Stock doesn't necessarily have an interest in continuing to look down on experienced photographers. Time will tell.
« Last Edit: November 23, 2023, 12:17 by DiscreetDuck »

« Reply #13 on: November 23, 2023, 12:20 »
+1
Have a sunny day@all,
Rain is very useful
We definitely have enough rain here in northern Germany - a ray of sunshine every now and then is quite pleasant ;-)

« Reply #14 on: November 24, 2023, 09:37 »
+5
Not surprised to see more coverage - this time in The Washington Post (paywall) - of the masses of pseudo editorial genAI images on Adobe Stock.

"These look like prizewinning photos. They're AI fakes."

https://www.washingtonpost.com/technology/2023/11/23/stock-photos-ai-images-controversy/

The article raises many of the issues talked about here, and also points out that, despite Adobe Stock's change of policy and their blog post, "As of Wednesday, however, thousands of AI-generated images remained on its site, including some still without labels."

It also appears that Adobe's change of policy came about after The Washington Post and other publications contacted Adobe about all these pseudo-editorial images: "Adobe initially said that it has policies in place to clearly label such images as AI-generated and that the images were meant to be used only as conceptual illustrations, not passed off as photojournalism. After The Post and other publications flagged examples to the contrary, the company rolled out tougher policies Tuesday."

I did a few searches just now, and not only has nothing yet been removed, but there are new acceptances that weren't there a day or two ago.



It's fine to state a commitment to fighting misinformation, but there needs to be action to follow up for it to mean anything:

"Adobe is committed to fighting misinformation," said Kevin Fu, a company spokesperson.

Wherever the Washington Post used a photo from Adobe Stock's genAI collection, they have slapped a big red banner saying "AI-GENERATED FAKE PHOTO" over it:



They also noted that some results appeared to be AI generated but were not labeled as such, although the example they link to has an image number (281267515) that is way too low to be genAI. Those start with 530+million ... or thereabouts:

"Several of the top results appeared to be AI-generated images that were not labeled as such, in apparent violation of the company's guidelines. They included a series of images depicting young children, scared and alone, carrying their belongings as they fled the smoking ruins of an urban neighborhood."

They also mention other categories such as Maui wildfires and Black Lives Matter Protests:

"It isn't just the Israel-Gaza war that's inspiring AI-concocted stock images of current events. A search for Ukraine war on Adobe Stock turned up more than 15,000 fake images of the conflict, including one of a small girl clutching a teddy bear against a backdrop of military vehicles and rubble. Hundreds of AI images depict people at Black Lives Matter protests that never happened. Among the dozens of machine-made images of the Maui wildfires, several look strikingly similar to ones taken by photojournalists."

I cannot fathom why Adobe Stock would wade into such a mess; the money made cannot be worth the risk of damage.

« Reply #15 on: November 24, 2023, 14:57 »
+1
ADOBE, exclusive provider of propaganda visuals?

« Reply #16 on: November 24, 2023, 18:52 »
+1
"Oivay!" :P

The onus is on the person using the images whether or not they abuse them - NOT the person making the image, provided the creator properly labelled it as such.
The question for that person is whether or not they intentionally try to deceive - or - are up front that it is a 'depiction'/'concept'/etc.

I.e., for the contributor -

a) If they say "GEN AI", and it's LABELLED as GEN AI, or it simply says something like "depiction of war", or "concept of israel/ukraine/flavor of the month/war" - then that is FINE - because they are being 100% upfront that it is CONCEPTUAL. Otherwise, you get into some really stupid things then/slippery slope - like, well, should you have a "pregnant woman" who is not actually pregnant as a photo? Because "that" is misleading too... or the person who "stages" (with real photos) 'doing drugs', because, well "that ain't real either"... Or the NUMEROUS "real" photos (pre-"ai") - of staged "diverse boardrooms", and "diverse cheering" and "diverse blah blah blah" - those weren't "real" boardroom shots, they weren't "real" businessmen... and the companies that purchase them to put them on their websites - they don't say "oh yes, this is a fake portrayal of what our company actually looks like"... do you start then saying "omfg! that is SO FAKE! FAKE CONTENT"??? Should ALL of THOSE "real" photos be taken off - because (a) they were "staged" and not "real" candid boardroom shots/etc, and because they could 'potentially' be used for 'misinformation'? No. One uses one's brain and discernment.

It would be like saying Yuri Arcurs' 20,000+ "peopleimages" shouldn't be used, or Jacob Lund's extensive portfolio, etc, etc, simply because they are indeed all "staged"/"fake" photos. The people pictured in the images were MODELS posing as "board of director" members, "on the beach", "doctors", "lawyers", "eating out", etc. It would be totally nonsensical to say that. It is up to the person USING the photos/images to use them in the proper context, and attribute the image in the proper context as well.

Media (cnn/fox/whatever) - ALL owned by the same blackrock/vanguard companies, who use their properties to bully others - and deliberately mislead/'misinform', "spread misinformation" (such a silly stupid newspeak term). ("News" ain't "news" like it was before, they are STORIES).

Also - the "washington post" is owned by Jeff Bezos, and used as a weapon to attack other companies "it" wants to bully into doing certain things.

b) So - the onus is on the person USING the image - whether or not they correctly/accurately attribute it/etc. CNN (as well as other outlets, but them more so than others) has DELIBERATELY created fake/misleading content to mislead people on NUMEROUS occasions to "get the public upset" in order to push certain public policies and get people to "accept" it - "feeling" it's normal when it is not.

Anyways - back to the person USING it. The onus is on the person USING it to say whether it is fake or real footage - provided they were properly informed in (a). If they were indeed properly informed/aware that the footage from (a) (the contributor) was fake/staged, a real staged photo op or ai generated - and then try to pass it off as "REAL" - then the onus is on the person MISUSING the content. As long as it is properly labelled as ai gen/staged photo, and not directly misleading (i.e., the contributor didn't say "EDITORIAL: LIVE GAZA STRIP 11/15") - then that is fine.

Otherwise - it becomes super nonsensical and one could argue that there shouldn't be "any" stock photography/videography - because of the "potential for misuse" and not putting it in the proper context.

« Last Edit: November 24, 2023, 22:23 by SuperPhoto »

« Reply #17 on: November 29, 2023, 16:03 »
+2
No answers to any of the questions, but in monitoring new acceptances (gaza, hamas, israel war, palestine) the pseudo-editorial genAI collection continues to grow. Terms supposedly not allowed are in the titles and in the keywords.

Rules mean nothing if they're ignored with no consequences.

This Associated Press article headline says it all (this is not about images sourced from Adobe Stock, but at some point something similar will happen given what continues to be accepted)

"Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI's power to mislead"

https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47

« Reply #18 on: November 29, 2023, 16:11 »
0
No answers to any of the questions, but in monitoring new acceptances (gaza, hamas, israel war, palestine) the pseudo-editorial genAI collection continues to grow. Terms supposedly not allowed are in the titles and in the keywords.

Rules mean nothing if they're ignored with no consequences.

This Associated Press article headline says it all (this is not about images sourced from Adobe Stock, but at some point something similar will happen given what continues to be accepted)

"Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI's power to mislead"

https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47

The news ALREADY misleads at best (especially ppl @ "associated press"), when they aren't outright lying or manipulating. Seems they are just jealous they may not have a monopoly on that.

Onus is on the individual or entity USING THE ASSET, NOT the creator - provided the creator is not overtly misleading (i.e., as long as the creator does not say "LIVE VIDEO FOOTAGE" or "PHOTO TAKEN ___") and is open/honest about it being GenAI - it's FINE.

Otherwise - it gets really stupid - in that one can argue that ALL "stock photography" that was NOT taken in "natural settings" is "misused" and "misinformation".
Onus is on the USER TO USE IT CORRECTLY.

« Reply #19 on: November 29, 2023, 16:11 »
0
---------

« Reply #20 on: December 01, 2023, 01:25 »
+1
Mat, can you please clarify what exactly is allowed and what not?

It would be a real bummer if Adobe started randomly banning contributors again, because their rules are unclear.

In this thread, the article writing about Adobe's AI images shows examples such as a refugee girl or a riot:
https://www.microstockgroup.com/fotolia-com/ai-dumpster-fire-policies-land-as-in-trouble-again/new/#new


To my understanding, these images do not claim to have been taken at any particular real-life event and should be okay with Adobe. But what exactly is defined as an "actual newsworthy event"?

If an image shows a random riot or refugee, without specifying that it was taken at any particular location, event or time, is it still an "actual newsworthy event"?
The article seems to think so, but to my understanding these images are just concepts, not claiming to be from any event.


« Reply #21 on: December 01, 2023, 05:53 »
0
Mat, can you please clarify what exactly is allowed and what not?

It would be a real bummer if Adobe started randomly banning contributors again, because their rules are unclear.

In this thread, the article writing about Adobe's AI images shows examples such as a refugee girl or a riot:
https://www.microstockgroup.com/fotolia-com/ai-dumpster-fire-policies-land-as-in-trouble-again/new/#new


To my understanding, these images do not claim to have been taken at any particular real-life event and should be okay with Adobe. But what exactly is defined as an "actual newsworthy event"?

If an image shows a random riot or refugee, without specifying that it was taken at any particular location, event or time, is it still an "actual newsworthy event"?
The article seems to think so, but to my understanding these images are just concepts, not claiming to be from any event.

From my reading of what they've written, my belief/interpretation is this:

a) If it is in 'general' terms (i.e., "Girl in Riot", "War Torn Girl"), etc - then that is fine (or even for that matter, something like "Illustration Depicting Ukraine War"). Because it is clear it is an illustration/concept/generated/etc.
b) But if you pretend/try and deceive/make it look like a real photo, i.e., "Real Photo of Girl standing in Kiev, Ukraine on Sept 20th, 2023", then that is deceptive/misleading, and not okay, because it is trying to mislead/show that it is 'real' footage/etc, when it clearly is not.

« Reply #22 on: December 01, 2023, 08:03 »
+1
From my reading of what they've written, my belief/interpretation is this:

a) If it is in 'general' terms (i.e., "Girl in Riot", "War Torn Girl"), etc - then that is fine (or even for that matter, something like "Illustration Depicting Ukraine War"). Because it is clear it is an illustration/concept/generated/etc.
b) But if you pretend/try and deceive/make it look like a real photo, i.e., "Real Photo of Girl standing in Kiev, Ukraine on Sept 20th, 2023", then that is deceptive/misleading, and not okay, because it is trying to mislead/show that it is 'real' footage/etc, when it clearly is not.

That's also my understanding, but I really want it confirmed officially by Mat, because, reading that article, not everyone seems to see it that way, and I really do not want Adobe to ban my account for breaking rules I was not aware of breaking. Adobe has been super ban-happy lately, so I want to make absolutely sure that not only I understand it that way, but Adobe staff do as well.

« Reply #23 on: December 14, 2023, 22:05 »
+2
Thumbnails of AI images now have an overlay in the bottom left corner identifying them as AI when you roll over them. There is also a new warning in the file details for AI images: "Editorial use must not be misleading or deceptive."


 
