How Artificial Intelligence (AI) Will Affect the Design Industry and How We Can Be Ready for The Future of Design
Fear is not the only response to developing technologies; understanding them can put you way ahead of the curve.
Graphic designers can breathe easy; illustrators might be shaking in their boots.
The biggest question I get asked about the rise of AI programs like MidJourney, ChatGPT or Adobe’s new Firefly is, “Will designers and creatives be replaced by these programs?” There is a lot of fear about software and algorithms taking over our industry, and rightfully so. Some of the artwork and illustrative work coming out of MidJourney, for example, is outright surreal and stunning: pieces that would have taken a human days or weeks to produce.
And while there is some incredible digital art being created with AI image generation tools like DALL-E or MidJourney, it still has an AI “look”. It is not human, after all, and can have a hard time with backgrounds and details. Take a look at this amazing image created with MidJourney. The main character looks flawless, but a quick glance at the background reveals a building with a chopped-up appearance and no real definition. The little girl’s forehead also seems a bit disproportionate, but a human illustrator can easily tweak and change almost anything; an AI bot only knows what it is fed. Still, for most people it is a beautiful image that could easily be mistaken for a real painting.
This can lead to an almost distorted reality, like you are staring right into an acid trip, a surreal dream or a nightmare.
AI image bots have no clue what ice cream tastes like or the joy you feel when you lick a cold popsicle on a hot day. They have no clue what to do with emotions except to take what they have learned from studying other photos on the internet tagged with that emotion and produce what they think is reality. This can lead to a disconnect between human-created visuals and artificial ones. Illustrators at least still have that to lean on.
The last two years have seen some wonderful AI-led technological advancements in image and text generation. You might have heard that ChatGPT took over the Twitter and YouTube universe over the last year, with people finding really interesting ways to write plays, books and courses using the text generation tool.
I even did a YouTube video where I asked it a bunch of burning questions I had about graphic design, like “How to Make a Million Dollars as a Graphic Designer”. I also asked it to generate design prompts for me so I could build my portfolio with realistic mock client work, which can be genuinely helpful for a designer.
So where does this leave creatives?
So, when am I going to be replaced again, exactly? First, let’s look at what is currently out there. How do AI tools stack up against real human designers? What better way to test this than to try to use AI to generate a finished logo design?
I am happy to report that logo design is best saved for real humans and not bots.
I typed in a few really good keywords and this logo generator popped out some undesirable results. Most of them I do not connect with at all. The one on the bottom left does not even look like it tried. Well, of course it did not try; it is software, not a human. How could it possibly take my name and a few keywords and truly understand my uniqueness as a creative and designer? It never had a chance to review my portfolio, ask about my favorite designers or learn how I treat my clients differently. Right now, there is no way to communicate that to an algorithm.
Let’s try the more popular AI tool MidJourney to see how it handles logos.
We will take a look at writing prompts a bit later on, but I instructed it to create a logo for my personal brand that had to include the words “Lindsay Marsh”. I also put a few other keywords in there, like branding and creativity. You can see the results below. I can spot a few of the letters in my name, but it is clearly struggling with typography and text.
I am not the only one with this issue; I located several other logo attempts where the typography never quite fell in line with what was requested. This might be because the tool specializes in compositing images and is not trained to compose and arrange typography, letters or language. Since the cornerstone of a logo design is its typography, these image generation tools may not be the answer.
While the illustration part of a logo can look really nice coming out of an AI image generator, typography is best left to the human experts: us.
Once again, we can breathe a big sigh of relief: human designers are still very useful, so maybe society will keep us around a bit longer. What about other aspects of design? Illustration is where I kept seeing more opportunities for AI to really take over. Pattern design is big business, and licensing pattern designs and selling them on Etsy can be very profitable. Some creators have found ways to use MidJourney to create really nice looking seamless pattern designs for iPhone cases, blankets, puzzles and more. The applications are endless. I found the video below really interesting and wanted to include it here.
Watch how this YouTuber creates seamless patterns that he can sell on Etsy in a matter of minutes using MidJourney.
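One way to experiment with this yourself is MidJourney’s --tile parameter, which asks the bot for an image that repeats seamlessly. The subject and styling below are my own placeholder choices, not the exact prompts used in the video:

```
/imagine prompt: watercolor wildflowers and eucalyptus leaves, soft pastel palette, repeating surface pattern --tile
```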
Writing prompts will be an entire job in itself
AI bots and image generators work by having the user input a prompt. The prompt directs the bot on what to produce and in what quality, style, resolution or size, and it can also point the bot to which images to draw inspiration from. It only takes an hour or two to get the very basics of prompt writing down, but after generating a few images without the desired results, you quickly see how becoming an expert at AI-generated imagery could be a full-time study.
I do believe that one day soon there will be full-time prompt-writing jobs at larger companies looking to utilize AI tools. In fact, I just came across one today on Indeed.com.
I am not going to detail prompt writing in this article, as there is just so much to learn. An example of the basic structure from the MidJourney resource site is here. A prompt breaks down into a few parts: first the image prompts (if you have any), then the text description of what you are looking for, and lastly the parameters (of which there are many) that let you ask for 4K, 3D looks and other popular styles. A rough example of that structure is sketched below.
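To make that anatomy concrete, here is a hedged sketch of a single MidJourney prompt. The reference URL, subject text and parameter values are purely illustrative, and the exact parameters available change between model versions:

```
/imagine prompt: https://example.com/reference-sketch.jpg a hand-lettered coffee shop logo, warm earth tones, flat vector style --ar 1:1 --v 5
```

The optional image reference comes first, the plain-language description sits in the middle, and the double-dash parameters (here an aspect ratio and a model version) come at the end.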
Matt Wolfe has an excellent video on writing MidJourney prompts over on his YouTube channel. It is worth spending a few hours dabbling in this to understand the power and impact it has on the illustration, pattern design and art worlds. Once again, all AI tools will require some understanding of prompt writing, not just MidJourney but also DALL-E and other alternatives.
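The same idea carries over when these tools are driven from code instead of a chat box. As a rough, hedged sketch only (the prompt wording, image size and output handling are my own assumptions, not anything from this article), this is roughly what sending a prompt to DALL-E through OpenAI’s official Python library looks like:

```python
# Minimal sketch: sending a text prompt to DALL-E via the OpenAI Python SDK.
# The prompt wording, image size and printed output are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

response = client.images.generate(
    prompt="flat vector illustration of a fox reading a book, warm autumn palette",
    n=1,               # how many variations to generate
    size="1024x1024",  # square output
)

print(response.data[0].url)  # URL where the generated image can be viewed
```

Notice that the entire creative decision still lives in that one prompt string, which is exactly why prompt writing is worth practicing.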
Where does AI source its photos to create such masterpieces?
It is hard not to talk about the elephant in the room. As we discussed before, MidJourney, DALL-E and other AI image-generating tools scraped a huge swath of photos from across the internet to train their models. That means copyrighted photos, illustrations and graphics were compiled together to teach the bot what a user might want to see.
There is an interesting article that claims that one of the founders of MidJourney knew this was the case and admitted to not knowing what to do about giving proper copyright ownership to artists. You can read that article here.
When creating AI art, you can also add reference images to help the bot better understand what you are looking for. There is no way to prevent users from pulling copyrighted work out of a Google search and uploading it into their prompts. That means if you use images that do not carry a Creative Commons 0 license or a public domain license, you could be opening yourself up to being sued for deriving artwork from copyrighted images.
AI tools have infringed on creators’ rights.
This was all going to come to a head at some point. Several artists have banded together to sue MidJourney, along with art portfolio websites like DeviantArt, for allowing copyright-derived AI work to be posted without giving proper credit to the original authors.
The following blurb was taken from MidJourney’s Wikipedia page.
In January 2023, three artists: Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists.
This could be a tricky court case.
On one hand, AI tools have been trained by absorbing data from most of the internet, and it could be hard to prove individual copyright infringement for images derived from such a gigantic dataset. On the other hand, there have been cases where individual artists can type their name into an AI prompt and clearly see how their artwork was used to formulate the results, albeit not as an exact copy.
I think the line will need to be drawn between where artists can claim copyright over AI images and where AI tech gets to be “inspired” to create something new enough that it is no longer a copy of the original. It will be an interesting case to follow, and I am not one to pick sides too early.
Who owns the work created by AI image generators?
If I put a prompt into an AI text or image generator, do I own the prompt, the generated image or text, or both? It is a complex legal issue; if you want to read more about it, this article (which is amazingly detailed) is worth the read.
A human element has to be present for any claim of copyright ownership to hold. That means AI tech cannot claim ownership of the images. Under current copyright law, AI artwork does not really have an owner, although the terms of use of some of these programs do assign ownership of an image to the creator or prompt writer. Whether that claim would hold up in a court of law is the next question, as nothing stops third-party companies from taking you to court for using their brand imagery in your AI-generated photo. We are truly living in a new digital Wild West.
So what should you do if you want to take the safe, high road, protect “real” artists’ work and make sure they get proper credit? First, I would avoid putting specific artists’ names into AI prompts, and second, there are other tools coming our way.
Adobe’s new Firefly tool is the answer to this issue
Adobe announced in the spring of 2023 that it would release its new Firefly AI image generation tool to public beta. Yours truly signed up for the beta and is eagerly awaiting acceptance (a separate article or video will be released detailing my experience with this new Adobe tool).
Adobe claims on its website that it uses only legal, artist-approved photos to generate its images. I can start to feel much better about using AI artwork now! Yippee!
“The current Firefly generative AI model is trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired.”
As Firefly evolves, Adobe is exploring ways for creators to be able to train the machine learning model with their own assets so they can generate content that matches their unique style, branding, and design language without the influence of other creators’ content. Adobe will continue to listen to and work with the creative community to address future developments to the Firefly training models.
I have heard from several creators that Firefly does not give results as effective and polished as those from tools like DALL-E or MidJourney.
Because Adobe Firefly sources its training images only from an approved library, some results have been a bit more generic and “stock photo like”. It will be interesting to see how it evolves and improves past its beta period.
If Adobe is getting into this, you better pay attention.
Expect a lot of Firefly’s features to be integrated into several popular graphic design tools in the coming years, and yes, I mean Photoshop and Illustrator. The feature shown in the image above lets you vectorize a sketched logo and create different variations of it in vector format. Just imagine the current Adobe Illustrator Image Trace tool, but on steroids.
The future of design utilizes lots of AI tools and that is nothing to fear.
Looking at the wide array of AI tools Adobe is dreaming up for its software is pretty exciting. Imagine typing a text prompt and generating a layout for a social media post in a few seconds, or taking a pencil sketch and turning it into a fully colored illustration just as quickly.
Some of the more mundane, repetitive tasks will be completed for us, and we can finally focus on the most critical design choices: headlines, style, color, photos, and how we represent brands and connect with audiences through visuals.
Graphic designers finally get to move from being pixel pushers to being real “big picture” thinkers in the visual space. At least, that is how I think a forward-thinking designer should spin this whole AI drama.
Some of these tools have existed in Adobe Photoshop for years.
A few months ago I used some of the updated Neural Filters available in Adobe Photoshop, specifically the Landscape Mixer filter, to take one photo and create several different seasons from it. I asked you on my Instagram which photo showed the original season. The majority of you picked one of the AI-generated images instead of the original photo, which was the summer one. If you look at the trunk of the tree on the left, you can see where it fails to connect midway up in both the fall and winter versions. If you generate a lot of AI images, you start to notice these little imperfections and can spot AI work a mile away. Remember, AI-generated work has this distinct “look”.
We are still years away from AI tools taking over our current tasks as designers.
Most companies, including Adobe, are pouring their research efforts into expanding their AI tool offerings, but they are still a ways off, years even. This is all still very new, and most current tools are just not polished enough to replace you or me in any real capacity as designers.
I do think it is wise to start learning how to work with AI image generation tools and how to create and tweak prompts that generate the right visuals for us. There is no doubt this will be a new part of our job description in the next 5 to 10 years, and you want to be ready.
Be open to new tools that may help to evolve our craft.
Make sure to leave a comment at the end of this article with your thoughts and questions.
Graphic design cannot remain as it is today forever.
Even tools like Canva are slowly eroding opportunities in social media, poster and stationery design work. Recently Canva has really been innovating, making it easier for non-designers to create videos, animations and really polished layouts and design templates.
If graphic designers use fear as an excuse to hide from learning new things, we will never rise above our titles and command more attention from businesses. We will fade away into irrelevance.
What if the very tools that we fear can be learned, mastered and utilized to make our designs 10x better for clients? The only way to move past fears and worries is to head right into the storm. Take some time to explore some of the more popular AI bots, tools and filters to expand your design processes and workflows. A few ideas to explore are below. (These are just suggestions; there is no need to do all of them!)
Generate a few images using MidJourney.
(The learning curve is steep with this one, but it will help you work through the process of writing prompts.) UPDATE: Even in the two days it took to write this article, MidJourney decided to suspend all free trials (more about that here) because they feared abuse of the software. Things in this space change in a matter of days. Also, avoid using images generated with MidJourney for commercial work for now, until they sort out their legal and ownership issues.
Use ChatGPT to generate a few prompts for design projects for your portfolio (see the example prompt after this list).
Use Adobe Photoshop’s Neural Filter tools introduced in 2020 to edit and modify your images.
Sign up for Adobe’s Firefly beta (requires an Adobe login) and check out its tools once you are accepted.
Try out Canva’s Text to Image tool to explore writing prompts for AI-generated images.
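If you are not sure how to phrase that ChatGPT request, here is one hedged example of the kind of prompt I mean; the business types and deliverables are placeholders you would swap for your own interests:

```
Act as three different potential clients. For each one, write a short design brief
for a fictional small business that includes the company name, industry, target
audience, the deliverables needed (logo, social media posts, packaging) and a
rough timeline, so I can create realistic mock projects for my portfolio.
```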
Paywalls are coming for you, sadly.
MidJourney currently lets you create 25 images for free and then charges a fee for use after that, and the same goes for its competitor, OpenAI’s DALL-E. ChatGPT is moving out of beta and will eventually charge a subscription or per-use fee. Adobe Firefly will most likely require a paid monthly Adobe Creative Cloud subscription, or worse yet, a separate subscription on top of Adobe Creative Cloud monthly fees (it is not clear what they plan yet, as it is still in beta). I just surpassed my 25 free MidJourney images and would rather not pay the $30-a-month subscription to generate more, although I had so much fun that I am tempted.
It takes a lot of money to produce such advanced tools, and these companies plan to make it back and then some. This is a reality designers have been facing for decades now: pay to play.
The reality of AI is hitting creatives hard.
I came across a Reddit post about a 3D modeling artist who lost their job because of advancements in AI-generated content. It is a compelling read, and one that gives us a peek into individual creators already struggling to find future work.
While AI is creating opportunity, it is also creating misfortune for those in the wrong place at the wrong time.
Be afraid or adventurous, it’s your choice.
I generated the freaky image below using MidJourney with the prompt “Graphic Designer Fearful of AI Taking Over His Job”.
I am going to pair that with another one I created from the prompt “Happy Graphic Designer With Lots of Rainbow Color Droplets”. It is sure to make your day or, in the case of the last image below, haunt your dreams.
Before reading on, are you a subscriber to my newsletter yet? If not, you should be!
Also make sure to check out my various design classes and design theory PDF book here on my website. With over 400,000 design students, I would be happy to have you join them if you are not one already!
Make sure to leave a comment below with your thoughts and questions about how AI will change and affect the creative industry.
UPDATE #1: So, I was approved for the Adobe Firefly beta a few days ago and wow, some of the images it created in mere seconds were shocking, scary and really fun and cool! I will be posting a separate article detailing my experience soon, but you can see a few of the letters I created using its text effect AI tool.
I have to admit I got a little addicted to seeing the results of the prompts I typed in, tweaking them a hundred times to find the right result.
It felt a bit like playing the lottery, hoping a great image would just appear in front of me. After this experience, I came away with more optimism about how AI can bring a newfound excitement to turning the complex ideas in our heads into visuals.
As someone who is getting older and has issues with my eyes, neck, wrists and back from so much time in front of the computer, I have hope that I can continue to create and express those complex ideas, even if it is with the assistance of technology, and with less of the tedious, body-wearing work of pulling images together by hand.
Some would call this “cheating”, but many of us are getting to the age where we just want to find joy in creating something again without sacrificing our bodies.
The more I played around with Firefly the more my optimism grew for what AI can do for us. The fear I had even a week earlier is still there, but I have more of a cautious excitement for the future of AI in the creative industry.
Is Firefly perfect? No. It is still in beta, and because it sources its data only from approved photos, its image pool is much smaller.
As mentioned earlier, that means it will struggle to match the detail and accuracy of some of the larger AI tools. It still has a hard time with faces, eyes and hands (like most AI tools still do). I had more luck generating animals, textures and nature imagery than human imagery, so it might be wise to start with those subjects when creating your first AI-generated images.
UPDATE #2
Yes, another update!
This fall (2023), Adobe Photoshop came out with a new AI update: the Generative Fill and Generative Expand tools.
I have put in around 100 hours with it and even created new lessons, which you can watch as updates to my Udemy course, Graphic Design Masterclass: Learn GREAT Design, or as a mini class on Skillshare.
You can watch the first lesson on the Generative Fill tool for Adobe Photoshop below.
I think it is a glimpse into how AI will affect us in the short term. Adobe has a similar tool for vector work in Adobe Illustrator that is currently in beta. I have not been as impressed with the Illustrator version, but once it is out of beta I will spend more time with it.
Happy to hear everyone’s thoughts on AI or anything else related to this topic, or your experience with it.
ALERT! NEW LESSONS NOW LIVE! Artificial intelligence tools are changing the entire creative space. Adobe Photoshop has come out with an amazing AI tool that absolutely blows my mind.
This new Generative Fill tool gives us insight into the power of combining AI and creativity. In these new lessons we will talk about this game-changing feature added to Adobe Photoshop’s latest software version.
How to Watch
Free course updates in the Udemy course Graphic Design Masterclass: Learn GREAT Design (the new lessons are in Section 10).
The updates are free if you are a prior student. If you are not part of the class yet, never pay full price; grab a course coupon here: https://lindsaymarsh.myportfolio.com/udemy-discount-coupons
New Class on Skillshare
Master The Adobe Photoshop Artificial Intelligence Tool - Generative Fill
Class link: https://skl.sh/45wsnKS
We will get a handle on the basics of the Generative Fill and Generative Expand tools in Adobe Photoshop by doing basic projects.
We will change somebody’s clothes and hairstyle within seconds, as well as create heart-shaped clouds. We will come to understand how the tool works by writing effective prompts, creating the right selections and more. There will be a few student projects along the way that you can try your hand at, including a body swap on a fox and creating a mythical dinosaur creature. So, let’s learn this amazing new Photoshop AI tool together so we can be at the forefront of emerging technological advancements and upgrade our creative workflows.