99designs is a global design platform that allows users in need of graphic design deliverables to connect with freelance designers from around the world. This project will deal specifically with the design contest service, wherein a user writes a brief, chooses some parameters, and launches the contest.
Once the contest is launched, designers read the brief and can choose to enter. Over the course of 2 rounds spread across roughly a week, designers submit concepts and users provide feedback. Revisions are made and the cycle continues until, eventually, the user awards a winner and receives the copyright to the winning entry.
Users often seek refunds for under-performing design contests, unaware of the role their own communication and collaboration play with each designer. Users who engage with their designers through feedback tools such as declining entries, star ratings, and comments typically request refunds 22% less often than users who provide no feedback.

I conducted interviews with 20 users who requested refunds while running their first design contest. My aim was to identify patterns of behavior that may have led to dissatisfaction with the designs in each contest. Over the course of these interviews, 3 main behavior patterns kept resurfacing. Below, I've distilled those 3 behaviors into personas to better illustrate them.

Steven launches his contest without much expectation and isn't very motivated to participate in the creative process. He doesn't see himself as particularly creative, so he doesn't want to interfere with the designers' work. But he is keenly aware of the refund policy, so running a contest feels relatively low risk.
Steven uses the "Set It and Forget It" approach. He launches his contest and doesn't interact with the entries at all. He returns after the Qualifying Round concludes to review the submissions he's received. He finds his contest flooded with generic, low-quality designs, confirms his suspicion that this is the ceiling of what's possible, and promptly requests a refund.

Julie heard about design contests from a colleague she trusts, and she's excited to see what sort of designs she'll get. Julie implements the "Golden Calf" approach. She launches her contest and, over the first couple of days, doesn't see anything she really likes.
On Day 3 of the Qualifying Round, she receives a design she's really impressed by. She rates this design 5 stars and leaves the rest unrated. Julie doesn't realize that every designer can see the ratings; she's simply using the 5 star rating to bookmark her favorite design.
By the end of the Qualifying Round, Julie is disheartened to find that all of the new designs that have come in are obviously copying the 5 star design. Designers are now beginning to accuse one another of copyright infringement. Unimpressed with the lack of original concepts and overwhelmed by the accusations of cheating, she requests a refund.

Abigail found out about design contests on her own while researching how to get her logo created. She's enthusiastic about the process and can't wait to get started.
Abigail implements the "You're All Superstars!" approach. As the first designs come in, she rates them all 5 stars. While she's not particularly excited by the early entries, she wants to encourage each designer. What Abigail doesn't realize is that she's actually signaling that she's found exactly the type of concepts she was looking for.
Over the next few days, experienced designers avoid her contest because the competition appears stiff, while less experienced designers flood in because it appears their skill set is exactly what Abigail is looking for. At the end of the Qualifying Round, Abigail is disappointed by the quality of designs and requests a refund.
All 3 of the behaviors demonstrated above have one thing in common: each user misunderstood the importance of providing accurate feedback to their designers. The better the feedback, the more experienced designers will participate, and the higher the quality and quantity of designs the user can expect throughout the contest.
If we can use UX principles to help make providing useful feedback obvious, easy to understand, and rewarding, we will decrease the number of first time users who request refunds on their contests.
First, we need to take a look at the current UI for each individual design entry to see if we can make the optimal behavior or interaction more obvious and intuitive. Below, I compare the old design entry layout with a new version, and detail the changes made.


1. The star ratings and trashcan (decline button) are clearly visible at the bottom of each entry.
2. There is no further information provided to indicate or imply how best to engage with these tools.
3. Once a rating is selected, it appears as dark grey stars against the remaining light grey stars.
1. Small change here. For the decline button, I replaced the trashcan icon with an "X". This should incentivize interaction by subtly softening the implication of declining a design.
2. I moved the decline button from the far right to the far left. I also added an "award winner" button on the far right. This will help to imply that interacting with designs is a linear process: from declining, to rating, and finally to awarding.
3. I added a status bar to the bottom of the entry to help clarify what each rating represents to both user and designer. Statuses go from "Prospect" to "Contender" and finally "Top Contender".
4. Both the star ratings and the status bar are now color coded with colors that represent progress (aqua to purple). This should reinforce the linear process of design improvement, left to right, light to dark. A quick sketch of the rating-to-status mapping follows below.
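To make that mapping concrete, here's a minimal TypeScript sketch of how a star rating could drive the status label and progress color. The labels come from the design above; the star thresholds, function name, and hex values are my own assumptions, purely for illustration.

```typescript
// Map a star rating to the status label and progress color shown in
// the entry's status bar. Thresholds and hex values are assumptions.
type Status = "Unrated" | "Prospect" | "Contender" | "Top Contender";

interface StatusStyle {
  label: Status;
  color: string; // aqua-to-purple progression, light to dark
}

function statusForRating(stars: number | null): StatusStyle {
  if (stars === null || stars < 1) {
    return { label: "Unrated", color: "#c4c4c4" }; // grey, no rating yet
  }
  if (stars <= 2) return { label: "Prospect", color: "#56c6cf" }; // light aqua
  if (stars <= 4) return { label: "Contender", color: "#5a7fd6" }; // mid blue
  return { label: "Top Contender", color: "#7b4fd0" }; // dark purple
}
```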
Here are 3 variations on the color scheme for the status bars. Originally I planned to go with V3, as the "heat" analogy seemed to fit nicely. But after some accessibility testing, V1 and V2 proved much easier for color-blind users to differentiate.


The 'New' view features design entries that have yet to be declined or rated. The status bar for each design is grey and reads "Unrated". As soon as the user engages with an unrated design, they will get immediate feedback. If the user declines the entry, the design will be removed from the view. If the user rates an entry, the star rating and status bar will change and the entry will move to the bottom of the order, letting the user focus on the remaining unrated designs. Once the user navigates away from this view, the rated designs will be removed and only accessible from the 'Rated' view moving forward.
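To pin down that behavior, here's a rough TypeScript sketch of the 'New' view logic. The Entry shape and function names are hypothetical; the real data model would come from the product.

```typescript
// Hypothetical shape of a design entry as the 'New' view sees it.
interface Entry {
  id: string;
  rating: number | null; // null = not yet rated
  declined: boolean;
}

// Declined entries disappear from the view; rated entries sink below
// the unrated ones so the user can focus on what still needs feedback.
function newViewOrder(entries: Entry[]): Entry[] {
  const visible = entries.filter((e) => !e.declined);
  const unrated = visible.filter((e) => e.rating === null);
  const rated = visible.filter((e) => e.rating !== null);
  return [...unrated, ...rated];
}

// Once the user navigates away, rated entries leave the 'New' view for
// good and are only reachable from the 'Rated' view.
function pruneOnNavigateAway(entries: Entry[]): Entry[] {
  return entries.filter((e) => !e.declined && e.rating === null);
}
```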
As I began to prototype the new page layout for design contests, it became apparent that users may need some help at the very beginning of the contest to understand how and why they should engage with the new feedback setup. I decided a quick speech bubble tutorial would do the trick.
The tutorial triggers when the user receives their first entry. The user can click through the tutorial in 2 to 3 clicks, and if they ever need to access it again, we can add a "question mark" icon to the top of each entry that will replay it.
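As a rough illustration, the trigger logic could look like the TypeScript sketch below, assuming a browser environment. The storage key and function names are hypothetical.

```typescript
// Fire the tutorial exactly once, when the very first entry arrives.
// Persisting a flag keeps it from replaying on later visits.
const TUTORIAL_SEEN_KEY = "feedbackTutorialSeen"; // hypothetical key

function onEntryReceived(totalEntries: number): void {
  if (totalEntries === 1 && !localStorage.getItem(TUTORIAL_SEEN_KEY)) {
    showTutorial();
    localStorage.setItem(TUTORIAL_SEEN_KEY, "true");
  }
}

// The "question mark" icon on each entry simply replays the walkthrough.
function onHelpIconClick(): void {
  showTutorial();
}

function showTutorial(): void {
  // Step the user through the 2 to 3 speech-bubble steps here.
}
```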


After the user finishes rating the unrated designs, they can click on the 'Rated' tab along the top. The 'Rated' view will automatically sort entries from highest to lowest rated. This makes it quick and easy for users to focus their energy on the designs with the most potential.
As new entries come in, I've added a notification bubble to prompt the user to click back into the 'Unrated' tab to decline, rate and respond. This will establish a cycle of engagement between the user and their designers.
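Here's a short TypeScript sketch of both behaviors, the highest-to-lowest sort and the notification count, reusing the same hypothetical Entry shape as the earlier sketch.

```typescript
// Hypothetical entry shape, as in the 'New' view sketch.
interface Entry {
  id: string;
  rating: number | null;
  declined: boolean;
}

// 'Rated' tab: highest-rated entries first, so the concepts with the
// most potential get the user's attention.
function ratedViewOrder(entries: Entry[]): Entry[] {
  return entries
    .filter((e) => !e.declined && e.rating !== null)
    .sort((a, b) => (b.rating ?? 0) - (a.rating ?? 0));
}

// Count shown in the notification bubble on the 'Unrated' tab.
function unratedBadgeCount(entries: Entry[]): number {
  return entries.filter((e) => !e.declined && e.rating === null).length;
}
```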
With the minimum viable product designed, it's time to prototype the system so we can see it in action. The prototype gives us a chance to test these ideas and elicit user insights to help us confirm what's working and/or pivot if need be.
Below, I've created a Loom video walkthrough to demonstrate the prototype I put together in Figma. This is more of a guided walkthrough, conveying how I would explain this prototype to my team/stakeholders.

Next steps would be to test these changes with users, as well as designers. Then we could take those learnings to either iterate and refine these improvements, or pivot.
I would identify metrics and KPIs to measure the benefits these changes have on the overall user experience. Recommended metrics to monitor would be the percentage of refund requests against completed contests, and the number of escalations received. Long term, we could look to see if the percentage of repeat customers goes up in the coming months. Users won't engage with these changes until they've purchased a contest, so there should be no effect on the conversion percentage, but it would be good to keep an eye on that as well.
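As a simple illustration, the core KPI boils down to refund requests as a percentage of completed contests. The field names below are assumptions about how the analytics data might be shaped.

```typescript
// Hypothetical analytics rollup for a given time window.
interface ContestStats {
  completedContests: number;
  refundRequests: number;
}

// Refund requests as a percentage of completed contests.
function refundRatePercent(stats: ContestStats): number {
  if (stats.completedContests === 0) return 0;
  return (stats.refundRequests / stats.completedContests) * 100;
}

// e.g. refundRatePercent({ completedContests: 400, refundRequests: 36 }) === 9
```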
This project was so much fun to work through. I think there are many more opportunities for evolution when it comes to design contests, but the changes detailed above provide a stable, scalable platform to build upon for the future.
If you made it this far, I want to thank you so much for your time and attention. Cheers!