Following some lovely feedback from other authors, I had another look at the model. You might be tearing your hair out by this stage but this is how the sausage is made. Better to notice the cow is still kicking before stuffing it into the mincing machine.
The feedback made me think more critically about the core objectives, and the simplest way to achieve them. The simpler the model, the easier it is to execute.
When writing, an author must always consider the reader’s perspective, and the same applies to writing contests. Contests primarily exist for the benefit of prospective readers, to efficiently guide them toward the best books. Let’s recap the main problems facing potential readers who would like to try more indie novels.
The supply of books swamps demand, yet readers are hungry for great stories. How can this paradox be true? I see the selection of books on Amazon as an enormous feast, but the offering is a jumbled heap that mixes sushi, donuts and every other imaginable dish, with quality ranging from Michelin star restaurant to greasy spoon diner. Only the hungriest diner will dig through this steaming pile for a chance of finding one delicious morsel.
Existing mechanisms that point readers toward the best books fail ever harder as the book pile grows larger. The star rating system is meaningless. Goodreads reviews are ungated and easily gamed. Amazon reviews require so many hurdles that they are too rare. The Amazon front page is a chicken and egg game which mostly rewards established sellers and heavy advertisers. The few existing writing competitions that remain relevant are soon captured by manipulators.
The desires of potential indie readers are simple. They want a convenient place where they can browse a suitably condensed short list of higher quality books and save time filtering the mountain of books themselves. They want a simple way to find books which cater to their particular, peculiar tastes.
With these reader needs in mind, I propose a simplified Keystone Award.
Step 1.
Entrants fill a form with:
Title
Cover image
Blurb
Novel manuscript
Subgenre (long list of checkboxes, select all relevant)
Tone (hard/soft sci-fi, dark/cozy etc., each on a 1-10 scale)
Step 2.
Admin splits entrants into subgroups of 5-10 entries with compatible subgenre/tone.
Step 3.
Entrants receive whole manuscripts of other books in their subgroup.
Entrants evaluate the books by whatever means and criteria they see fit.
Entrants rank books from best to worst (scored 10 to 1 by admin).
Step 4.
Admin publishes the aggregated rank scores in each subgroup.
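The ranking and aggregation in steps 3 and 4 amount to a Borda-style count. Here is a minimal sketch in Python, assuming the best-ranked book earns 10 points and each place below it earns one point less (my interpretation of "scored 10 to 1" for groups smaller than ten; the function and ballot names are illustrative, not part of the proposal):

```python
from collections import defaultdict

def aggregate_ranks(ballots):
    """Aggregate per-entrant rankings into total scores.

    ballots: one ranking per judging entrant, each a list of book
    titles ordered best to worst. The best-ranked book earns 10
    points; each subsequent place earns one point less.
    """
    totals = defaultdict(int)
    for ranking in ballots:
        for place, book in enumerate(ranking):
            totals[book] += 10 - place
    # Sort books by aggregate score, highest first.
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Three entrants each rank the same three books.
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
]
print(aggregate_ranks(ballots))  # → [('A', 29), ('B', 27), ('C', 25)]
```

Publishing the full totals, not just the order, is what lets readers see whether a winner was unanimous or barely ahead of the runner-up.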
Optional extras
1. Admin evaluates entrants to exclude low quality books.
2. Entry fee from entrants (to be used on paid advertising for competition).
3. Title/Cover/Blurb competition (browse all books, vote for top 3 in each category).
4. Require all entrants to promote the competition (or get cut).
5. Require entrants post at least one Goodreads review (or get cut).
6. Entrants who submit Goodreads reviews receive extra voting points.
7. Preferential admission to future contests for authors that reviewed past entrants.
8. Coordinated period of discounting/giving away ebooks when ranks announced. This could be completely voluntary. Functions like a free bookbub deal.
This approach is solely focused on collectively ranking books from best to worst within subgenre groups to help prospective readers locate higher quality books. The full aggregate scores would indicate the range in quality between each book in the list. A subgroup where everyone unanimously agrees that book A is the best will serve as a stronger endorsement than one where book A barely outscores book B.
All constructive feedback is lost in this model, but people can find critique partners elsewhere. Entrants that end up with a low rank at least get an opportunity to browse books in a similar subgenre for free, and could use that to improve their writing. Announcing only the top three books in each subgenre group is an option if entrants feel the risk of a low rank score discourages them from participating.
If admin filters the books for basic quality, then even entrants at the bottom of the ranks still get a small mark of approval. Based on my experience surveying the title/cover/blurb/sample chapter of 188 entries in SPSFC4, I believe I could handle this prefiltering workload solo to set a subjective baseline for quality. I would have excluded only a handful from SPSFC4. Admin rejections could be kept private.
Rather than shoehorning entries into other subgroups to meet minimum group size requirements, the entries in an undersized subgroup could be held over to the next round of the competition until more compatible entries accumulate. The accumulating entrants of subgroups could be made public so that other authors could use this to decide if they want to enter. Group size could be rigidly set this way (7 is a nice number) and judging could begin as soon as enough entries qualify.
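The hold-over idea above is just a per-subgenre waiting queue that launches a judging group once it fills. A minimal sketch, assuming entries are matched on subgenre alone (a real version would also compare the tone scales) and using the suggested rigid size of 7:

```python
GROUP_SIZE = 7  # the rigid group size suggested above

class SubgroupPool:
    """Hold entries per subgenre until enough accumulate to judge."""

    def __init__(self):
        self.pending = {}  # subgenre -> list of waiting entries

    def add_entry(self, subgenre, title):
        """Queue an entry; return a full judging group once ready."""
        queue = self.pending.setdefault(subgenre, [])
        queue.append(title)
        if len(queue) >= GROUP_SIZE:
            # Launch the oldest GROUP_SIZE entries; keep any overflow waiting.
            group, self.pending[subgenre] = queue[:GROUP_SIZE], queue[GROUP_SIZE:]
            return group
        return None  # still accumulating; the count could be made public

pool = SubgroupPool()
for i in range(6):
    pool.add_entry("space opera", f"Book {i}")  # returns None, still waiting
group = pool.add_entry("space opera", "Book 6")
print(group)  # the seventh entry triggers a full judging group
```

Publishing the pending counts, as suggested, would let authors see which queues are close to launching before they enter.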
This way, the competition doesn’t need to be run simultaneously, though releasing all the results at the same time might make it easier to generate a promotional buzz. Alternatively, a website/social media profile which regularly releases a few ranked lists of different subgenres could generate consistent traffic and might encourage more readers to try books outside their usual tastes. A few subgenre groups running every month would be more sustainable to manage than dozens all running at the same time. Staggering judging groups also gives admin more capacity to refine the model over time versus a single big annual cycle. If the contest grows bigger, managing the different subgenres could be farmed out to additional admins. The competition could expand this way to cover all indie genres, not just sci-fi.
Entrants might dislike the idea of other authors in their subgroup being given complete freedom to decide their ranking of other entries. In response I would highlight the lack of transparency of judging in current contests like SPSFC and SPFBO where we must trust that the admins properly screen applicants and oversee their judgement process. Often getting through the first round of SPSFC depends on the luck of having your subgenre end up in a pool of judges who enjoy it (I was cut in round 1 in SPSFC3 but was on track to survive round 1 in SPSFC4 since my second judging group had a lot of biology fans). I believe the keystone awards would be vastly superior in this respect.
Judging is an act of discrimination (in the virtuous sense of the word) and can never be impartial since partiality is the point of the exercise. A book competition seeks to restrain prejudice (the act of judging a book by its cover, or worse, by its author).
Gaming this system would be costly but not impossible. An author could get their author friends to select the same subgenre to be put in the same subgroup, then agree to all vote for one of them to “win” the ranking. Larger subgroups would make this strategy harder to pull off. This might become more of an issue if winning a high rank starts to have more monetary value. A segmented, continuous system should be able to deal with such issues as they arise better than a large annual cycle.
Happy for feedback on this streamlined model. The logistical simplicity means I could launch it within the next month, ideally on a small test run first with a single subgenre group.
Any suggestions of a big enough SFF subgenre to easily gather 5-10 authors for a first try? Email me at Shane.simonsen@icloud.com if you want to nominate your book for consideration, complete with a description of the subgenres it falls into, though I will need to follow up with a more detailed application form later.
I like this streamlined format. You could always have a more open “post-season” where critiques are swapped after the competition, either formally like you had set up before or informally in some kind of chat. I also think the idea of regular mini-cycles is good, perhaps still on a set schedule like monthly, though I think the first 1 or 2 should be big annual style ones to give enough inertia to the project.
This is great. I would be willing to volunteer time to help establish a contest format like this, especially if contests covering many genres were a real goal. To me, it seems like a quality-evaluating format that worked and gathered street cred for its value would fill a gaping hole in the lit scene.
Might there be ways to introduce a supplementary mechanism aiming for real exclusivity... a top prize awarded only by an exclusive committee to books of supreme quality, on no particular time schedule?
- goucampo