Puzzle Development

Steps

  1. Write meta
  2. Testsolve meta (at least internally)
  3. Hand out puzzle answers to constructors
  4. Constructors write puzzles
  5. Testsolve puzzles
  6. Revise puzzles based on testsolve feedback
  7. Repeat from Step 5

Suggestions

  • Get many puzzle-writers on board early; be prepared to write more puzzles than are needed, so that there are back-ups and the best-of-the-best can be selected for revision.
  • Be sure to distinguish constructors from editors; people can do both, but no one should edit their own puzzles. Corollary: if the editors also want to construct, have at least two of them, or the lone editor should at least be open to criticism from others on their own constructions.
  • Distinguish between internal and external test-solving. Internal test-solving can be useful, especially for things like checking whether a meta idea is solvable before going down the rabbit hole of finalizing the answers and writing individual puzzles, but you want external feedback on individual puzzles and the hunt as a whole for a greater diversity of opinion.
    • Where do you find external testsolvers? Lots of places: MIT Mystery Hunt teams, the National Puzzlers' League, BAPHLers who can't make it on the date of the event... (the latter are especially good for a dry run)
    • Solicit testsolvers with a variety of skill levels and areas of expertise. Maybe some testsolvers get horribly stuck on your sudokakuro variant, but you can expect that most teams will have someone who can tackle the logic puzzles.
    • Get testsolving feedback from both solo solvers and groups. Maybe you’ll find that a puzzle requires too much multitasking to solve alone, but another one is impractical for multiple solvers to crowd around.
    • You will go through multiple rounds of external testsolving, and you need fresh testsolvers for each round, so track who has already been spoiled on each puzzle (a spoilage-tracking sketch appears after this list).
  • Check for redundant clues, phrases, or mechanics across multiple puzzles (a rough word-overlap script appears after this list). Several editors should test-solve and review every puzzle in full.
    • Check the diversity and distribution of difficulty as well. Ideally all puzzles would take roughly the same amount of time, so as not to penalize teams that tackle the hard ones first; in practice some puzzles will be more difficult than others, but you don’t want too wide a spread (a sketch for flagging solve-time outliers appears after this list).
      • And try to spread out the hard ones if you can. In BAPHL 10, we inadvertently put the hardest puzzle and two of the puzzles requiring the most information-gathering at the same location (Fort Worth). In our case, we couldn’t have moved some of the puzzles to different rounds, but we should have worked more on making one or two of them shorter.
  • When reviewing or test-solving, solve puzzles completely to find any errata; do not just solve to get the answer.
  • Bounce puzzle ideas off editors and other writers before beginning a first draft, so untenable ideas don’t become time sinks.
    • Try a dummy draft to see if your construction is feasible.
    • Beware the “I’ve wanted to write this particular puzzle for a while” puzzle, which is especially common among less experienced writers. The idea may be good in a vacuum, but for a hunt such as BAPHL, puzzle ideas that arise organically from their context (the hunt’s theme and scale, the assigned answer word, site-specific information, etc.) will fit the hunt more readily than ideas formed out of context.
      • On the other hand, if you’ve had lots of potential puzzle ideas floating in your head for a while, picking one that happens to fit the context well may work.
  • Expect each puzzle to go through the write-test-revise cycle multiple times. Accordingly, writers should complete first drafts well in advance, because each testsolve and revision will take time to complete.
  • For testsolving, ask for specific feedback as well as general impressions. Solving time (including the number of people), perceived difficulty, enjoyment, and potential errors should be among the feedback you solicit from your testsolvers (a sample report structure appears after this list).
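
Tracking testsolver freshness (see the bullet on external testsolving rounds above): a minimal Python sketch of recording who has already been spoiled on which puzzle, so each round can draw from unspoiled solvers. The names, data layout, and helpers here are all hypothetical, not part of any existing BAPHL tooling.

    seen = {}  # puzzle title -> set of testsolvers who have already seen it

    def record_round(puzzle, solvers):
        """Mark these solvers as spoiled on this puzzle."""
        seen.setdefault(puzzle, set()).update(solvers)

    def fresh_solvers(puzzle, pool):
        """Return the solvers in the pool who have not yet seen the puzzle."""
        return [s for s in pool if s not in seen.get(puzzle, set())]

    record_round("Sudokakuro", ["Alice", "Bob"])
    print(fresh_solvers("Sudokakuro", ["Alice", "Bob", "Carol"]))  # ['Carol']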
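
Checking for redundancy: a crude Python sketch that surfaces words recurring across drafts, as a first pass before editors review every puzzle in full. It assumes each draft is a plain-text file; the file names are made up, and the stopword list would need expanding in practice.

    import re
    from collections import defaultdict

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

    def content_words(path):
        """Return the distinct non-stopword words in a draft file."""
        with open(path, encoding="utf-8") as f:
            return set(re.findall(r"[a-z']+", f.read().lower())) - STOPWORDS

    drafts = ["meta.txt", "crossword.txt", "sudokakuro.txt"]  # hypothetical
    appears_in = defaultdict(list)
    for path in drafts:
        for word in content_words(path):
            appears_in[word].append(path)

    for word, paths in sorted(appears_in.items()):
        if len(paths) > 1:
            print(f"{word!r} appears in: {', '.join(paths)}")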
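
Watching the difficulty spread: a quick Python sketch that averages testsolve times per puzzle and flags outliers more than one standard deviation from the overall mean. The puzzle names and times are placeholder data.

    from statistics import mean, stdev

    times = {  # puzzle -> testsolve times in minutes (hypothetical data)
        "Word Search": [20, 25, 15],
        "Crossword": [45, 50, 40],
        "Sudokakuro": [120, 150, 95],
    }

    averages = {puzzle: mean(ts) for puzzle, ts in times.items()}
    overall, spread = mean(averages.values()), stdev(averages.values())
    for puzzle, avg in sorted(averages.items(), key=lambda kv: kv[1]):
        flag = "  <-- check this one" if abs(avg - overall) > spread else ""
        print(f"{puzzle}: {avg:.0f} min{flag}")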
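
Structuring feedback: one possible shape for a testsolve report covering the specific fields suggested above (solving time, head count, difficulty, enjoyment, errors). Sketched as a Python dataclass; the field names are my own invention.

    from dataclasses import dataclass, field

    @dataclass
    class TestsolveReport:
        puzzle: str
        solvers: int                 # how many people worked on it
        minutes: int                 # total solving time
        difficulty: int              # perceived difficulty, e.g. 1-5
        enjoyment: int               # e.g. 1-5
        errors: list = field(default_factory=list)  # suspected errata
        notes: str = ""              # general impressions

    report = TestsolveReport("Sudokakuro", solvers=2, minutes=150,
                             difficulty=5, enjoyment=3,
                             errors=["clue 14 seems ambiguous"])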