First Lesson: Nobody reads the documentation! It's true: everyone praises the documentation, but almost no one reads it.
Many people read only the examples (which they expect to be very specific to their situation!).
Others complain about missing components that are present: “Why doesn't the JATS Tag Library have an index?” (Answer: It does.)
Others read deep meaning into the simplest phrases.
Second Lesson: XML is not self-documenting, or “Why the Apple technique of no documentation does not work”
Nobody uses the Index
Assumptions are made based on element/attribute names
To find the structure/semantics you want
scan the list of elements
pick one that sounds like what you need
if the element is allowed in your context, you're golden (as the sketch below illustrates)
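A hypothetical fragment (the element names are real JATS; the scenario and attribute values are illustrative) showing how a guess can validate and still be wrong: a tagger hunting for the publication date spots <date>, finds it is allowed inside <history>, and stops there, even though the Tag Library intends <pub-date> in <article-meta> for that purpose.

    <!-- The guess: <date> sounds right and is allowed inside <history> -->
    <history>
      <date>
        <day>14</day><month>3</month><year>2011</year>
      </date>
    </history>

    <!-- What the Tag Library actually intends for a publication date -->
    <pub-date publication-format="print">
      <day>14</day><month>3</month><year>2011</year>
    </pub-date>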
There are many and varied reasons why semantic guessing might not work
Names are obscure (<csd>, as part of an address)
Definitions do not adequately disambiguate (<pub-no> and <doc-no>)
There is no match; you are looking for a best approximation (<surname> and <given-names>), as sketched below
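A hypothetical fragment illustrating the best-approximation problem: a contributor with a single-part (mononym) name has neither a surname nor given names, yet JATS <name> leads with a required <surname>, so <surname> becomes the least-bad choice.

    <contrib contrib-type="author">
      <name>
        <!-- A mononym: neither tag is truly right, so <surname>
             serves as the best approximation -->
        <surname>Suharto</surname>
      </name>
    </contrib>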
To misquote Liam Quin only slightly: “Documentation failures have been attributed to users assuming that they understood the content of an element when (perhaps) it had a misleading name.”
Nobody reads the definitions
<on-behalf-of> (see the fragment below)
JATS <supplementary-material>
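A hypothetical fragment (the names are invented) showing why the definition matters: the JATS definition of <on-behalf-of> says it records the group on whose behalf a contributor acted, which is not an affiliation, a distinction invisible to anyone who skipped the definition and guessed from the name.

    <contrib contrib-type="author">
      <name><surname>Jones</surname><given-names>Pat</given-names></name>
      <!-- Not an affiliation: the group on whose behalf the author wrote -->
      <on-behalf-of>for the Example Study Group</on-behalf-of>
    </contrib>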
Inconvenient Truth #3: Creation, maintenance, and governance are heavy loads. We all realize that it has to be somebody's responsibility to write and maintain documentation. But I have a day job, don't you?
Inconvenient Truth #4: Documentation costs money
If a schema takes 2 weeks to write and test, the documentation/samples will take 3-5 months to write and proof
Tag sets change over time, if they are being used
Documentation is harder to keep in sync than to create
Did anybody budget for maintaining this?
When should documentation be performed? The only wrong answer: after everyone who created the document grammar has gone
Inconvenient Truth #5: How do you measure quality? Or, to put it another way: “What is good documentation?”
There are some vague general rules that apply to documenting a tag set:
Don't describe processing; nothing is as ephemeral. But that is the first thing people ask: “What does it do?”
Don't describe format, unless the format is the point (as with tables); the sketch below contrasts the two approaches
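To make the contrast concrete, here is a sketch (both definitions are invented wordings; <disp-quote> is a real JATS element):

    Processing-oriented (ephemeral; avoid):
      <disp-quote>  Set off from the text, indented 2 ems, in 9-point type.

    Meaning-oriented (durable):
      <disp-quote>  A quotation the author sets apart from the running text,
                    however a given product chooses to display it.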
But that does not really answer the question: how do you measure good documentation? Good documentation may expose vocabulary flaws, but the reverse does not hold. Programs can be tested; documentation, not so much. Usability testing can help, with real users in real situations. So get your users involved!