
Conference Recap: DITA 2007 East
2008, Q1 (June 18, 2008)
Meeting Recap
  • What: DITA 2007 East Conference
  • When: October 4-6, 2007

by Michael Harvey, Carolina Chapter President and Associate Fellow

At the STC Carolina meeting in September, I won admission to the DITA 2007 East Conference in Raleigh on October 4-6, 2007. Over 100 DITA users from all over North America assembled at the McKimmon Conference Center to network and to attend sessions created “to leverage the power of the Darwin Information Typing Architecture OASIS Standard.” On Thursday and Friday, sessions began at 8:00 a.m. and ended at 6:15 p.m. On Saturday, sessions started at 8:55 a.m. and ended around 2:30 p.m.

The keynote session on Thursday, “Understanding and Communicating the Financial Impact of XML and DITA,” was presented by Amber Swope of JustSystems. Amber said that if you plan to translate documentation, the financial argument for implementing XML is straightforward. DITA allows you to focus on a single, money-saving way to describe tasks, concepts, and references and apply it as needed. The DITA Standard is supported by OASIS (Organization for the Advancement of Structured Information Standards), a not-for-profit consortium that drives the development, convergence, and adoption of open standards. Diverse companies from all over the world are producing content that they conceivably could share with each other. Implementing DITA in a publishing workflow system reduces workload and costs in each step of the flow. DITA can identify changed content precisely, and so you can save 40-70% in costs associated with information organization and revision. Amber provided metrics to back up her argument.
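The three information types Amber mentioned each have their own topic structure in DITA. As a minimal sketch (the topic id, title, and step text here are hypothetical, but the element names come from the DITA standard), a task topic looks like this:

```xml
<!-- Minimal DITA task topic: a single, structured way to describe a
     procedure that can be reused across deliverables and translations -->
<task id="replace_filter">
  <title>Replacing the filter</title>
  <taskbody>
    <prereq>Power off the unit before opening the housing.</prereq>
    <steps>
      <step><cmd>Open the filter housing.</cmd></step>
      <step><cmd>Remove the old filter and insert the new one.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Concept and reference topics follow the same pattern with `<conbody>` and `<refbody>` in place of `<taskbody>`, which is what makes a uniform, translation-friendly workflow possible.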

I heard Tim Grantham of Thermo Fisher Scientific speak about “Agile Content Development” (ACD). Thermo Fisher builds specialized scientific equipment such as mass spectrometers and has $9B in annual sales, 30K employees, and 350K customers. Tim described an iterative content development methodology based on a periodic publishing model. He compared the methodology to what Charles Dickens did: Dickens had a general idea of how his story would go; he published a chapter at a time, got feedback from readers, and changed the story in response to the feedback. With ACD, you produce a slice of the product at a time: code, test, document, debug, iterate. Your content management system is critical to the success of the process: it must support authoring, editing, validating, and approving topics. DITA facilitates automatic measurement of quality across the set of topics.

After lunch on Thursday, Bernard Aschwanden of Bright Path Solutions talked about “What Authors Need to Understand about DITA Authoring.” His key points were as follows:
  • Understand the end goal — why are you repurposing content?
  • DITA changes how you need to write
  • DITA is not going to solve all of your problems
  • Get familiar with the entire content development process, from topic creation to publishing
  • DITA is a data model that can be put to good use — a tool like any other
  • Tools can make your life easier
  • The more training you get, the easier it is
  • Learn enough to challenge the vendors and ask them questions
  • Learn from others
  • Learn the difference between data and metadata
  • Learn topic-based authoring
  • Teach consistency between authors to facilitate content reuse
  • Implement minimalism
  • Learn how to work within the rules and try not to specialize
  • Write for reuse — chunk and be agnostic
  • Reuse is wonderful, but you have to be able to find the topics
  • Avoid dependencies between files and inside files
  • Understand DITA maps
  • Spend time planning your map architecture, then write topics
  • Separate topic writing from assembling topics into a map
  • Relationship tables can help when it comes to related links
  • Do a trial run with real documentation
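Bernard's points about maps and relationship tables fit together in a single file. As a sketch (the file names and titles below are hypothetical), a DITA map assembles standalone topics and uses a relationship table to generate related links, so the topics themselves stay free of hard-coded cross-references:

```xml
<!-- Hypothetical map: topics are written separately, then assembled here.
     The reltable row links the concept and the task to each other as
     related links at publishing time. -->
<map>
  <title>Filter Maintenance</title>
  <topicref href="concepts/filter_overview.dita" type="concept"/>
  <topicref href="tasks/replace_filter.dita" type="task"/>
  <reltable>
    <relrow>
      <relcell><topicref href="concepts/filter_overview.dita"/></relcell>
      <relcell><topicref href="tasks/replace_filter.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

Keeping the relationships in the map, rather than in the topics, is what makes the "avoid dependencies between files" advice practical.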

Ghada Captan and Julie Waxgiser of Thomson Financial gave a presentation about “Moving from Narrative to Topic-based Writing.” Their readers are traders on the stock exchange who require quick access to detailed, accurate, and specialized information and senior bankers who don’t need as much detail but expect reliable and timely reports. Captan and Waxgiser explained how they implemented their DITA-based system. Their first step was to build an information model and then thoroughly inventory existing content, identifying common types. They established a content model for each information type, selected a prototype project, and began moving information into the system. Theirs was an iterative process, allowing for feedback and fine-tuning along the way. Their handout showed a relevant example: a problem alert about estimated tax payments that may be lost for 13 states. The contrast between the narrative style and the structured format was striking — irrelevant content was eliminated, the organization was clearer, and the required action was more crisply provided.

Robert Anderson of IBM gave a high-level but technically dazzling presentation about “Installing the DITA Open Toolkit: What Every DITA User Needs to Know.” He showed where to go on the web to download the toolkit, what pitfalls to avoid, and how to install it without hassles. Emphasizing the importance of starting with an information map, he gave a demo using Arbortext to edit the XML code and set things up quickly.

After a break, W. Eliot Kimber of Really Strategies explained “How DITA Could Be Useful to Publishers.” There are similarities between the publishing business and technical documentation organizations: scarce skilled labor, pressure to reduce cycle time and to revise and repurpose, increasing use of non-text media, and competition with community-created content (for example, blogs). DITA is a viable solution for modular information when you can compromise on typographic perfection. Consider travel guides: they are inherently modular and time-sensitive. DITA makes it easy to buy and sell information assets, and standards-based interchange relaxes the barriers to entry — you don’t need a full publishing infrastructure to supply high-quality content.

Friday’s keynote presentation, “Migrating to DITA: Lessons Learned,” was given by Don Bridges of Data Conversion Labs. He began by emphasizing “two naked realities”: the best data conversion is the one you avoid, and only clairvoyants author content with the awareness that it will one day be converted. Based on his company's experience with over 15 DITA projects and 500 XML migration projects, he provided the following lessons:
  • You don’t know what you don’t know — go out and learn as much about XML and DITA as you can before you start
  • Set expectations accurately
  • Calculate ROI honestly
  • Incorporate the technology that addresses your requirement — too many forget that the requirements should drive the solution, not the other way around
  • Get users involved in the decision and implementation process
  • Get IT involved early
  • Be sure your process scales
  • Understand and manage internal resistance to change
  • Get cleaned up — harmonize similar content to leverage reuse
  • Find a guide who has been there before — follow her or his advice

After a break, Jim Early of Flatirons Solutions Corporation spoke about “The Future of DITA.” The current business landscape emphasizes standardized content models that are reliable and extensible. There is a new mandate for collaboration, whether across geographically distributed teams or across different organizations at the same site. Content management systems have become critical elements of the IT infrastructure, enabling more efficient workflows and getting the right information to the right people at the right time. Flatirons is seeing a lot of interest in migrating from other XML standards to DITA in the finance, medical, and aerospace industries. DITA will need to grow, moving content models beyond the constraints of software and hardware technical publications. Jim posed the following ideas that should be pursued in the coming years:
  • Create a more generalized “base” topic that more easily permits specialized content models that currently don’t fit into the DITA topic model
  • Open content models to allow more flexibility in specialization
  • Enable content model extensions

France Baril of IXIASOFT talked about “Reuse Strategies and the Mechanisms That Support Them.” They were as follows:
  • Support multiple output types: DTD > XML > XSLT > various output streams — she went into specifics about how to achieve web displays, displays on a handheld device, PDF, and so on
  • Topic reuse in different projects — do not have topics depend on one another; they must stand alone
  • Conditional text and filtering — there are three basic filtering attributes: audience, product, and platform. Don’t define conditions as you go
  • Systematic conditional text and filtering — use a rich set of semantic tags, not conditional attributes on the same set of tags.
  • Content references (conref) — this can be a dangerous strategy if misused; keep reusable fragments in separate files
  • Variables for words and phrases — but don’t go overboard
  • Automatic content creation
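Two of the mechanisms France described, filtering attributes and conref, can be sketched in a single topic fragment. The file names, ids, and text below are hypothetical; the attributes (`audience`, `platform`, `conref`) are the standard DITA ones she named:

```xml
<!-- Hypothetical topic fragment showing two reuse mechanisms:
     conditional filtering attributes and a content reference -->
<topic id="install_notes">
  <title>Installation notes</title>
  <body>
    <!-- Kept or dropped at build time by a filter file (ditaval) -->
    <p audience="administrator">Run the installer with elevated privileges.</p>
    <p platform="linux">Unpack the archive with <cmdname>tar</cmdname>.</p>
    <!-- Pull a shared warning from a separate file of reusable fragments,
         per France's advice to keep conref targets out of working topics -->
    <p conref="shared/warnings.dita#warnings/power_off"/>
  </body>
</topic>
```

Her caution about defining conditions up front applies to the attribute values here: `administrator` and `linux` only work as filters if every author uses the same controlled vocabulary.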

Glenn Emerson of Xerox Services shared his war story about implementing “DITA in a Mixed Environment.” His team is responsible for over 175 deliverables encompassing reference documentation, service documentation and training material, sales training material, and eLearning for multiple audiences. They had unstructured FrameMaker files and legacy SGML that they converted into DITA-based XML. The SGML content was easier to convert because it was already structured, but the rest of it was not. It was definitely a cautionary tale.

Robert Kimm of Medtronic explained “How Medtronic Added Value for Customers through DITA-Enabled Multi-Channel Communications.” Kimm and his four-and-a-half-person team document implantable heart devices, patient management tools, and so on. They started with seven deliverables, 850-1,000 pages in all, and the content had to be produced in multiple outputs. His is a regulated industry: his documentation has to be approved by the FDA. He stepped through the process of implementing DITA-based XML documentation using XMetaL as an authoring tool and the DITA Open Toolkit. The initial information typing was critical. The entire process, from proposal to conversion, took 18 months.

At the last session that I attended on Friday, Joe Gollner of Stilo International talked about “Accelerating DITA.” Gollner’s company deals with the aerospace industry and the S1000D standard for managing technical content. One hundred years ago, everything that we considered “data” was kept in documents — in ledgers and journals. At some point in the 20th century, data was extracted from documents and relegated to databases. With XML, we’re at a point where we are reintegrating data and documentation. To “accelerate DITA,” content management systems should permit precise planning, the easy migration of legacy content, and smooth movement into production. Gollner shared his ideas about the best information flow to accomplish this.

Saturday morning, Robert Anderson gave an online demonstration about how one can implement DITA “Specialization from Scratch.” In real time, he took base types from the DITA Open Toolkit and created specializations, showing the parent/child relationships between types.
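The parent/child relationships Anderson demonstrated are recorded in DITA's `class` attribute, which lists each element's ancestry so that generic processing can fall back to the base type. As a sketch, here is what a hypothetical `<recipe>` topic specialized from `<task>` might look like once the DTD work is done (the recipe type and its element names are invented for illustration; the `class` syntax is the standard one):

```xml
<!-- Hypothetical <recipe> specialized from <task>: each class attribute
     traces the element back through task to the base topic vocabulary,
     so a processor that knows nothing about recipes can still render it -->
<recipe id="pancakes" class="- topic/topic task/task recipe/recipe ">
  <title class="- topic/title ">Pancakes</title>
  <recipebody class="- topic/body task/taskbody recipe/recipebody ">
    <steps class="- topic/ol task/steps ">
      <step class="- topic/li task/step ">
        <cmd class="- topic/ph task/cmd ">Mix the batter.</cmd>
      </step>
    </steps>
  </recipebody>
</recipe>
```

In authored files the `class` attributes are normally supplied as DTD defaults rather than typed by hand, which is the machinery Anderson built up live from the toolkit's base types.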

One of my former colleagues from EMC, Paul Masalsky, presented “Enterprise XML Authoring with EMC Documentum’s Technical Publications Solution.” He showed us how easy it was to manage information workflow and re-brand content with the solution. For more information about EMC's solution, visit the EMC press release page.

It was an informative conference, and I am grateful to have had a chance to attend.


Michael can be reached at president at stc-carolina dot org.
