Post by Paul Oldfield:
> However, a topic that never seemed to reach consensus, or even
> be given a satisfactory answer, IMHO, was the question of what
> problem or problems traceability was trying to solve.
The main problem(s) it addresses are:
1. Change Impact Analysis
-------------------------
This is the main/biggest reason (though not the most frequently cited).
Trace from requirements to all artifacts and back to assess
the impact and scope of change and estimate its effort. Used to
tell which sets of artifacts might be impacted, and for large
systems, to identify which sets of people need to assess the
impacts to which sets of artifacts.
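To make that concrete, a trace map is essentially a pair of
many-to-many mappings queried in both directions: requirement to
artifacts (to scope a change) and artifact to requirements (to see
what else a shared artifact ripples into). A minimal sketch in
Python -- the requirement IDs, file names, and team names are all
invented for illustration:

```python
# Hypothetical sketch of a requirement-to-artifact trace map used
# for change impact analysis. IDs, files, and teams are made up.
from collections import defaultdict

class TraceMap:
    def __init__(self):
        self.req_to_artifacts = defaultdict(set)   # forward traces
        self.artifact_to_reqs = defaultdict(set)   # backward traces
        self.artifact_owner = {}                   # which team owns what

    def link(self, req, artifact, team=None):
        self.req_to_artifacts[req].add(artifact)
        self.artifact_to_reqs[artifact].add(req)
        if team:
            self.artifact_owner[artifact] = team

    def impact_of_change(self, req):
        """Artifacts (and hence teams) touched by a change to 'req'."""
        artifacts = self.req_to_artifacts[req]
        teams = {self.artifact_owner[a] for a in artifacts
                 if a in self.artifact_owner}
        return artifacts, teams

    def requirements_behind(self, artifact):
        """Other requirements sharing this artifact -- the customer
        may not realize a request ripples into these."""
        return self.artifact_to_reqs[artifact]

tm = TraceMap()
tm.link("UC-12", "billing.py", team="billing-team")
tm.link("UC-12", "invoice_test.py", team="qa-team")
tm.link("UC-31", "billing.py", team="billing-team")

artifacts, teams = tm.impact_of_change("UC-12")
# billing.py is shared, so a change to UC-12 also ripples into UC-31
```

The same backward query is what surfaces the "other use-cases"
conversation with the customer described below.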
Also ... it's not just about analyzing the impact to the
code. It can also be about analyzing the impact to other
requirements. The customer may not realize that making a
particular request implies changes to existing functionality
of other use-cases. And before charging forward to make those
changes, it would be good to make the customer aware of those
things and ask whether they are what she really wants.
This can also be a VERY effective and persuasive technique for
negotiating down the scope of the project and for breaking down
some "stories" into smaller stories with separate priorities
on them.
Regarding impact to the code, you asked: why can't I just go to
the developer and say "how long do you think this will take you?"
* If you are a large project with teams of teams, how do you
know which team to go to in order to ask a developer to
estimate it?
If I have a team per component/subsystem and several of those
in the overall system, I need to do some amount of analysis
to first determine which components are impacted before I
can even find the right team from which to ask someone to
estimate the impact to their component(s).
* If you are a single team, and you don't do collective ownership
and instead do a very strict form of code-ownership, it is
commonplace to have a single "code owner" per class. That
person is the primary person qualified to estimate the impact
of changes to code that they own. Some initial analysis
needs to happen to determine which classes are impacted.
Ideally, I can just get the team together in a room, and
they have a nice "agile"/domain model that doesn't try to model
too much, and without much extra help they can very quickly
determine as a group which classes are impacted and possibly
by how much.
But if I don't have a lightweight model and instead try to
model everything, then it can be easy to rely on the model
to answer that question, instead of the "essential" knowledge
of the model captured in people's heads.
* If I'm waterfall based, and "large" then even if I don't have
a team per "subsystem/component", I might still very likely
have separate groups of people who do QA/Testing, and who
do builds/integration/CM, and who do release engineering.
So impact analysis might be needed to decide the level and
type of integration and testing warranted for a particular
change before the effort to do it could even be estimated.
[NOTE: The more I compartmentalize knowledge either across
the lifecycle or across the architecture, the more
I create isolated domains of specialized knowledge
and generalized ignorance (they know "their part of
the puzzle" but not how it fits into "the whole" or
further downstream).
]
So you end up needing to "farm out" the analysis to these
separate groups, and they use additional artifacts and
tools to make up for their gaps in knowledge about the
"whole shmeer" (much easier than conversing :-)
* If I have third-party/vendor products that I have to integrate
with my code, or even modify the source of, and if I have
subcontracted organizations I have to work with, I have a
contractual need to be able to determine when/if a request
impacts them so I can get their assessment/estimate and then
know how it impacts my integration/value-added-enhancement
efforts.
This is a very real problem a lot of folks have to face,
and it seems to be increasing as the trend for outsourcing
increases for farming out parts of a system (rather than the
whole thing), and ESPECIALLY for farming out parts of the
system lifecycle (e.g., the architects are in my group,
the programmers are outsourced from another organization,
the testers outsourced from yet another organization,
the integrators/packagers are in my organization, and the
business analysts are external consultants :-)
[NOTE: So once again we run into Conway's law of "architecture
follows organization". Fragmenting activities and
responsibilities to separate groups of people in separate
parts of the architecture and/or lifecycle fragments
communication and collaboration at the systems level and
impedes systems thinking and optimizing across the whole
instead of for the individual parts (especially when
the individual parts are entirely separate organizations)
]
* At a less grandiose level, the kind of traceability that the
version-control tool already knows how to provide (which I
think even agile teams typically make use of) helps us answer
questions like:
- who made that change?
- who has this file checked out?
- when did that change get integrated/merged?
These questions typically help us identify an individual that
we wish to collaborate with and reduce the time spent gathering
and analyzing the information needed to identify who that
individual is. They might help resolve checkout contention/locking
issues.
* And in the "old days", one needed to know UP FRONT which items
to ask to be checked out of the library by the librarian,
because people didn't do simultaneous update then (and few
tools supported it), and the human (as opposed to executable
software) librarian had to see the set of items you wanted
to check out in advance, to make sure they were all available,
and to ask whether you still wanted the remainder if any of
them weren't.
2. Product Conformance
----------------------
Functional configuration auditing and physical configuration
auditing, which require mapping test results to requirements
and auditing to ensure the product does what it says and the
requirements say what it does. This would come "automatically"
if I already had traceability from requirements through to all
artifacts (code+design+test).
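At its core, a conformance audit over such traces is just a
coverage check in both directions: every requirement maps to at
least one passing test, and every test traces back to some
requirement. A minimal sketch, with all IDs and results invented
for illustration:

```python
# Hypothetical sketch of a functional-conformance check over a
# test-to-requirement trace. All IDs and results are invented.
test_to_req = {
    "test_login_ok":      "REQ-1",
    "test_login_lockout": "REQ-2",
    "test_export_csv":    "REQ-3",
}
test_results = {
    "test_login_ok":      "pass",
    "test_login_lockout": "fail",
    "test_export_csv":    "pass",
}
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

covered = {req for test, req in test_to_req.items()
           if test_results.get(test) == "pass"}
untested = requirements - set(test_to_req.values())   # no test at all
failing  = requirements - covered - untested          # tested, not passing
orphans  = set(test_to_req.values()) - requirements   # tests w/ no req

# The audit report: REQ-4 is untested, REQ-2's test is failing.
```

The "orphans" set is the physical-audit half: work (tests, code)
that doesn't trace back to anything the requirements say.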
People who usually feel traceability adds no value whatsoever
seem to most frequently cite this problem/purpose as the goal
of traceability, and not the "impact analysis" reason. I think
the reason is that impact analysis actually ends up being more
about communication and coordination, and is less of an issue
for a single, small, colocated team than for a large project
with many teams distributed across:
- time (different parts of the lifecycle)
- space (different physical locations/timezones)
- functionality (different parts of the architecture)
- organization (different companies)
3. Project Accounting
---------------------
Configuration status accounting: who did what, when they did
it, where they did it, how they did it (and why). Think of
"change" as the currency or "unit" of accounting. Project
and program management want to be able to report status of
requested features/fixes/enhancements across one or more of:
- multiple products or product-lines
- multiple customers/markets
- multiple releases being concurrently developed/maintained
- multiple sites/install-bases supported and serviced in the field
- multiple variants (custom variations in functionality,
technology platform, or execution/operating environment for
different levels of support/service/funding agreements)
This kind of traceability is typically provided with a
change/request tracking tool that supports hierarchical
(parent->child) relationships between requests, and changes,
and the builds/codelines/releases they are delivered in. It
may also often require integration with the version control
tool for the build/codeline related status information.
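The parent->child roll-up such a tool provides can be sketched in
a few lines: a parent request is "done" only when all of its leaf
changes are done, and leaf changes can be grouped by the release
they are delivered in. Request IDs, releases, and statuses below
are invented for illustration:

```python
# Hypothetical sketch of hierarchical (parent -> child) request
# tracking with status roll-up, as a change/request tracking tool
# might provide. All IDs, releases, and statuses are invented.

class Request:
    def __init__(self, rid, release=None, status="open"):
        self.rid, self.release, self.status = rid, release, status
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def rolled_up_status(self):
        """A parent is 'done' only when all its children are done."""
        if not self.children:
            return self.status
        kids = [c.rolled_up_status() for c in self.children]
        return "done" if all(s == "done" for s in kids) else "open"

    def by_release(self, report=None):
        """Group leaf changes under this request by delivery release."""
        report = {} if report is None else report
        if not self.children:
            report.setdefault(self.release, []).append(self.rid)
        for c in self.children:
            c.by_release(report)
        return report

feature = Request("FEAT-7")
feature.add(Request("CHG-1", release="2.1", status="done"))
feature.add(Request("CHG-2", release="2.2", status="open"))

# FEAT-7 rolls up as still open, because CHG-2 is open
```

In a real tool the release/codeline column would come from the
version-control integration mentioned above.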
4. Process Compliance
---------------------
Ensure that proper regulations/practices were followed by
being able to provide objective evidence to those who will be
auditing the project. This involves things like:
* Ensuring no unauthorized changes were made.
* Ensuring various in-process data was collected, and is accurate,
and is actually being used/applied (e.g., ensure that inspections
are taking place for appropriate "kinds" of changes and that the
results are logged, and the data is used for quality improvement)
* Ensuring that maintenance/repair records are kept and can be
reported and include ALL reported instances of faults/defects and
not just those that were found by customers/users.
* Showing objective evidence justifying various project and
product cost/change accounting decisions, and showing that the
decisions were made by agreed-upon means and persons using
appropriately agreed-upon criteria and standards. For example:
- what was the reason for fixing this bug in this particular
release, but not in this other release?
- why did you defer implementation of this feature to the next
release/iteration instead of the current one?
- why did this particular change get built only for variants
A & B but not for variants C, D, and E of the product-line?
5. Mandatory Business Obligation
--------------------------------
It might be mandated by a customer, or a contractor you
are subcontracting for. It might be mandated by government
regulations for safety or health or fiduciary responsibility.
It might be mandated by industry standards, such that it isn't
strictly required, but if you want to be able to compete in a
particular market, it's sufficiently harder to do so without a
particular certification or accreditation that asks for such
traceability (among other things).
Ways of doing "Lean" Traceability
=================================
So if you end up needing it for any of these reasons (many
of which have to do with inherent complexities introduced
due to levels of scale for multiple teams, multiple customer
bases, multiple "owners", multiple organizations, multiple
projects/releases, multiple products/variants, etc.) how
can I make it as "agile" or "lean" as possible?
* Find out what kind(s) of traceability are being
requested/mandated; ask "the value question" several times
("if I give you that, what does it give you that you don't
have without it?"), and define the actual problem that needs
to be solved so you can set and manage expectations.
* Question whether it is really the most suitable means if you
are involving the customer and QA early on, collaborating
closely, and working in short, hyperfrequent iterations. See
if something less will suffice - maybe even suggest they
first try one iteration without it, to evaluate its
perceived necessity.
* If conformance is the only reason, do comprehensive testing and
make sure you know which tests correspond to which stories
(see other ideas for how to do this)
* Prioritize it. Treat it like a story. Have the customer
prioritize it against other stories. Maybe prioritize it as a
one-time cost of automating it for everything and then
maintaining it? Or, if doing it manually, prioritize and
estimate it per story.
* Use a change/request tracking tool
* Leverage your existing Version-Control tool
* Track at the right level/scope (feature/use-case/story
instead of individual reqts; and class/module/file instead
of individual methods or subroutines) or even higher-level
if possible (e.g. component-level instead of class-level).
Track at the coarsest-grained level you can get away with
and still meet the traceability 'requirements' and/or
add value to your impact analysis
* Use encapsulation and modularity (hierarchy) to localize
impact not just to code, but also for requirements. Maybe
even structure and refactor your requirements across
stories/features and track at the feature/story-level.
Make sure each "change" task corresponds to a story,
and if it doesn't, raise the issue to the project manager
or the customer and ask them which one, or else have
it separately estimated and prioritized (and break down
big stories into smaller ones with distinct priorities)
* Minimize artifacts (duh!)
* Use Dynamic traceability instead of Static traceability - see
Jane Huang's posting, also some of her research on traceability
and EBT -- event-based traceability, at:
+ http://re.cs.depaul.edu/publications.html
+ http://re.cs.depaul.edu/projects.html
+ http://icse.cs.iastate.edu/sabre/components.html
* If all else fails, use a traceability tool like DOORS or ReqPro.
Do everything in your power to avoid having to do it manually
without any automated assistance.
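The event-based idea above can be sketched as a small
publish/subscribe scheme: instead of maintaining a static trace
matrix, dependent artifacts register interest in a requirement,
and a change event is pushed to subscribers when it changes. This
is a toy illustration of the concept, not Huang's actual EBT
implementation; all names are invented:

```python
# Toy publish/subscribe sketch of the event-based (dynamic)
# traceability concept. Not the actual EBT system; names invented.
from collections import defaultdict

class EventBasedTrace:
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.log = []  # the event log doubles as a dynamic trace record

    def subscribe(self, req, artifact):
        """An artifact registers interest in a requirement."""
        self.subscribers[req].append(artifact)

    def publish_change(self, req, description):
        """Push a change event to subscribers; log it for auditing."""
        notified = list(self.subscribers[req])
        self.log.append((req, description, notified))
        return notified

ebt = EventBasedTrace()
ebt.subscribe("UC-12", "billing.py")
ebt.subscribe("UC-12", "test_billing.py")

impacted = ebt.publish_change("UC-12", "discount rule changed")
# subscribers to UC-12 are notified; the event is logged
```

The appeal is that the trace stays current as a side effect of
the events, rather than being a separate matrix to maintain.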
(Paul, If I missed any of your questions, let me know :-)
--
Brad Appleton <***@bradapp.net> www.bradapp.net
Software CM Patterns (www.scmpatterns.com)
Effective Teamwork, Practical Integration
"And miles to go before I sleep." -- Robert Frost
For more information about AM, visit the Agile Modeling Home Page at www.agilemodeling.com