Originally published in The Smalltalk Report, February 1997.
by Jan Steinman and Barbara Yates
What is it?
We've kept a watchful eye on the evolving Smalltalk tools business
for over a decade now. Ten years ago, it was simple: there were no
tools outside those provided by your Smalltalk vendor, so you built
your own.
As Smalltalk left the laboratories of the
"innovators" who were willing to build their own
tools and started being used by "early adopters" for "real work," it
quickly became apparent that the Smalltalk vendors did not have many
of the tools that software developers were used to having, such as
code management, documentation, testing, and metrics toolsets. First
to arrive were code management tools, such as
ENVY/Developer and Team/V.
Now Smalltalk is pushing beyond early adopters into the
"early majority" phase. MIS shops with tens or
hundreds of COBOL programmers are switching to Smalltalk. But they
are finding that code management is just the first part of the
toolset needed for large, corporate-wide projects.
Just as Object Technology International (OTI) addressed code
management with their popular ENVY/Developer (ED) system, they have
now addressed some of these team tool issues with ENVY/QA (EQA), a
suite of software quality assurance tools packaged in a flexible
framework. It consists of five major modules:
- Code Critic, an extensible source code analyzer,
- Code Metrics, an extensible metrics gathering and reporting tool,
- Code Coverage, an execution coverage analyzer,
- Code Publisher, a documentation formatting and exporting tool, and
- Code Formatter, a source code formatter (not available for VisualWorks).
The suite is available for IBM Smalltalk Professional, IBM
Smalltalk VisualAge Professional, and ParcPlace-Digitalk VisualWorks
with ED installed.
To get the broadest possible impression, we took a three-pronged
approach to reviewing the EQA framework and tools:
- we ran some of the tools on the EQA framework itself,
- we ran them on a public domain framework, and
- we ran them on some of our own tools and frameworks.
In some cases we could compare the results with the output from
our own Toolkit, since there is
some overlap between the two products.
EQA is delivered on CD-ROM in several pieces:
- an ENVY repository,
- numerous formatting files for the documentation publishing
parts of the product, and
- a number of patch files for ED users who have not been routinely incorporating the released patches or porting to new releases that supersede the patches.
The installation of the latest ED release is recommended; barring
that, there is at least one patch that must be
installed prior to using EQA. In our case we were already up-to-date
with the latest ED release, so installation was straightforward. If
you have not been keeping "up to rev" with ED, installation may be
more painful, as the required patch makes you start with a new image.
We installed EQA for VisualWorks 2.5.1 and ran the images on a
Power Macintosh 9500/120, a Power Macintosh 8100/80AV (both running
MacOS 7.5.3), an AST P30 running Windows NT 4.0, and a Cycle 5 (Sun
clone) running Solaris 2.4. We encountered no platform dependencies
or other platform-related problems.
Installation involved importing nine configuration maps from the
EQA library into our main repository, which took about an hour and a
quarter on a lightly loaded server and ethernet. Then we loaded the
top level EQA configuration map with its required maps into our
working image. There are options to load only parts of EQA into your
image by loading separate configuration maps, but we went for the
whole thing at once. Our image grew by about 1.5 MB with the addition
of EQA. The manual contains instructions for unloading various parts
of EQA, but we did not have a chance to verify that it unloads
without a hitch.
As with other tools in the ENVY family of products, some will find
it easy to criticize the user interface. (At one conference
birds-of-a-feather session on ED, a techie described his experience
as "I hated it until I loved it".) We think a lot of criticism stems
from a murky user interface that expects the user to understand the
guts of how ED works in order to make sense of the UI.
In the case of EQA, the user really doesn't have to understand how
the tools do their work in order to grasp the UI, but that does not
rescue ENVY's reputation for difficult user interfaces. One must have
a great memory and incredible patience to cope with the modal dialogs
that are used to customize settings and set up the various tools
prior to running them. We experimented with customizing settings and
saving those settings and reloading them, and will summarize that
experience below as it relates to the major tools.
EQA does not make use of any VisualWorks-specific features such as
ApplicationModel and ApplicationWindow,
probably in the interest of dialect portability. This means that VW
users will not see the menu bars with pull-down menus that are
provided in the IBM Smalltalk versions of the tools. We would be
happy to live with that limitation if the modal dialog situation were
improved. Those modals cannot be resized and the user cannot change
multiple settings easily. Relief from these tedious dialogs is on the
top of our "improvement opportunity" list.
Some of the functions take a long time to complete, and EQA does
not fork processes to do its measures and reviews. The user gets a
progress dialog allowing him to stop the measure, but otherwise, the
image is out of commission for other work. In our experience, this
keeps tools from being used as often as they should. You might want
to have some other, non-Smalltalk task at hand before starting an EQA run.
From most ED development browsers, you can select a code element
and use the "tool" submenu (or pull-down menu) to run the Critic,
Metrics, or other tools on that code element. While this level of
integration might be expected, it was a pleasant surprise given the
difficulty we had achieving similar integration with our Toolkit.
Code Critic analyzes code components such as methods, classes,
applications and configuration maps for what the manual calls "common
problems." We recognized a number of these from the book
Style , but strangely, some of the advice that comes from
the criticisms is in direct contradiction to Smalltalk With
Style . Experienced ED users may notice that some Code Critic
measures are things that ED already reports in the Transcript, such
as the infamous "Warning 49" messages.
Some of Code Critic's checks include unused arguments, direct access
to instance variables, and use of constants ("magic values"). It
would be nice if Code Critic could be configured to ignore the use of
constants in class methods, since class methods are a common place to
define them.
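To illustrate the kinds of code these checks flag, here is a
contrived method of our own (not from EQA's documentation):

    widthInPoints: resolution
        "Code Critic would flag this method three times: the
        argument resolution is never used, the instance variable
        width is accessed directly rather than through an accessor,
        and 72 is a magic value."
        ^width * 72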
We thought it would be interesting to run Code Critic on the basic
framework classes of HotDraw, a constraint-based drawing framework
from the University of Illinois, Urbana-Champaign (UIUC) archives.
HotDraw is mature and fairly robust, and has been used as a
foundation for commercial applications. We selected our
subapplication called HotDrawFramework and selected
"tool -> review -> application..." from the menu, chose the
"All" radio button in the resulting modal dialog, and let it run.
We chose to use the default settings to see how much "noise" would
be generated in the report. It generates a lot of output that may be
confusing to beginners -- we think that EQA will be of most benefit
to more experienced Smalltalkers who can interpret the results,
modify settings, and filter out the noise as their project sets its
own QA standards.
For the various criticisms, EQA assigns a severity level of 1
(most severe) to 3. There are almost four dozen reviews, with 31 of
these at the method level. Reviews that might be considered "noise"
were things like 'Method: Could be cascaded' with severity 3 and
'Method: Sends System Method' severity 1. For certain "system"
classes (such as the ones EQA complained about for HotDraw), the
sending of "system" methods such as basicNew and
dependents is perfectly justified, but in another
application a brand-new Smalltalker could be using them improperly.
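For example, a conventional class-side instance creation method of
this sort (our illustration) sends basicNew quite legitimately, yet
still draws the severity 1 review:

    new
        "Class-side instance creation. The basicNew send is
        idiomatic here, but Code Critic reports it as 'Method:
        Sends System Method' all the same."
        ^self basicNew initialize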
EQA confuses literal symbols with message selectors, so you must
customize this measurement for your application to avoid noise
results. The "missing #yourself" criticism uses some odd assumptions
of when yourself is needed, and gave us some questionable results.
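For reference, the standard idiom the review is meant to protect
looks like this; without the trailing yourself, the method would
answer the argument of the last add: rather than the collection:

    defaultSizes
        "Answer a collection of default sizes. The yourself send
        ensures the collection itself is returned, since add:
        answers its argument."
        ^OrderedCollection new
            add: 10;
            add: 20;
            yourself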
You can look at a results report and turn off certain results,
saving them in what EQA calls an "ignore set" for use the next time
you run the Critic. When we tried this, we were amazed to find that
the ignore set was saved to a file in a VisualWorks-specific BOSS
format file, and wondered why it wasn't saved in the ENVY library in
a way that would be portable across supported Smalltalk dialects.
There doesn't seem to be a way to view an ignore set that you have
previously saved, nor any way to edit an ignore set. One "cockpit
error" we made was to accidentally turn off a particular review in
the list of problems it would look for, and then we couldn't find a
way to turn the review back on.
Results can be viewed in a window, sent to the printer, and
exported in tab-delimited format for import into a spreadsheet. The
results reports are very easy to read, and the option to hide
"in-range" results is a handy way to avoid clutter. The "description"
mode in the results window provides crucial information to help you
learn (especially for Code Metrics) what the measure means.
We ran Code Critic on the EQA framework app,
CcFramework, and all of its subapplications. It got a
lot of criticism for "unreferenced classes" (classes not visibly
referenced), but the advice suggests that abstract superclasses
should not be expected to be referenced, so we guess this is an
example of a "noise" result that belongs in an ignore set.
Code Metrics runs "measures" that return numerical results against
code elements. These measures have upper and lower thresholds;
measures outside this range are candidates for further review. As
with Code Critic, Code Metrics is accessible from most development
browsers via a menu selection. There are 43 different static metric
measures, with about half of those applying at the class level.
We ran Code Metrics on the same applications upon which we ran
Code Critic. We found that the descriptions and advice for Code
Metrics require more examination and understanding than the
information from Code Critic. For example, the Lorenz Complexity
measure carries a footnote reference to a complete book in the
manual, and without the book we weren't able to judge the meaning of
that metric result. We thought that the measure gave an unwarrantedly
high result (which is bad) for a particular class method in HotDraw.
Considerable effort might be needed to make good use of this metric.
Another class metric, "class coupling," appears quite valuable,
yet we found ourselves wanting more detail than the manual provides
about how the coupling was determined. A measure called "method
density" is the ratio of number of statements (as defined by the
compiler) over lines of code. Since method formatting practices will
greatly affect this measure, it is an example of a metric that will
make the most sense in the presence of a project-wide style guide. In
the HotDraw application for which we ran the metrics, a number of
methods were out of the default acceptable range of 1..8 because
their result was less than 1! Perhaps Code Formatter should first be
run before attempting to get meaningful "method density" numbers.
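To see how formatting alone moves this measure, compare two
renderings of the same two-statement method (our own illustration;
whether EQA counts the method pattern line is a detail we did not
verify):

    "Two statements on two lines of body: a density of about 1,
    at the bottom of the default 1..8 range."
    bounds: aRectangle
        origin := aRectangle origin.
        corner := aRectangle corner

    "The same two statements wrapped over four lines: a density
    of about 0.5, below the default lower bound."
    bounds: aRectangle
        origin :=
            aRectangle origin.
        corner :=
            aRectangle corner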
We were surprised that neither Code Critic nor Code Metrics could
be run against non-resident code. We could envision the value of
running EQA against code that was not loadable, or that one didn't
dare load until seeing how bad off it was!
We were also surprised that there were no metrics regarding
configuration management or version history. Since ED has this
information readily accessible, it seemed odd that it was not
exploited -- it would be very useful to have QA measures of how ED is
being used! We did port one of our version history-based metrics with
little trouble, as described below.
The Code Coverage tool "watches" execution during testing and
produces a report of code that was not executed during the test. Such
tools are important in assessing the completeness of test cases. At
the 1995 OOPSLA
workshop on Testing Smalltalk Applications, interest in test
coverage analysis was high, and we believe it will become
increasingly important as Smalltalk enters the mainstream. We are
happy to see OTI filling this need.
To do it justice in a review, we would have had to spend considerable
time running it on our regression tests for parts of
The Bytesmiths Toolkit. What
we'd like to do is save this topic for a future column, since tools
to support Smalltalk testing are rare, and deserve more attention
than we could devote right now.
The basic idea behind Code Coverage results is that tested
components of an application are hidden, and those that remain to be
tested are shown. Unfortunately, we seem to have misunderstood what
triggers the "watching" to record results. Instead of executing a
single method as the manual example indicates, we tried running the
Code Metrics after starting a "watch" on the top level
CcFramework application. We figured that certainly some
of the methods in the CtMeasure abstract class would be
exercised. We could not get the Code Coverage Browser to update its
status line message that 0% of the application was tested.
It may be that executing a "do it" is necessary to make the
watcher record which methods are being exercised. This suggests that
a testing tool that runs tests with the press of a button would need
some hooks into Code Coverage to yield the desired coverage results.
Code Publisher enables you to produce printed manuals in formats
such as LaTeX, MIF (FrameMaker Interchange Format), RTF (Microsoft's
Rich Text Format), HTML, and OTIML. We followed the guided tour in
the manual to produce an RTF format file for the API documentation of
one of the Code Publisher applications. The manual steps were easy to
follow. A file called "output.rtf" was generated. It would have been
nice to be able to name the output file.
Like most other software in the world, Code Publisher assumes that
the customer runs Microsoft Word. We don't, so we loaded the RTF file
into ClarisWorks to view the output. The output was readable, but
there were formatting problems with duplicated section headings
(e.g., SqaEtBrowserExtensionsSqaEtBrowserExtensions) and multiple
paragraphs where only one should be (due to carriage returns in the
class comment when it was originally entered). Some of these problems
may be because the Claris translator is not interpreting RTF
properly, or it may be because Code Publisher is using unspecified
features of RTF, but it points out the difficulties inherent in
export formats in general.
For EQA customers who don't have another document output tool,
Code Publisher will probably give you what you need, but we expected
more from the company that broke the "check-in, check-out" mold in
code management. Although it appears to be well crafted and is highly
customizable, Code Publisher is essentially a batch-oriented,
multi-format export facility that won't do anything to make the
documentation task easier, as we've
written about previously.
At the time of this review, we were undergoing an office move, and
so we were able to test EQA using only VisualWorks. Code Formatter is
only available under IBM Smalltalk, and so we were not able to try it
out. We hope to be able to report on it in a future column.
One of the nicer features of ED is the ease with which it is
extended. EQA continues that tradition with "an open and extensible
tools framework that lets you develop new QA tools easily." The ends
of the chapters on Code Critic and Code Metrics, for example, contain
a few paragraphs each on suggestions for adding one's own reviews or measures.
Our approach to determining the ease of extending the tool was to
port one of the existing metrics in our Toolkit, a measure we call
"code thrash", to EQA. Since we were porting a
metric for which we already had the measure algorithm implemented, we
were able to get an idea of what was involved in these aspects of
extending the tool:
- determining where to put the new measure,
- implementing the new subclass for the measure and making sure
all of the required methods were present,
- seeing how much guidance is provided in the EQA manual and the
online documentation (which consists of class comments).
The metrics framework is based on one class per aspect to be
measured, with divisions between the types of code element to be
measured -- subapplication, class, or method. The first decision we
made was which framework class to subclass, which was simple because
our metric was already implemented at the app/subapp level.
Then we looked at the manual and the class comments for several of
the framework classes to be sure we were overriding the required
superclass methods. We ran into a glitch or two here because the
manual was too brief in its guidance, telling us to simply be sure to
implement all methods in the protocol called "override mandatory". If
we had paid more attention to the example in the manual instead of
the advice, we would have found that we needed to implement
isMetric, also.
There was no class initialization method to override, but the
measure subclasses have two state variables that indicate whether the
measure is enabled and what its default properties are. These class
instance variables must not be nil, so we determined that we needed
to execute "MyMeasureSubclass resetProperties" to set them to useful
values. (Perhaps Code Critic should say something about lazy
initialization and ease of subclassing.)
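To give a feel for the shape of the extension, here is a condensed
sketch of our measure class. The class and selector names below are
ours, and the exact superclass and "override mandatory" selectors in
your EQA version may differ:

    "Subclass the framework's measure class. We show CtMeasure
    here, the abstract class named earlier; the actual superclass
    we used sat lower in the hierarchy, at the app/subapp level."
    CtMeasure subclass: #BsCodeThrashMeasure
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Bytesmiths-EQA-Extensions'

    "Class side: part of the 'override mandatory' protocol,
    including the isMetric method we initially overlooked."
    isMetric
        ^true

    "After loading, initialize the class instance variables that
    hold the enabled flag and default properties."
    BsCodeThrashMeasure resetProperties

The single instance method that does the real work -- computing the
ratio of method editions to methods from ED's version history -- is
omitted here for brevity.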
The entire effort of porting the code thrash metric from our
Toolkit to the EQA tool, to successfully running a report to exercise
the new measure, took about two hours. The new measure class consists
of 6 class methods and one instance method -- the one that does all
the work! We also extended SubApplication with one method.
What we learned from this first attempt at extending the metric
tool was that we could do a much more informative group of metrics by
concentrating at a different component level (for example, classes
and class fragments within a subapp) and that the raw number for the
metric needs a lot more experimentation and advice regarding upper
and lower thresholds than we could come up with in two hours. We also
saw that there are many aspects of the metrics part of EQA that we
did not have the opportunity to explore or exploit, such as traversal
classes and engine classes.
EQA is a carefully crafted suite of quality tools that should be
useful to any group that has found ED to be useful -- in other words,
any group doing team Smalltalk programming. However, it has some
rough edges and tacit assumptions that, while irritating, do not
diminish the value of the product.
The manual says it "is written for experienced users of
ENVY/Developer." There is no doubt that a good familiarity with the
ED browsers and an understanding of ED concepts is assumed in the EQA
suite. Each tool's chapter suggests how developers and managers might
use the tool in their work. We would add that there is a definite
time investment required to customize settings and make the various
reports applicable to your team's chosen coding guidelines. In
addition, beginning Smalltalkers should read the Smalltalk with
Style book and ask for their mentor's advice about the Critic
and Metrics results.
EQA is a deep product that can deliver immediate results to a
beginning user, but it will require a considerable investment in
understanding to use to its fullest potential.
1. In Crossing The Chasm, Geoffrey
Moore defines technology adoption in terms of market penetration.
Those who use a technology when it has less than 5% penetration are
"innovators," those who use it at 5% to 15% penetration are "early
adopters," and those who use it at 15% to 50% penetration are "early
2. We define "code thrash" as the ratio of
method editions to methods between any two app/subapp editions. This
is explained in more detail in
Exploiting Stability, The
Smalltalk Report, October 1995.
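(As a worked example: an application with 200 methods that
accumulates 50 method editions between two consecutive application
editions has a code thrash of 50/200 = 0.25.)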