In the early 1960's, decision tables enjoyed much the same place in programming
folk-lore as is now enjoyed by structured programming. Some amazing success
stories were recorded. But for most people, they failed to deliver the goods.
Perhaps this may be attributed to the performance of the early automatic
pre-processors. It may also be argued that the DP community was not yet ready
for the technology. Cope is a recently developed Cobol pre-processor that is
fully compatible with structured programming techniques. It combines clarity
of expression with efficient object code. The paper describes some interesting
applications and experiences.
Keywords: Programming Methodology, Business Applications, Cobol, Pre-processors,
Decision Tables, Structured Programming, Program Maintenance, Optimisation.
CR Categories: 4.0 (Documentation), 4.12, 4.22, 4.
Interest in decision tables blossomed in the early 1960's. Systems that could
automatically convert decision tables to computer programs have been available
since 1962 (Pollack 1962; General Electric Company, 1962). Typically, these
systems were reasonably efficient, although modern processors generate better
code.
By the mid-60's decision tables were attracting attention as an excellent means to
improve documentation and programmer productivity. Some success stories were
so sensational that it seemed decision tables must one day replace conventional
programming (Fisher, 1966).
In spite of these early successes, only a few determined programming shops use
decision tables today. Those that do, usually do it because of management
edict, rather than because the idea appeals to the programmers.
The documentation aspect of decision tables is still considered valuable, but
there must be reasons why tables are not easy to use. In the author's opinion,
what is basically a sound idea has not been properly packaged. More attention
should have been paid to human factors, especially those that determine how
much time the programmer will need to prepare his program and get it working.
Based on 10 years' experience of Cobol decision table processors, here are some
problems that I have encountered:
- The decision tables may have a different format from the remainder of the
source program, making it difficult to prepare the program as a single text.
- The programmer may have to memorise special rules concerning the maximum
length of sentences, continuation of lines, etc., not compatible with Cobol.
- During debugging, the programmer must refer to a Cobol program listing, his
input data and his test results. He may also need additional pre-processor
input and output listings, so that bugs in the Cobol program can be traced back
to erroneous tables. That is too many documents to fit on his desk!
- When the pre-processor detects errors, the compile step is often skipped.
So Cobol syntax checking has to wait until all table errors have been fixed,
needing extra turn-arounds.
- The Cobol user is interested in machine independence, but his pre-processor
may be available only for one particular machine.
- It may not be compatible with other pre-processors (e.g. for a DBMS), with
the use of program libraries, or with non-standard Cobol features.
- Problem solutions may become fragmented because they can only be expressed
as a series of tables linked by GO TO's.
- Another difficulty, of a potentially more serious kind, is that the
programmer may see that the code the processor generates is inferior to
what he could write by hand. He may have no way to improve it. He therefore
becomes dissatisfied with his own work. It is then a small step to hand
translation of the decision tables, and eventually to abandoning them
altogether.

Convinced of the underlying virtues of the decision table technique, we
designed Cope to avoid these drawbacks (Dwyer and Hutchings, 1977). The
processor was written as a Cobol program, so that it is as portable as the
programs it processes. The format of a table is that of a NOTE paragraph or a
series of comment lines. This allows tables to be reproduced in the generated
Cobol program, eliminating the need for additional listings entirely. The
syntax is compatible with Cobol practice, and has the same continuation and
punctuation rules. Table layout is free-format, but naturally it is a good
idea to keep the columns of entries vertically aligned. Particular attention
has been paid to the quality and layout of the generated code.
The programmer is given more than usual responsibility for the code that is
generated. This has meant ignoring a large body of literature on decision tree
optimisation (e.g. Schumacher and Sevcik 1976; Myers 1972). The justification
is that optimisation can change the logical order of the conditions given by
the programmer. Most published methods assume that such changes will not
influence the program logic. Although usually correct, the assumption is not
always justified. Rather than burdening the programmer with explaining such
constraints to the processor, Cope lets him choose what he believes to be the
optimum order. He may sometimes be wrong, but it is a responsibility he has always
taken with other coding techniques. Cope restricts its own optimisation to
what can be done without altering the logical order given by the programmer.
Extended-entry tables receive a new treatment. The advantage of extended entry
is conciseness, but writing the extended entries in the table itself makes it
hard to fit a useful number of rules onto a coding sheet. So Cope uses
multi-choice notation, where the table can contain numbers. The numbers can be
used to select parameters that modify a skeleton or "model" statement. The
parameters are placed in a list following the model statement, clear of the
table area. This makes it easier to write the table, but rather harder to read
it. There are three different substitution mechanisms, denoted by paired
periods, commas, and semi-colons.
Users' reactions to multi-choice notation go through three stages. First a
tendency to avoid it. Second, to use it on every possible occasion. Third, to
use it intelligently to improve documentation. The only difficulty experienced
has been understanding where parameters will be substituted when the "model"
statement calls for more than one parameter. The rule used is not the most
obvious one, but the one that saves most writing.
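As an illustration only (Cope's actual substitution rules are richer and operate on Cobol source text; the helper names below are invented), the three multi-choice mechanisms might be modelled in Python as:

```python
def expand(model: str, parameters: list[str], entry: int) -> str:
    """'..' mechanism: a rule's numeric entry (1-based) selects one
    parameter from the list following the model statement."""
    return model.replace("..", parameters[entry - 1], 1)

def expand_each(model: str, parameters: list[str]) -> list[str]:
    """',,' mechanism: the model statement is repeated once per
    parameter, generating a series of similar statements."""
    return [model.replace("..", p, 1) for p in parameters]

def expand_self(model: str, entry: int) -> str:
    """';;' mechanism: the numeric entry is itself substituted."""
    return model.replace(";;", str(entry), 1)

model = 'IS TYPE-OF-ITEM IN TRANSACTION = ..'
params = ['"DB" OR "CR"', '"VO" OR "RV"']

print(expand(model, params, 1))   # entry 1 selects the first parameter
print(expand_each('MOVE SPACES TO .. OF BUFFER.',
                  ['SUPPLIER-NAME', 'SUPPLIER-CODE']))
print(expand_self('IS CONTEXT = ;;.', 3))
```

The same separation of "skeleton" from "parameters" is what keeps the table area itself narrow enough to fit a useful number of rules.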
Program structures that involve loops are accommodated by special conditions
and actions. These special statements permit arbitrary logic structures. A
simple extension to this feature is provided to ease writing test data.
Cope thus has a handful of new syntax features that make it much more
flexible. Experience has shown that they are at least as valuable as its
ability to process decision tables.
The Management Services group at the South Australian Institute of Technology
uses Cope exclusively for all Cobol programming, and has written over 200
tables. Each table usually forms a Cobol section and is a closed sub-routine.
Over 90% of the tables make use of one or more special features of the Cope
language.
The first Cope project was the Cope processor itself, which was written as a
series of tables. The original version was translated to Cobol by hand. Later
versions have each been bootstrapped from the preceding version.
Based on this experience, I estimate that Cope produces 10% to 20% less code than
hand translation. Although it is sometimes possible for a programmer to spot
where the code Cope generates could be improved, it is also true that he rarely
has the time to be as thorough as a machine in making trivial improvements.
The net gain is 10% to 20% in favour of the machine. Cope and the programmer
work synergistically. The machine takes care of the details; the programmer
concentrates on the larger design.
I have been asked several times, "What kind of problem do you find decision
tables solve?". Am I being paranoid, or is the questioner suggesting that I
must have some very peculiar problems? Obviously he can't think of a useful
application himself.
Our Management Services group has very ordinary problems. They are shared by
most of the DP community.
- We don't want to write more code than we need, especially if the code is
repetitive.
- We want to be able to trace bugs quickly.
- We want documentation that helps program maintenance.
- We like structured programming, but we don't want to use extra memory for it.
- We don't want to be bothered by trivial coding errors.
To a greater or lesser extent, Cope helps with all these problems. I will show
how in the following examples.
Cope cuts procedure division coding by more than half. It is not easy to give
exact figures; our group does not have the resources for a controlled
experiment. But it is easy to compare the number of characters in tables fed
to Cope with the Cobol they generate. For a typical program the ratio is
about 55%. If Cope did not optimise the generated program so well, this ratio
would be more impressive. Paradoxically, the better Cope works, the less it
appears to save. Remembering that it generates less code than hand
translation, it is fair to claim Cope cuts writing by over 50%. This is in
comparison with the optimised code that Cope produces. The same logic written
as a structured program is usually more verbose. Compared with structured
programming, Cope programming may use only one third as many characters.
The saving comes about in three ways. One is that Cope creates all necessary
procedure-names and GO TO's. The second is that when logic requires the same
statement to appear more than once in the program, it need only be written once
in a table. The third is the most significant. When a program requires several
similar, but not identical statements, Cope's multi-choice capability can be
used to avoid repetition.
Figure 1 illustrates this last point. The table has 11 rules, each consisting
of one condition and one action. Each rule is of the form "If the command is
one of these codes, display the following error message". The particular codes
and the variable part of the error message are selected by the numbers in the
table. The first rule selects the first set of codes and the first message,
and so on.
To read the rules it is necessary to mentally substitute the appropriate
parameter from the list where the model contains paired periods (..). The
first rule in the table, containing two 1's, reads as follows:
IS TYPE-OF-ITEM IN TRANSACTION = "DB" OR "CR"?
DISPLAY TYPE-OF-ITEM IN TRANSACTION "LEDGER AMOUNT (ORDER-NO)."
In this example each of the parameters is enclosed in parentheses. This is only
necessary when a parameter contains a delimiter, such as a space. As shown by
this example, the outer parentheses are ignored.
*DISPLAY-FORMAT-1. NOTE TABLE TO DISPLAY CORRECT COMMAND FORMAT.
* 1 2 3 4 5 6 7 8 9 10 11 IS TYPE-OF-ITEM IN TRANSACTION = ...
* ("DB" OR "CR")
* ("IN" OR "CN" OR "UP" OR "0P")
* ("VO" OR "RV")
* ("JE" OR "RJ")
* ("CC" OR "CH")
* ("C1" OR "C2")
* ("R1" OR "R2" OR "R3" OR "R4")
* 1 2 3 4 5 6 7 8 9 10 11 DISPLAY TYPE-OF-ITEM IN
* TRANSACTION "..." UPON TELETYPE.
* (LEDGER AMOUNT (ORDER-NO))
* (CHEQUE-NO AMOUNT DATE)
Figure 1: Selecting one of many similar alternatives.
By hand, it would have been necessary to write ten IF statements and
eleven DISPLAY statements. Unless the Cobol compiler could accept IF
statements nested 10 deep, some GO TO's and paragraph-names would be needed.
Figure 2 shows the use of paired commas (,,) to generate a series of similar
statements within the same rule. The table corresponds to a total of 8 MOVE
instructions. Decision table logic is not used at all. The table has no
conditions and only one rule.
The table is read as:
MOVE SPACES TO SUPPLIER-NAME OF BUFFER.
MOVE SPACES TO SUPPLIER-CODE OF BUFFER.
*CLEAR-LINE-1. NOTE TABLE TO CLEAR BUFFER CONTENTS.
* 1 MOVE SPACES TO .. OF BUFFER.
* SUPPLIER-NAME. ,, SUPPLIER-CODE.
* ,, DETAILS. ,, SERIAL-NO.
* ,, LEDGER. ,, ORDER-NO.
* 1 MOVE ZERO TO .. OF BUFFER.
* EXPENSE. ,, PAYABLE.
Figure 2: Generating several similar statements.
There are occasions when the two-dimensional layout of a table is just what is
needed to express the problem. Figure 3 shows this nicely. This table checks
that commands read from a teletype appear in their proper context. There are 8
possible contexts and 7 classes of command. The matrix in the top left corner
displays every possible combination. The allowable combinations are marked by
a "1" and the illegal combinations by a hyphen. Columns correspond to values
of the context and the rows to the commands. (The ninth rule is an "else"
rule.)

The semicolons in the condition,

IS CONTEXT = ;;.

indicate that the number in the table is itself to be substituted, e.g.

IS CONTEXT = 1.

When an illegal combination is detected, an error message is displayed. This is
different for each context. It is easy to check that the texts of the error
messages agree with the matrix.
*COMMAND-INITIALIZE-1. NOTE TABLE TO CHECK CONTEXT OF COMMAND.
* 1 1 1 1 1 1 - 1 - IS NOT (TYPE-OF-ITEM IN TRANSACTION =
* ..). "RV".
* 1 - - - - 1 1 1 - ,, "RJ".
* 1 - - - - 1 - 1 - ,, ("VO" OR "JE" OR "FI").
* - - - - 1 1 1 1 - ,, ("DB" OR "CR")
* - - 1 1 1 - - - - ,, ("IN" OR "CN" OR "UP" OR "OP" OR
* "PR" OR "CC" OR "CH").
* - 1 - - - - - - - ,, ("SC" OR "SU").
* - - 1 - - - - - - ,, ("C1" OR "C2" OR "R1" OR "R2" OR
* "R3" OR "R4").
* 1 2 3 4 5 6 7 8 - IS CONTEXT = ;;.
* 1 2 3 4 5 6 7 8 - MOVE "..." TO WARNING.
* (VO JE RV RJ FI)
* (SC SU RV)
* (C1-C2 R1-R4 IN CN UP OP PR CC CH RV)
* (IN CN UP OP PR CC CH RV)
* (IN CN UP OP PR CC CH DB CR RV)
* (DB CR VO JE RV RJ FI)
* (DB CR RJ)
* (DB CR JE VO RJ RV FI).
* 1 1 1 1 1 1 1 1 - DISPLAY .. UPON TELETYPE. IN-LINE.
* ,, IMAGE-LINE.
* ,, ("CHOOSE FROM - " WARNING).
* X X X X X X X X - MOVE TRUE TO SERIOUS-ERROR.
* 1 1 1 1 1 1 1 1 2 MOVE .. TO CONTEXT-ERROR. TRUE FALSE.
Figure 3: A problem best expressed as a matrix.
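The matrix lends itself to a data-driven check. The sketch below is not Cope code or output; it merely re-expresses the first two matrix rows of Figure 3 as Python data, reading each "1" entry as a context in which the command class is allowable:

```python
# The '1' entries of the first two matrix rows of Figure 3, read as
# "contexts (1-8) in which the command is allowable".
ALLOWED_CONTEXTS = {
    "RV": {1, 2, 3, 4, 5, 6, 8},   # row 1: 1 1 1 1 1 1 - 1
    "RJ": {1, 6, 7, 8},            # row 2: 1 - - - - 1 1 1
}

def context_ok(command: str, context: int) -> bool:
    """True when the command may appear in the given context;
    unknown commands are rejected (the table's 'else' rule)."""
    return context in ALLOWED_CONTEXTS.get(command, set())

print(context_ok("RJ", 6), context_ok("RV", 7))  # True False
```

The point of the two-dimensional layout is that this whole dictionary is visible at a glance in the table, and can be checked row by row against the error-message texts.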
A different advantage of a two-dimensional layout is illustrated by Figure 4.
One of the actions of this table is to move one of four different values to
IMAGE-CHAR (IMAGE-INDEX). During program debugging, when the above variable
displayed unexpected values, it was easy to trace the origin of the particular
value.
*TRANSFER-OPERAND-1. NOTE TABLE TO CHECK CHARACTER SET.
* Y Y N N N N N N N N N N IS SYMBOL-INDEX MAX-LEN.
* 1 - 1 2 2 3 4 4 5 5 - - IS IN-CHAR (IN-INDEX)
* (= SPACE)
* (= "-")
* ( = ".")
* - - - 1 - - 2 - 3 - 4 - IS CHAR-SET ... (> 1) (> 2)
* (= ZERO OR > 3) (> 3).
* - 1 2 2 3 2 2 3 4 3 2 3 MOVE '..' TO IMAGE-CHAR
* (IMAGE-INDEX). < - * (.).
* - X - - X - - X - X - X MOVE TRUE TO SERIOUS-ERROR.
* - - X X X X X X X X X X SET SYMBOL-LEN TO SYMBOL-INDEX;
* MOVE IN-CHAR (IN-INDEX) TO
* SYMBOL-CHAR (SYMBOL-INDEX);
* SET SYMBOL-INDEX UP BY 1.
Figure 4: How tabular layout helps debugging.
For example, if the value displayed was an "*", the 3rd alternative, the
table entry must be 3. This means that rule 5, rule 8, rule 10 or rule 12,
must have been used. Hence it is possible to narrow down the cause to four
possible combinations of conditions.
To achieve this same deduction without the table is much harder. The
programmer must first scan for relevant MOVE instructions in a considerably
longer text, making sure to find each occurrence. He must then work backwards
towards the subroutine entry-point to establish the conditions that must have
applied. If he finds a paragraph-name in the process, he must scan the text
for any relevant GO TO's and trace backwards from them as well. If the coding
has been cleverly optimised, he is likely to encounter further paragraph-names,
and so on. This activity surely accounts for most of the time programmers spend
debugging.
Logic tracing with a table is comparatively simple, and the time to locate
faults is greatly reduced.
To illustrate the point further let me cite the case of the bug in Cope. The
exact details of the bug need not concern us, except that it was obscure enough
to have remained undetected for a year of operation. After such a time, I
certainly did not remember the details of the program well enough to know even
which table was in error.
Once the exact cause of the bug had been defined, it was easy to find exactly
which rule applied when the error occurred. Correcting the bug required the
addition of a new row and the alteration of a couple of entries. Debugging and
correction took less than half a day.
Most importantly, correcting the bug did not, and could not, introduce any new
errors. It is common experience that fixing one bug often causes another.
This is because optimisation often dictates that the same code should be shared
by different logic paths. When one path is corrected, the other paths change
too. When one rule in a table is changed, no other rule is affected, even if
it previously shared common code.
The only documentation used in making this correction was the program listing,
which includes the decision tables as comments.
This is the only error that has been found in Cope since it was declared
operational. Cope contains 72 decision tables, and roughly one thousand rules.
Within the Management Services group we have a number of different approaches
to solving programming problems, and occasionally a program starts life as a
flow-chart. The table in Figure 6 originated this way, from the flow-chart of
Figure 5.

Figure 5: Checking some structured ... (flow-chart not reproduced).
When a flowchart represents a case statement, writing its decision table
is trivial. Each case becomes a rule.
More complex flowcharts must be broken into a series of case statements. This
is done by labelling the flowchart with a series of unique numbers. (These
numbers are shown in Figure 5). Each flowpath between the numbered labels
can be tabulated, provided that the starting and finishing label numbers are
also recorded. These numbers appear in the GROUP and NEXT rows, the first and
last rows of Figure 6.
*READ-PAYMENTS-1. NOTE TABLE TO READ AND MATCH OLD PAYMENT RECORDS.
* 1 2 2 3 4 4 5 5 5 5 6 6 GROUP.
* - Y N - - - - - - - - - IS THIS-KEY = HIGH-VALUE.
* - - - -1 - 1 1 1 - 2 - IS ..-KEY = THIS-KEY. OLD REF.
* - - - - - - 1 2 - - - - IS TYPE-OF-ITEM IN OLD = "..".
* SU SC.
* 1 - - - - - - - - - - - MOVE .. TO DATA-STATUS;
* PERFORM SCHEDULE. START-FILE.
* - - - 1 - - - - - - - - ,, START-SUPPLIER.
* - - - - 1 - - - - - - - ,, START-VOUCHER.
* - - - - 1 - - - - - - - ,, PROCESS-HEADER.
* - - - - - - - - 1 - - - ,, PROCESS-OLD.
* - - - - - - 1 1 - 1 - - ,, END-VOUCHER.
* - - - - - 1 1 - - 1 - - ,, END-SUPPLIER.
* - 1 - - - - - - - - - - ,, END-FILE.
* 1 2 - - - - - - - - - - .. OLD-FILE REF-FILE.
* (OPEN INPUT) CLOSE.
* X - - - X - - - X - - - READ OLD-FILE AT END
* MOVE HIGH-VALUE TO OLD-KEY.
* X - - - X - - - X - - - IF OLD-KEY NOT = HIGH-VALUE
* MOVE CORR OLD TO OLD-KEY.
* X - - - - - - - - - X - READ REF-FILE AT END
* MOVE HIGH-VALUE TO REF-KEY.
* X - - - - - - - - - X - IF REF-KEY NOT = HIGH-VALUE
* MOVE CORR REF TO REF-KEY.
* X - - - - - - - - - X X IF OLD-KEY < REF-KEY THEN
* MOVE OLD-KEY TO THIS-KEY ELSE
* MOVE REF-KEY TO THIS-KEY.
* 2 - 3 4 5 6 3 4 5 6 2 2 NEXT.
Figure 6: The flowchart expressed as a table.
The labelling process is somewhat arbitrary. The main requirement is
that at least one label must appear in every loop. In Figure 5 six labels were
adequate. Once labelling has been done, it is easy to write the table.
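The GROUP and NEXT rows effectively turn a table into a small state machine: execution stays within the current group until a rule with no NEXT entry fires. As a rough illustration (not Cope's generated code; the rules below are invented, not taken from Figure 6), the mechanism might be modelled in Python as:

```python
def run(rules, start=1, state=None):
    """Interpret (group, guard, action, next_group) rules in the spirit
    of Cope's GROUP/NEXT rows. next_group None means exit."""
    state = state if state is not None else {}
    group = start
    while True:
        for g, guard, action, nxt in rules:
            if g == group and guard(state):
                action(state)
                if nxt is None:
                    return state          # rule with a blank NEXT entry
                group = nxt               # jump to the labelled group
                break
        else:
            raise RuntimeError(f"no rule applies in group {group}")

# A toy loop: group 1 initialises, group 2 counts up to 3, then exits.
rules = [
    (1, lambda s: True,        lambda s: s.update(n=0),          2),
    (2, lambda s: s["n"] < 3,  lambda s: s.update(n=s["n"] + 1), 2),
    (2, lambda s: s["n"] >= 3, lambda s: None,                   None),
]
print(run(rules))  # {'n': 3}
```

Each numbered label on the flowchart becomes a group, and each flowpath between labels becomes one rule, which is why "at least one label in every loop" suffices.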
The technique was used by a government department to convert a complex Fortran
program to Cobol. Because flowcharts were available, the conversion was not
difficult. Testing was harder, because no one was familiar with the problem
the program was meant to solve.
The method the department used for testing is interesting. An extra
multi-choice row was temporarily added to each table. Its effect was to
record how many times each rule in each table was used when a test file was
processed. Rules with a count of zero would need special test data to be
devised. The test output could be compared with the results produced by the
original Fortran program.
The original test data failed to exercise a surprising number of rules. So
special test cases were supplied to check them. In some cases these
rarely-used rules did not perform correctly, revealing that the Fortran program
still contained bugs after years of operational use. In other cases it was
impossible to exercise the rules at all, because they were logically
redundant.
This same rule count measurement was used to develop the Cope processor, but
for a different purpose. By noting which rules were executed most often, it
was possible to find the critical areas that affected program performance.
This way, we cut execution time 40%. Before using this technique, our efforts
had been misdirected, and we had made only marginal gains.
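The rule-count idea carries over to other settings. A hedged Python sketch (the function and rule names are invented, not taken from Cope): count which rule fires on each call, then look for rules with a zero count (coverage) or a very high count (performance hot spots).

```python
from collections import Counter

rule_counts = Counter()

def select_rule(rules, value):
    """Apply the first rule whose guard holds, recording a count so
    unexercised rules (coverage) and hot rules (performance) show up."""
    for i, (guard, action) in enumerate(rules, start=1):
        if guard(value):
            rule_counts[i] += 1
            return action(value)
    raise RuntimeError("no rule applies (add an 'else' rule)")

rules = [
    (lambda v: v < 0,  lambda v: "negative"),
    (lambda v: v == 0, lambda v: "zero"),
    (lambda v: v > 0,  lambda v: "positive"),
]

for value in (3, 7, -1, 12):
    select_rule(rules, value)

# Rules with a count of zero need special test data devised for them:
unexercised = [i for i in (1, 2, 3) if rule_counts[i] == 0]
print(dict(rule_counts), unexercised)  # rule 2 ("zero") never fired
```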
The three basic constructions of structured programming are readily translated
to tabular format. Figure 7 is a sequence of two blocks. The first block is
GROUP 1, consisting of the first rule, the second is GROUP 2, comprising the
remaining three rules.
*CALCULATE-NEW-SIZE-1. NOTE TABLE TO SET RANDOM FILE LIMIT.
* 1 2 2 2 GROUP.
* - Y N N IS HASH IN KEY-AREA = ZERO.
* - - N Y IS QUOTIENT IN KEY-AREA < DIVISOR OF
*           KEY-AREA.
* X - - - DIVIDE REC-COUNT OF KEY-AREA
* BY PACKING-DENSITY OF KEY-AREA
* GIVING SPREAD OF KEY-AREA.
* - X - - ADD 1 TO SPREAD OF KEY-AREA.
* 2 2 1 - .. TO DIVISOR OF KEY-AREA.
* (ADD 1) (MOVE 2).
* X X X - DIVIDE SPREAD OF KEY-AREA
* BY DIVISOR OF KEY-AREA
* GIVING QUOTIENT IN KEY-AREA
* REMAINDER HASH OF KEY-AREA.
* 2 2 2 - NEXT.
Figure 7: A while loop containing an if...else.
The first block, with only one rule, is a compound statement. The
second block, with 3 rules, is more complex. The last rule is the exit
condition, because the NEXT entry is blank ("-") . Since there are two other
rules in GROUP 2 both having NEXT entries of 2, GROUP 2 must be a while
.. do construct containing an if .. else construct.
This example illustrates not only that it is possible to express any structured
program in tabular form, but that the structure is not concealed. It is made
more explicit to the practised eye, because it is necessary to look only at two
lines of text, the GROUP and NEXT rows, to understand the basic structure.
From Figure 7, it is easy to spot that the last two actions in rule 1, which is
outside the while loop, also appear in rule 2, which is inside the loop.
This might encourage us, coding by hand, to write some tricky code, so that
these instructions are not duplicated.
Using Cope, we can both have the structured cake, and eat it. The form of the
table preserves the while .. do structure, but Cope's optimiser will produce
the tricky code that eliminates the duplicated instructions. The generated
program is shown in Figure 8.
    DIVIDE REC-COUNT OF KEY-AREA BY PACKING-DENSITY
        OF KEY-AREA GIVING SPREAD OF KEY-AREA.
CALCULATE-NEW-SIZE-1-1616.
    MOVE 2 TO DIVISOR OF KEY-AREA.
CALCULATE-NEW-SIZE-1-1716.
    DIVIDE SPREAD OF KEY-AREA BY DIVISOR OF KEY-AREA
        GIVING QUOTIENT IN KEY-AREA REMAINDER HASH OF KEY-AREA.
    IF HASH IN KEY-AREA = ZERO; GO TO CALCULATE-NEW-SIZE-1-1517.
    IF QUOTIENT IN KEY-AREA < DIVISOR OF KEY-AREA
        GO TO CALCULATE-NEW-SIZE-1-EXIT.
    ADD 1 TO DIVISOR OF KEY-AREA.
    GO TO CALCULATE-NEW-SIZE-1-1716.
CALCULATE-NEW-SIZE-1-1517.
    ADD 1 TO SPREAD OF KEY-AREA.
    GO TO CALCULATE-NEW-SIZE-1-1616.
Figure 8: Generated code is optimised for size and speed.
It would be wonderful if Cope could eliminate all the trivial errors in my
programs. However, I still occasionally forget to initialise a variable to
zero, or I write "<" when I mean ">". And I now have the opportunity to
substitute "N" for "Y".
Even so, I feel sure that I make fewer stupid errors than before I used Cope,
and the other members of the Management Services group feel the same way.
There are a number of good reasons why this should be so.
- Tables are very easy to check at "walk-thru's". The relevant cases are all
documented ready for discussion. It is so easy to see that the tables are
right that we don't need walk-thru's very often.
- We no longer need to write a GO TO or nest an IF. So there are no GO
TO's going to the wrong place, and we have no trouble matching our IF's with
our ELSE's. Cope does these things for us, and is not creative enough to get
them wrong.
- That most common coding mistake, the missing period, no longer causes subtle
logic errors. It is always picked up as a syntax error.
- Most importantly, there is no longer any need to be tricky. If optimisation
can take place, Cope will do it and we needn't worry about it. If we do want
to worry about it, we can sometimes adjust the order of the rows within a
table, to help Cope's optimiser.

In Figure 3 the programmer had to help Cope. When it was first written, the
test for CONTEXT was the first row in the table. But the programmer reasoned
that if the table began with a 9-way branch, a separate test would have to be
made wherever the matrix at top left had a "1" entry. By deferring the context
check, the matrix would be evaluated as a decision tree.

The saving is substantial: 33 condition tests were reduced to 19. The code
generated by the original arrangement was easy to understand but inefficient.
The efficient arrangement gives code that is much harder to understand, and by
hand it would be impossible to maintain. Maintaining the table is just as easy
in either form.
We have encountered four problems using Cope. It is only fair to include them
in what has otherwise been a one-sided discussion.
- It takes rather longer for Cope to process the tables than it takes the
compiler to compile the Cobol it generates. There are a number of factors
involved here. For example, if Cope did not optimise the Cobol program, Cope
would run faster and the compiler would take longer, compiling the extra
statements. Our particular compiler is very fast, but does not produce good
object code. Hence Cope, which is a Cobol program, is not as efficient as it
might be on a different machine. But it is debatable whether Cope has
increased the computer time we use, because we do not need as many runs to
get a program working.
- It is sometimes cumbersome to amend a table using the text editor or
library update utility, especially if a new rule is to be added. We get around
this problem by always leaving room for additional rules, and by writing tables
small enough to be regarded as expendable.
One must resist the temptation to make the tables too complex. They are an
alternative to flowcharts, not an alternative to proper program design. In the
early days of decision tables, their ability to handle complex logical
combinations encouraged many users to bypass important design decisions. It
was not a formula for success then, and it is not a formula for success now. A
better guide to success, is to use an output-directed approach (Grindley 1968),
combined with structured program design (Jackson 1975).
- A programmer may not be able to get the best use from Cope if someone
else designed the module structure. Cope works best when similar conditions or
actions are grouped into the same table. Arranging this sometimes leads to
unusual designs, but always to improved documentation. For example, the
DISPLAY statements in Figure 1 could have been embedded at various points in
other modules, and Figure 1 need not have existed. Because the programmer
could foresee saving some writing by grouping the DISPLAYs together, he has
neatly documented all the command formats in one place.
- It has not proved easy to persuade experienced programmers to give
Cope a try. They are too busy cutting down trees to find time to sharpen the
axe.

It takes time to become as fluent with tables as with the more familiar
flowcharts or structured programs. Nor is any benefit necessarily felt during
coding. Satisfaction comes when the program is complete and debugging begins.
It is then obvious that the program that has been produced is of very high
quality. Bugs are rare, and documentation is excellent.
It took a year for one member of our team to get around to using Cope. (He was
always too busy -- looking for bugs). A month later he cheerfully remarked, "I
wouldn't want to go back to Cobol again -- you should have called it `Dope'..."
Dwyer, B. and Hutchings, K. (1977) "Flowchart Optimisation in Cope, a
Multi-Choice Decision Table" Aust. Comp. J. Vol. 9 No. 3
p. 92 (Sep. 1977).
Fisher, D.L. (1966) "Data, Documentation and Decision Tables" Comm.
ACM Vol. 9 No. 1 (Jan. 1966) p. 26-31.
General Electric Company (1962) GE-225 TABSOL reference manual and
GE-224 TABSOL application manual CPB-147B (June 1962).
Grindley, C.B.B. (1968) "The Use of Decision Tables within Systematics"
Comp. J. Vol. 11 No. 2 p. 128 (Aug. 1968).
Jackson, M.A. (1975) Principles of Program Design Academic Press.
Myers, H.J. (1972) "Compiling Optimised Code from Decision Tables"
IBM J. Res. & Development (Sept. 1972) p. 489-503.
Pollack, S.L. (1962) "DETAB-X: An improved business-oriented computer
language" Rand Corp. Memo RM-3273-PR (August 1962)
Schumacher, H. and Sevcik, K.C. (1976) "The Synthetic Approach to
Decision Table Conversion" Comm. ACM Vol. 19 No. 6
(June 1976) p. 343-351.