I have read several articles and answers about the merits of the Acyclic Visitor pattern over the original Visitor pattern. To avoid ambiguity, I refer to the Acyclic Visitor as described in this article by Uncle Bob.
In essence, the acyclic visitor decouples the creation of new elements in the visited object hierarchy from the need to update all of the visitors accordingly. This allows independent expansion and deployment of the visitors and the elements.
However, how does one keep the existing visitors correct when implementing a new visited element?
Under the assumption that adding a new visited element will often require changes to existing visitors for them to remain correct, that sounds to me like a horrible property of the pattern. Whenever an element is added, someone has to manually remember to check and update every relevant visitor, instead of being forced to do so by the compiler. It essentially transforms what would be a compile-time error in the original Visitor pattern into a software bug in the Acyclic Visitor pattern.
This goes directly against the promise of independently developable and deployable components.
On a large project with many visitors, where people work independently on visitors as features, the acyclic visitors will likely cause costly bugs that are only revealed after deployment.
Are there any ways to mitigate this problem that are not manual? If not, why should the acyclic visitor pattern ever be used?
Basic Example
Let's suppose we have the following visited hierarchy:
public abstract class Base {
    public abstract void accept(Visitor v);
}

public class A extends Base {
    public void accept(Visitor v) {
        if (v instanceof AVisitor) {
            AVisitor av = (AVisitor) v;
            av.visit(this);
        }
    }
}
public class B extends Base {
    public void accept(Visitor v) {
        if (v instanceof BVisitor) {
            BVisitor bv = (BVisitor) v;
            bv.visit(this);
        }
    }
}
And the following visitor, which reports a list of all element types in order:
public interface Visitor {
}

public interface AVisitor {
    void visit(A a);
}

public interface BVisitor {
    void visit(B b);
}
public class ReportVisitor implements Visitor, AVisitor, BVisitor {
    private StringBuffer report;

    public ReportVisitor(StringBuffer report) {
        this.report = report;
    }

    public void visit(A a) {
        report.append("A\n");
    }

    public void visit(B b) {
        report.append("B\n");
    }
}
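For context, a hypothetical driver (the Main class and the element list are my own additions, not part of the pattern) might produce the report like this:

import java.util.List;

public class Main {
    public static void main(String[] args) {
        List<Base> elements = List.of(new A(), new B());

        StringBuffer report = new StringBuffer();
        Visitor visitor = new ReportVisitor(report);

        // Each element casts the visitor to the partial interface it knows about.
        for (Base element : elements) {
            element.accept(visitor);
        }

        System.out.print(report); // prints "A" and "B", one per line
    }
}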
Now, suppose we add a new visited object:
public class C extends Base {
    public void accept(Visitor v) {
        if (v instanceof CVisitor) {
            CVisitor cv = (CVisitor) v;
            cv.visit(this);
        }
    }
}

public interface CVisitor {
    void visit(C c);
}
The developer adding C has to know about ReportVisitor and any other relevant visitor, and remember to modify them. Otherwise, ReportVisitor would simply omit all occurrences of C, which is a bug.
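The change that someone has to remember to make would look roughly like this (a sketch; the body of visit(C) is my assumption about what a correct report should contain):

public class ReportVisitor implements Visitor, AVisitor, BVisitor, CVisitor {
    private StringBuffer report;

    public ReportVisitor(StringBuffer report) {
        this.report = report;
    }

    public void visit(A a) {
        report.append("A\n");
    }

    public void visit(B b) {
        report.append("B\n");
    }

    // The addition nobody is forced by the compiler to make:
    public void visit(C c) {
        report.append("C\n");
    }
}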
Note:
Another modified Visitor pattern achieves similar decoupling using the visitOther method, as described in this answer.
Because the visited object does not filter visitors, that variant arguably has the benefit that any visitor which knows (or, more accurately, speculates and future-proofs) that it needs to handle all of the visited elements can throw an exception, reducing what would be a bug to a runtime error.
While this might be beneficial, as runtime errors are easier to notice, debug and fix than silent bugs, it is still far inferior to a compile-time error.
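For illustration, here is a minimal sketch of how I read that variant. It replaces the acyclic structure above: Visitor again declares a visit method per known element, A and B call v.visit(this) directly from accept, and later elements route through visitOther. The name StrictReportVisitor and the choice of exception are my own assumptions.

public interface Visitor {
    void visit(A a);
    void visit(B b);
    // Fallback for element types added after this interface was written.
    void visitOther(Base element);
}

public class C extends Base {
    public void accept(Visitor v) {
        v.visitOther(this); // Visitor has no visit(C) yet, so use the fallback
    }
}

public class StrictReportVisitor implements Visitor {
    private final StringBuffer report;

    public StrictReportVisitor(StringBuffer report) {
        this.report = report;
    }

    public void visit(A a) {
        report.append("A\n");
    }

    public void visit(B b) {
        report.append("B\n");
    }

    public void visitOther(Base element) {
        // A visitor that must handle every element type can fail loudly here,
        // turning the silent omission into a runtime error.
        throw new UnsupportedOperationException(
                "Unhandled element type: " + element.getClass().getName());
    }
}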