I’m implementing a simple Quota object which determines a usage percentage based on the maximum and the used values:
public class Quota {

    private int maximum;
    private int used;

    public Quota(int used, int maximum) {
        this.maximum = maximum;
        this.used = used;
    }

    // Derived on demand from the current fields.
    public double getUsagePercentage() {
        return ((double) used / (double) maximum) * 100.0;
    }

    public int getMaximum() {
        return maximum;
    }

    public void setMaximum(int maximum) {
        this.maximum = maximum;
    }

    public int getUsed() {
        return used;
    }

    public void setUsed(int used) {
        this.used = used;
    }
}
I realized that I could do it another way, though, and have the logic actually in the setter:
public class Quota {

    private int maximum;
    private int used;
    private double percentageUsed;

    public Quota(int used, int maximum) {
        this.maximum = maximum;
        setUsed(used);   // also initialises percentageUsed
    }

    public int getMaximum() {
        return maximum;
    }

    public void setMaximum(int maximum) {
        this.maximum = maximum;
        percentageUsed = ((double) used / (double) maximum) * 100.0;
    }

    public int getUsed() {
        return used;
    }

    public void setUsed(int used) {
        this.used = used;
        percentageUsed = ((double) used / (double) maximum) * 100.0;
    }

    // Returns the value cached by the setters.
    public double getUsagePercentage() {
        return percentageUsed;
    }
}
The second way seems much, much uglier to me, but maybe I’m just indoctrinated. Obviously I realize that the logic is copied between the two setters, but even if you extracted it into its own private helper method (see the sketch below), you get the point. I thought that perhaps this was the better approach so that the calculation of the percentage doesn’t have to happen every single time the getUsagePercentage() method is called. If neither the maximum nor the used value has changed, it seems wasteful to keep performing an operation we already know the answer to.
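Here is roughly what I mean by pulling the shared calculation into a helper; the recalculatePercentage() name is just something I made up for illustration, and these methods would replace the two setters in the second version:

public void setMaximum(int maximum) {
    this.maximum = maximum;
    recalculatePercentage();
}

public void setUsed(int used) {
    this.used = used;
    recalculatePercentage();
}

// Hypothetical private helper holding the single copy of the calculation.
private void recalculatePercentage() {
    percentageUsed = ((double) used / (double) maximum) * 100.0;
}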
Is there any precedent for which to choose over the other? I like the first better just because I’ve been doing Java forever, but that’s not a solid argument for actually keeping it that way. What are the benefits and drawbacks of both? Or is one genuinely, objectively better than the other?
If you set one value (maximum) and then update the other (used), and do this often (think of a progress bar), you do the calculation twice, and even worse, nobody asked for the value in between the setXY() calls. Why spend time on things nobody actually requests? So it is perfectly fine to let the getter calculate the real value whenever it is needed. As you found out, you are also violating the DRY (don’t repeat yourself) principle. With more complex setters and more complex calculations, the tradeoff gets even worse. Your first example is fine.
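To make that concrete, here is a rough sketch of the kind of call sequence I mean, run against the second version of your class (QuotaDemo is just a throwaway driver class):

public class QuotaDemo {
    public static void main(String[] args) {
        Quota quota = new Quota(50, 100);   // constructor caches 50.0 via setUsed()

        quota.setMaximum(200);              // recomputes percentageUsed (25.0), but nobody reads it
        quota.setUsed(120);                 // recomputes it again (60.0)

        // Only here is the percentage actually requested; the intermediate work was wasted.
        System.out.println(quota.getUsagePercentage());   // prints 60.0
    }
}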
One common guideline is that for any fact known by your system, there should be only one place where it is stored. In a database this is known as normalisation; in object design the principle is sometimes referred to as “Single Point of Truth”, or by the acronym SPOT. This is closely related to the principle “don’t repeat yourself” (DRY); in fact, one formulation of DRY is identical:
“Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.”
In your example, there are two points of truth for the percentage – the variable that stores it directly, and the variables that hold the full integer values it is calculated from. This isn’t a particularly bad problem, because the two are within a single small class and can therefore be managed, but unless you have a good reason to do this, I’d avoid it.
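For example, here is a hypothetical later addition to the cached version of the class that shows how the two representations can drift apart; whoever writes it has to remember that percentageUsed is a second copy of the same fact:

// Hypothetical convenience method added later to the second version.
public void reset() {
    this.used = 0;
    // Forgetting this line leaves percentageUsed stale, and the compiler won't complain.
    percentageUsed = 0.0;
}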