So we have this little enum:
java.net.http.HttpClient.Version
public enum Version { HTTP_1_1, HTTP_2 }
and this little code snippet:
private static @NotNull String toStringHttpVersion(HttpClient.Version version) {
    return switch (version) {
        case HTTP_2 -> "HTTP/2";
        case HTTP_1_1 -> "HTTP/1.1";
    };
}
The linter-compiler train is absolutely content with the above, all cases covered, it’s beautiful.
This seems like a recipe for future trouble at any point of deployment, right? What if Oracle decided to extend the enum with HTTP_2_1, or even to retro-fit HTTP_1_0 into it? Code installed all over the corporate system calling toStringHttpVersion() will die horribly.
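Concretely, “die horribly” means a runtime error rather than a compile error: for a switch expression with no default, javac inserts a synthetic throwing default, so a constant added to the enum after this method was compiled only blows up when that constant actually arrives. A rough illustration of the desugared shape (my own sketch, not the actual generated code; the exact exception type, IncompatibleClassChangeError or MatchException, depends on the Java version):

import java.net.http.HttpClient;

class Desugared {
    // Roughly how the exhaustive switch expression above behaves
    // once the compiler has added its hidden default branch.
    static String toStringHttpVersion(HttpClient.Version version) {
        switch (version) {
            case HTTP_2:   return "HTTP/2";
            case HTTP_1_1: return "HTTP/1.1";
            default:
                // Hit only if Version gained a constant that this class
                // was never recompiled against.
                throw new IncompatibleClassChangeError();
        }
    }
}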
(Why HTTP_1_0 isn’t in the enum in the first place is mysterious to me; it feels broken. That enum shouldn’t exist, it should be an instance of a class called Semver or maybe a BigDecimal, but I digress, and anyway java.net.http.HttpClient throws a bare IOException if the HTTP status is “forbidden”, which is beneath contempt. I will stop now.)
One really has to write this:
private static @NotNull String toStringHttpVersion(HttpClient.Version version) {
    return switch (version) {
        case HTTP_2 -> "HTTP/2";
        case HTTP_1_1 -> "HTTP/1.1";
        default -> version.toString(); // future proof
    };
}
Which is flagged by the linter with “default branch is unnecessary”, and which a developer may well forget to write in the first place.
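One middle ground I can think of (my own sketch, nothing the JDK offers): keep the switch exhaustive so the compiler still checks it, and let a unit test iterate over whatever constants the runtime actually has, so an extended enum fails the build instead of production. Assuming JUnit 5, a hypothetical class HttpVersions holding the method, and the method made at least package-visible:

import java.net.http.HttpClient;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertNotNull;

class HttpVersionsTest {
    @Test
    void everyVersionConstantHasAMapping() {
        // values() reflects the JDK the tests run on, so a newly added
        // constant shows up here as a failing test (or a compile error
        // on recompilation) rather than as a crash in deployed code.
        for (HttpClient.Version v : HttpClient.Version.values()) {
            assertNotNull(HttpVersions.toStringHttpVersion(v));
        }
    }
}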
Am I missing something, or does Java need a keyword futureextensible to flag enums that are in danger of future extension, allowing the linter to warn about it?
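For completeness, the closest lint-quiet approximation I can come up with today is to drop the switch and keep the mapping in an EnumMap with an explicit fallback; purely a sketch, the class and field names are mine:

import java.net.http.HttpClient;
import java.util.EnumMap;
import java.util.Map;

class HttpVersions {
    private static final Map<HttpClient.Version, String> NAMES =
            new EnumMap<>(HttpClient.Version.class);
    static {
        NAMES.put(HttpClient.Version.HTTP_2, "HTTP/2");
        NAMES.put(HttpClient.Version.HTTP_1_1, "HTTP/1.1");
    }

    static String toStringHttpVersion(HttpClient.Version version) {
        // Any constant added to the enum later quietly falls back to its
        // own toString() instead of hitting a hidden throwing default.
        return NAMES.getOrDefault(version, version.toString());
    }
}

It loses the compile-time exhaustiveness check, of course, which is exactly the trade-off a futureextensible-style hint would let the tooling manage.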