Let’s say I have
procedure1() {
--body of first procedure--
}
Then I rename it into procedure2
and create a procedure1
above it:
procedure1() {
--body of second procedure--
}
procedure2() {
--body of first procedure--
}
More than once, a line-based diff tool has highlighted everything from --body of second procedure-- down to procedure2() { as new code inside procedure1.
This is bound to happen, since most diff tools are oblivious to the underlying structure of the source code. AST-based diff tools don’t work very well either, for several reasons. I know what people really want is a semantic diff tool, but that’s not going to happen.
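For illustration, here is a minimal sketch using Python’s standard difflib module (a classic line-based matcher) that reproduces this misattribution; the procedure bodies are placeholder comments:

```python
import difflib

# Old file: a single procedure.
old = """\
procedure1() {
    // body of first procedure
}
""".splitlines()

# New file: the original procedure renamed to procedure2,
# with a new procedure1 inserted above it.
new = """\
procedure1() {
    // body of second procedure
}
procedure2() {
    // body of first procedure
}
""".splitlines()

for line in difflib.unified_diff(old, new, lineterm=""):
    print(line)
```

The matcher anchors on the unchanged procedure1() { header and on the longest matching run (the first procedure’s body plus its brace), so the hunk reports the second body, a closing brace, and the procedure2() { header as an insertion inside procedure1.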
I haven’t seen a discussion, though, about whether it would be practical to annotate the code in such a way that a line-based diff tool could understand the underlying structure of the source code.
For example, I could throw in some UUIDs in the code, like this:
//BeginBlock{E999A3BF-626E-428F-A2C1-6AFF0CD22BF2}
procedure1() {
--body of first procedure--
}
//EndBlock
And the modified code would look like this:
//BeginBlock{7C734F0A-92F4-45EB-B653-DBB9A0F18354}
procedure1() {
--body of second procedure--
}
//EndBlock
//BeginBlock{E999A3BF-626E-428F-A2C1-6AFF0CD22BF2}
procedure2() {
--body of first procedure--
}
//EndBlock
The point is to attach tokens (unique within a file or project) to markers that reflect part of the structure of the source code.
An IDE could update those annotations automatically, and they could help a diff tool detect structural changes more reliably. The tool would scan the code once to identify the sections of the program (and how they have been moved around), and then compare the blocks that share the same ID.
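As a sketch of how such a tool might work (all function names here are hypothetical), here is a small Python prototype that parses the //BeginBlock{…}/ //EndBlock markers, pairs blocks by ID across the two versions, and reports each block as added, removed, moved, or edited:

```python
import re

BEGIN = re.compile(r"//BeginBlock\{([0-9A-Fa-f-]+)\}")
END = "//EndBlock"

def parse_blocks(text):
    """Map block ID -> (ordinal position in file, list of body lines)."""
    blocks, current_id, body = {}, None, []
    for line in text.splitlines():
        m = BEGIN.search(line)
        if m:
            current_id, body = m.group(1), []
        elif line.strip() == END:
            blocks[current_id] = (len(blocks), body)
            current_id = None
        elif current_id is not None:
            body.append(line)
    return blocks

def structural_diff(old_text, new_text):
    """Compare two annotated files block by block, pairing blocks by ID."""
    old, new = parse_blocks(old_text), parse_blocks(new_text)
    report = []
    for block_id, (pos, body) in new.items():
        if block_id not in old:
            report.append(("added", block_id))
            continue
        old_pos, old_body = old[block_id]
        if old_pos != pos:
            report.append(("moved", block_id))
        if old_body != body:
            report.append(("edited", block_id))
    report += [("removed", bid) for bid in old if bid not in new]
    return report

old_text = """\
//BeginBlock{E999A3BF-626E-428F-A2C1-6AFF0CD22BF2}
procedure1() {
    // body of first procedure
}
//EndBlock
"""

new_text = """\
//BeginBlock{7C734F0A-92F4-45EB-B653-DBB9A0F18354}
procedure1() {
    // body of second procedure
}
//EndBlock
//BeginBlock{E999A3BF-626E-428F-A2C1-6AFF0CD22BF2}
procedure2() {
    // body of first procedure
}
//EndBlock
"""

# The 7C73… block is reported as added; the E999… block as moved and
# edited (its header line changed from procedure1 to procedure2).
print(structural_diff(old_text, new_text))
```

Because blocks are paired by ID rather than by position, the rename-and-insert case is reported as one new block plus one moved-and-edited block, instead of a large insertion inside the first procedure.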
Do you think this approach is practical?
The Patience Diff algorithm is designed to address this, insofar as it is possible to do so with unannotated text. From that article:
[Patience Diff] only considers lines that are (a) common to both files, and (b) appear only once in each file. This means that most lines containing a single brace or a new line are ignored, but distinctive lines like a function declaration are retained. Computing the longest common subsequence of the unique elements of both documents leads to a skeleton of common points that almost definitely correspond to each other. The algorithm then sweeps up all contiguous blocks of common lines found in this way, and recurses on those parts that were left out, in the hopes that in this smaller context, some of the lines that were ignored earlier for being non-unique are found to be unique. Once this process is finished, we are left with a common subsequence that more closely corresponds to what humans would identify.
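To make the quoted description concrete, here is a toy Python sketch of just the anchor-selection step (not the full recursive algorithm): it keeps the lines that are unique to both files, then uses patience sorting to find the longest subsequence of those lines that appears in the same order in both files.

```python
from bisect import bisect_left
from collections import Counter

def patience_anchors(a, b):
    """Return the lines Patience Diff would use as anchors: lines unique
    in both files, restricted to the longest subsequence appearing in the
    same order in both (longest increasing subsequence via patience sorting)."""
    count_a, count_b = Counter(a), Counter(b)
    pos_b = {line: i for i, line in enumerate(b)}
    # (index in a, index in b) for every unique-in-both common line
    pairs = [(i, pos_b[line]) for i, line in enumerate(a)
             if count_a[line] == 1 and count_b.get(line, 0) == 1]
    # Longest increasing subsequence of the b-indices, with backpointers
    tails, tail_idx, back = [], [], []
    for n, (_, ib) in enumerate(pairs):
        k = bisect_left(tails, ib)
        back.append(tail_idx[k - 1] if k else -1)
        if k == len(tails):
            tails.append(ib)
            tail_idx.append(n)
        else:
            tails[k] = ib
            tail_idx[k] = n
    # Walk the backpointers to recover the anchor chain
    chain, n = [], tail_idx[-1] if tail_idx else -1
    while n != -1:
        chain.append(pairs[n])
        n = back[n]
    return [a[ia] for ia, _ in reversed(chain)]

old = ["procedure1() {", "--body of first procedure--", "}"]
new = ["procedure1() {", "--body of second procedure--", "}",
       "procedure2() {", "--body of first procedure--", "}"]

# "}" occurs twice in the new file, so it is never an anchor.
print(patience_anchors(old, new))
# → ['procedure1() {', '--body of first procedure--']
```

Even here, the unique lines alone cannot fully disambiguate the rename (the procedure1 header still anchors to the new procedure1), which is the limitation discussed next.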
It is available in Git (as of 1.8.2) using a command-line flag:
git diff --patience
Or a configuration option:
git config diff.algorithm patience
This algorithm will still suffer when unique lines don’t offer enough information to reconstruct the history. In your example:
Then I rename it into procedure2 and create a procedure1 above it
The fact that you say “and” is already going to mess with any diff algorithm, because you are making multiple semantic changes in one patch. The number of changes you make is the number of changes the diff algorithm has to infer.
Tagging lexical blocks with extra metadata is not a real solution. You might as well get perfect versioning by adding an ID to every character in the document; it is an improvement, but it doesn’t scale. I would much rather see an editor automatically stash, commit, and unstash around an automated refactoring such as a rename. Then at least I know that the changes are semantically isolated, and a post facto diff algorithm will have an easier time of it.
Well, in the first place, I could argue that reusing the procedure1 name and signature for a procedure that does something different, while the old behavior moves to procedure2, is begging for trouble.
Even if procedure1 were the most natural name for the new procedure, you ought to differentiate it at the name level, to prevent confusion down the line.
On a change of this import, the procedure1 name ought to no longer exist, thereby “informing” all dependent sources that something has changed that they need to react to.
Then, you already have something that can take up the role of a UUID: function documentation comments.
I have tried modifying a simple file with rudimentary pseudo-Javadoc comments in the way you indicated.
The diff tool in line mode correctly indicates that a new procedure has been added, and that the old one has had its name changed.
Granted, the final closing brace has been misrepresented; this could be avoided by adding a serial number or date stamp to the closing brace (e.g. } // 20130705), but that seems more trouble than it’s worth.
/** /**
* Procedure 1, doing things. * Procedure 1, doing things.
* *
*/ */
procedure2() { | procedure1() {
// Do operation 1 // Do operation 1
// Do operation 2 // Do operation 2
} <
<
/** <
* Procedure 2, the new one. <
* <
*/ <
<
procedure1() { <
// Do operation 3 <
// Do operation 4 <
} }
procedureFinal() { procedureFinal() {
// Finalize. // Finalize.
} }
If you are depending on the order and structure of the text file containing the source code to help you manage your change sets, you’re never going to get it quite right. A better diff tool may help you somewhat.
Some ideas:
- By radically changing the intent and names of functions, you are basically demanding that the whole subset of the source, at least in this file, be reviewed again for consistency, logical errors, integration boo-boos, etc. You just can’t change the intent of the code and expect a line- or character-based automated diff tool to infer the equivalent changes in meaning.
- If you find this happening a lot, you have two options, as I see it. You can rely less on automated diff tools: once a file has changed beyond comparability, you simply must review it as new. Read the functions. Figure out whether they work. It’s tedious, but you’ve changed the meaning, so there are no shortcuts to ensuring correctness. Alternatively, if you want more automated QA, split these functions into separate files based on their semantics the next time you change things. It’s a lot of files sometimes, but what you are doing is defining, in the textual or filing structure, what each thing does, instead of relying on the uniqueness of function names or their relative positioning within a single file to define the intended semantic structure of your project.

So, split things into files.
If procedure2 is now doing the job of procedure1, that’s a big change in intent.
The only way to propagate this cleanly is to actually program to an interface, and effective use of smaller files should facilitate this.
Also, look at something like GAC manifest binding in .NET: it tries to let library consumers specify that they need a 1.x version, while also letting producers of libraries declare that a 2.1 version is function-for-function backward compatible with all 1.x requirements. This is roughly what you are after with the GUID idea: a stable interface.