To test that your program deletes the correct files, you first run it in a logging-only mode (by commenting out the delete lines, or by overriding your delete method with a logging method).
for file_path in file_paths:
    print('deleting', file_path)
    # delete(file_path)
What can this kind of testing be called? I thought it was a dry run, but it looks like dry-run actually means hand-following the code with a trace table.
I would call that a dry run: a dry run is one that has no effect. Lots of programs have dry-run modes that print out what they would do, so you can review the output before committing to the operation.
it looks like dry-run actually means hand-following the code with a trace table.
I have never seen that definition.
Dry run is definitely an acceptable term. Still, I think this kind of testing – manually commenting certain lines in and out – is no longer state of the art. A better alternative is to embed the code in a function where you can control whether the deletion happens or not, i.e.:
def delete_files(file_paths, log_only=False):
    for file_path in file_paths:
        if log_only:
            print('deleting', file_path)  # or log somewhere else
        else:
            delete(file_path)
One can call this function inside an automated test (with log_only set to True, and some extra code to validate that only the correct files would be deleted). Such a test is something I would call a unit test with file system access disabled, or maybe a unit test in dry-run mode.
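A minimal sketch of such a test, assuming the delete_files function above and using pytest's built-in capsys fixture to capture the printed output:

def test_delete_files_dry_run(capsys):
    # log_only=True: the function only reports, nothing is deleted
    delete_files(['a.txt', 'b.txt'], log_only=True)
    output = capsys.readouterr().out
    assert output == 'deleting a.txt\ndeleting b.txt\n'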
As mentioned by @chepner in a comment, a more streamlined approach is to provide the delete function as a parameter:
def delete_files(file_paths, delete_func=delete):
    for file_path in file_paths:
        delete_func(file_path)
In unit tests, just call
delete_files(file_paths, lambda x: print('deleting', x))
or, even better, pass a function that collects the file names in a list, so it is easy to check the content of the list afterwards (sketched below).
(I did not test this code, hope I got the Python syntax right).
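For the list-collecting variant, a minimal sketch (again assuming the parameterised delete_files above) could be:

def test_delete_files_collects_expected_paths():
    deleted = []
    # list.append works directly as the injected delete function
    delete_files(['a.txt', 'b.txt'], delete_func=deleted.append)
    assert deleted == ['a.txt', 'b.txt']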
Since the OP and comments do raise the question “What consensus has the community arrived at?”, I think it’s worth mentioning a couple of examples that I come across regularly.
--dry-run is used by kubectl (and, following it, helm) to give a report of what actions would be taken, without actually changing anything in the Kubernetes cluster.
--check is used by isort and black (formatters in the Python ecosystem) to avoid reformatting your code while still getting a report (and an error code) if any files do not comply with the code standards. This is useful in CI, where the changes would not be persisted anyway.
The Pantsbuild tool also uses --check when running the pants tailor command to autogenerate build metadata. Using --check gives a report of what metadata would be generated, without actually making any changes. Again, this is useful in CI to cause a failure if something was missed.
To make this feel more like a good answer instead of just a random list: I do think there is value in following community standards, however roughly defined they may be. Seeing how some fairly large, well-known projects [no data to cite, just my impression] handle this type of behaviour should help you pick terminology in your own project that meets the expectations of your users (here, “users” probably means other developers on your project).
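If you adopt the --dry-run convention in your own Python tool, a minimal sketch with argparse (reusing the hypothetical delete_files function from the earlier answer) might look like:

import argparse

def main():
    parser = argparse.ArgumentParser(description='Delete the given files.')
    parser.add_argument('files', nargs='+')
    parser.add_argument('--dry-run', action='store_true',
                        help='report what would be deleted, without deleting')
    args = parser.parse_args()
    delete_files(args.files, log_only=args.dry_run)

if __name__ == '__main__':
    main()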
A term no one else has mentioned yet is dummy run.
That can mean a run which doesn’t process any ‘real’ data (maybe because it’s not given any, or is given only dummy (test) data, or is configured not to process it).
It’s not really a technical term, but it is widely used in general English; for example, Wiktionary defines it as “a trial or practice before the real attempt”, which seems to apply well here. So it should be easily understood.
This sounds like Kent Beck’s Log String pattern. BTW, I prefer your suggestion of overriding your delete method with a logging method. Put it under the control of a command-line parameter, so you aren’t testing a slightly different build of the code.
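A minimal sketch of the Log String idea (hypothetical names; the pattern appends each action to a string and makes one assertion over the whole sequence):

class LoggingDeleter:
    """Test double that records deletions instead of performing them."""
    def __init__(self):
        self.log = ''

    def delete(self, file_path):
        self.log += f'deleted {file_path}\n'

def test_deletes_only_expected_files():
    deleter = LoggingDeleter()
    for path in ['a.txt', 'b.txt']:
        deleter.delete(path)
    # one assertion verifies both which files were targeted and their order
    assert deleter.log == 'deleted a.txt\ndeleted b.txt\n'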