How can I write functions that are reusable without sacrificing performance? I keep running into situations where I want to write a function in a way that makes it reusable (e.g. it makes no assumptions about the data environment), but knowing the overall flow of the program, I know it isn't the most efficient method. For example, if I want to write a reusable function that validates a stock code, I can't just assume that the recordset is open. But if I open and close the recordset every time the function is called, the performance hit when looping through thousands of rows could be huge.
So for performance I might have:
Function IsValidStockRef(ByVal strStockRef As String, rstStockRecords As ADODB.Recordset) As Boolean
    ' Assumes the caller has already opened the recordset;
    ' restart the search from the first record on each call
    rstStockRecords.MoveFirst
    rstStockRecords.Find "stockref='" & strStockRef & "'"
    IsValidStockRef = Not rstStockRecords.EOF
End Function
But for reusability I would need something like the following:
Function IsValidStockRef(ByVal strStockRef As String) As Boolean
    ' Opens and closes its own recordset on every call
    Dim rstStockRecords As ADODB.Recordset
    Set rstStockRecords = New ADODB.Recordset
    rstStockRecords.Open strTable, gconnADO
    rstStockRecords.Find "stockref='" & strStockRef & "'"
    IsValidStockRef = Not rstStockRecords.EOF
    rstStockRecords.Close
    Set rstStockRecords = Nothing
End Function
I'm concerned that the performance impact of opening and closing that recordset, when called from within a loop over thousands of rows/records, would be severe, but using the first method makes the function less reusable.
What should I do?
You should do whatever yields the greater business value in this situation.
Writing software is always a trade-off. Almost never are all valid goals (maintainability, performance, clarity, conciseness, security etc. etc.) completely aligned. Do not fall into the trap of those short-sighted people who consider one of these dimensions as paramount and tell you to sacrifice everything to it.
Instead, understand what risks and what benefits each alternative offers, quantify them, and go with the one that maximizes the outcome. (You don't have to actually make numerical estimates, of course. It's enough to weigh factors such as "using this class means locking us into that hash algorithm, but since we're not using it to guard against malicious attack, only for convenience, this one is good enough that we can just ignore the 1:1,000,000,000 chance of an accidental collision".)
The most important thing is to remember that these are trade-offs: no single principle justifies sacrificing everything else to satisfy it, and no decision, once taken, need stand eternally. You may have to revise it in hindsight when circumstances change in a way you didn't foresee. That's a pain, but not nearly as painful as making the same decision without hindsight.
Neither of these seems more reusable than the other; they're just at different abstraction levels. The first is for calling code that understands the stock system intimately enough to know that validating a stock reference means searching through a Recordset with some kind of query. The second is for calling code that just wants to know whether a stock code is valid and has no interest in concerning itself with how you'd verify such a thing.
But as with most abstractions, this one is "leaky". In this case, the abstraction leaks through its performance: the calling code can't completely ignore how validation is implemented, because if it did, it might call into that function thousands of times as you described and seriously degrade overall performance.
Ultimately, if you have to choose between poorly abstracted code and unacceptable performance, you have to choose the poorly abstracted code. But first, you should look for a better solution: a compromise that maintains acceptable performance and presents a decent (if not ideal) abstraction. Unfortunately I don't know VBA very well, but in an OO language, my first thought would be to give calling code a class with methods like:
BeginValidation()
IsValidStockRef(strStockRef)
EndValidation()
Here your Begin... and End... methods do the one-time lifecycle management of the record set, while IsValidStockRef matches your first version but uses the record set which the class itself has taken responsibility for, rather than having it passed in. Calling code would then call the Begin... and End... methods outside of the loop, and the validation method inside.
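To make that concrete, here is a rough Java sketch of the shape I mean; the class name, table, and column are illustrative guesses on my part, not anything from the question:

import java.sql.*;

public class StockValidator {
    private final Connection connection;
    private PreparedStatement lookup;  // prepared once, reused per call

    public StockValidator(Connection connection) {
        this.connection = connection;
    }

    // One-time setup: prepare the lookup query a single time.
    public void beginValidation() throws SQLException {
        lookup = connection.prepareStatement(
                "SELECT 1 FROM stock WHERE stockref = ?");
    }

    // Matches the first version: no per-call setup or teardown.
    public boolean isValidStockRef(String stockRef) throws SQLException {
        lookup.setString(1, stockRef);
        try (ResultSet rs = lookup.executeQuery()) {
            return rs.next();
        }
    }

    // One-time teardown.
    public void endValidation() throws SQLException {
        lookup.close();
    }
}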
Note: This is only a very rough illustrative example, and might be considered a first pass at refactoring. The names could probably use tweaking, and depending on the language there may be a cleaner or more idiomatic way to do it (C#, for example, could use the constructor to begin and Dispose() to end). Ideally, code that just wants to check whether a stock ref is valid shouldn't itself have to do any lifecycle management at all.
This represents a slight degradation of the abstraction we're presenting: now calling code needs to know enough about validation to understand that it requires some kind of setup and teardown. But in return for that relatively modest compromise, we have methods that calling code can use easily, without hurting our performance.
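Calling code might then look something like this sketch, where refsToCheck stands in for whatever collection the loop runs over:

StockValidator validator = new StockValidator(connection);
validator.beginValidation();
try {
    for (String ref : refsToCheck) {  // thousands of iterations
        if (!validator.isValidStockRef(ref)) {
            System.out.println("Invalid stock ref: " + ref);
        }
    }
} finally {
    validator.endValidation();  // teardown happens exactly once
}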
For a long time, I used to implement a complicated system of checks to be able to use database transactions. The transaction logic goes as follows: open a transaction, perform the database operations, and roll back upon error or commit on success. The complication comes from what happens when you want an additional operation performed within the same transaction. You'd either need to write an entirely separate second method which does both operations, or call your original method from a second one, opening a transaction only if one hasn't already been opened and committing/rolling back changes only if you were the one who opened the transaction.
For example:
public void method1() throws SQLException {
    // Guard duplicated in every method: open only if not already open
    boolean selfOpened = false;
    if (!transaction.isOpen()) {
        selfOpened = true;
        transaction.open();
    }
    try {
        performDbOperations();
        method2();
        if (selfOpened)
            transaction.commit();
    } catch (SQLException e) {
        if (selfOpened)
            transaction.rollback();
        throw e;
    }
}

public void method2() throws SQLException {
    // The same boilerplate all over again
    boolean selfOpened = false;
    if (!transaction.isOpen()) {
        selfOpened = true;
        transaction.open();
    }
    try {
        performMoreDbOperations();
        if (selfOpened)
            transaction.commit();
    } catch (SQLException e) {
        if (selfOpened)
            transaction.rollback();
        throw e;
    }
}
Please note, I’m not advocating the above code by any means. This should serve as an example of what not to do!
It seemed silly to create a second method that performs the same logic as the first plus something extra, yet I wanted to be able to call into the database API section of the program and seal off problems there. However, while this partially solved my problem, every method I wrote involved adding this verbose logic of checking whether a transaction was already open, and committing/rolling back changes only if my method had opened it.
The problem was conceptual: I shouldn't have attempted to embrace every possible scenario. The proper approach was to house the transaction logic in a single method that takes a second method as a parameter to perform the actual database work. That work assumes the transaction is open and does not even perform a check. The worker methods can then be called in combination without being cluttered with unnecessary transaction logic.
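In Java, that refactoring might look roughly like the sketch below. The DbWork interface and the inTransaction method are names I've made up for illustration, and transaction is the same undeclared helper object as in the example above.

@FunctionalInterface
interface DbWork {
    void run() throws SQLException;  // the actual database logic
}

// All transaction logic lives here, exactly once.
public void inTransaction(DbWork work) throws SQLException {
    transaction.open();
    try {
        work.run();
        transaction.commit();
    } catch (SQLException e) {
        transaction.rollback();
        throw e;
    }
}

// Workers assume an open transaction and perform no checks at all.
public void doAllOperations() throws SQLException {
    performDbOperations();
    performMoreDbOperations();
}

// Calling code composes the pieces:
// inTransaction(this::doAllOperations);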
The reason I mention this is that my mistake was assuming my method needed to work in every situation. In doing so, not only was my called method checking whether a transaction was open, but so were the methods it called. In this case it is not a major performance hit, but if, say, I needed to verify the existence of a record in the database before proceeding, I would be repeating that check in every method that requires it, when I should simply have made it the caller's responsibility to ensure the record exists. If the method is called without that precondition holding, the behaviour is undefined and you need not worry about what happens.
Rather, you should provide plenty of documentation and state what you expect to be true before a call is made to your method. If it is important enough, put it in a comment before the method so there can be no mistake (Javadoc provides nice support for this sort of thing in Java).
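For example, a documented precondition might look something like this; the method and its name are purely illustrative:

/**
 * Updates the stock record identified by the given reference.
 * <p>
 * Precondition: a transaction is already open, and a record with
 * {@code stockRef} exists in the database. Behaviour is undefined
 * otherwise.
 *
 * @param stockRef the stock reference of the record to update
 * @throws SQLException if a database error occurs
 */
public void updateStock(String stockRef) throws SQLException {
    // ... database logic that relies on the precondition ...
}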
I hope that helps!
You could have two overloaded functions. That way you can use both of them according to the situation.
You can never optimize for everything (I've never seen it happen), so you have to settle for something. Choose what you believe is more important.
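VBA itself doesn't support overloading (though an Optional parameter achieves much the same effect there), but in Java the idea might look like this rough sketch, with illustrative table and column names:

// Reusable overload: opens and closes its own resources, then delegates.
public boolean isValidStockRef(String stockRef, Connection conn) throws SQLException {
    try (PreparedStatement lookup = conn.prepareStatement(
            "SELECT 1 FROM stock WHERE stockref = ?")) {
        return isValidStockRef(stockRef, lookup);
    }
}

// Performance overload: the caller prepares the statement once and calls this in a loop.
public boolean isValidStockRef(String stockRef, PreparedStatement lookup) throws SQLException {
    lookup.setString(1, stockRef);
    try (ResultSet rs = lookup.executeQuery()) {
        return rs.next();
    }
}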
Two functions: one opens the recordset and passes it to a data-analysis function.
The first can be bypassed if you already have an open recordset. The second can assume it will be passed an open recordset, ignore where it came from, and process the data.
Then you have both performance and reusability!
Optimisation (micro-optimisation aside) is directly at odds with modularity.
Modularity works by isolating code from its global context, whereas performance optimisation exploits global context to minimise what the code has to do. Modularity is the benefit of low coupling, whereas (the potential for) very high performance is the benefit of high coupling.
The answer is architectural. Consider the pieces of the code that you are going to want to reuse: perhaps the price calculation component, or the configuration validation logic.
Then write the code that interacts with that component with reusability in mind. Within a component, where no one can ever use just part of the code, you can optimise for performance, since you know no one else will be using it.
The trick of it is determining what your components are.
tl;dr: between components, write with modularity in mind; within components, write with performance in mind.