An optimization problem is the problem of finding the best solution from all feasible solutions — well, that’s what Wikipedia says about optimization.
For the most part, in the programming realm, optimization feels like an art: a balancing act of choosing what to optimize and what not to, when to optimize and when not to.
Errr What Again ?
Alright, let’s try that again. For me, optimization is not the holy grail; it depends on what you want to achieve at the end of the day, and it often leads to self-sabotage if not executed carefully.
Premature optimization often leads to, or is a result of:
- Less clear code.
- Poor code architecture/arrangements.
- Less secure coding.
- Wasted programming hours.
Fortunately, we live in a world where most technologies do not need optimization: they are tuned to perform well in most cases right out of the box. Because of that, most systems may not need optimization for extended periods of time.
About 97% of the time you will not need optimization, and for the remaining 3% you should only be concerned with it if it is something that can be quantified. Quantification can be anything from response time, cost incurred, CPUs used, RAM requirements, thread requirements, and anything in between.
Often this quantification is done with a profiler, but that need not always be the case. If you don’t have access to one, chances are your engineers are already aware of where the performance bottlenecks are.
Keep in mind there will be instances where bottlenecks appear in areas of your code you would never have suspected, and having a profiling step is going to help you in the long run.
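To make that concrete, here is a minimal sketch of a profiling step in Python using the standard-library `cProfile` and `pstats` modules. The `slow_sum` function and the report formatting are illustrative; the point is that the report tells you where time actually went, rather than where you guessed it did.

```python
import cProfile
import io
import pstats


def slow_sum(n):
    # A deliberately naive loop, standing in for your real workload.
    total = 0
    for i in range(n):
        total += i * i
    return total


def profile(func, *args):
    """Run func under cProfile and return (result, stats report as text)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args)
    profiler.disable()
    buffer = io.StringIO()
    # Sort by cumulative time and print the top 5 entries.
    pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
    return result, buffer.getvalue()


result, report = profile(slow_sum, 100_000)
print(report)  # shows which functions consumed the time
```

Running this regularly (or wiring it into a CI job) is what turns “I think this part is slow” into a measured fact.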
The idea, essentially, is to avoid early micro-optimizations. Optimizations at a larger scale, i.e. macro-optimizations (things like choosing an O(log N) algorithm instead of an O(N²) one), are always worth the effort and should be incorporated in the early stages.
A good example is choosing which data structure to work with when using Redis as a cache. There are tradeoffs for each data structure, and your decision will mean making compromises based on the results you want to achieve at the end of the day.
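A sketch of one such tradeoff, using plain Python dicts as a stand-in for Redis (the key names and the user record are hypothetical): caching a record as a single serialized blob (a Redis STRING via SET/GET) makes whole-object reads cheap but forces a full round-trip to change one field, while splitting it into fields (a Redis HASH via HSET/HGET/HINCRBY) makes single-field updates cheap at the cost of more work to read the whole object.

```python
import json

# Hypothetical cached user record.
user = {"id": 42, "name": "Ada", "visits": 7}

# Option 1: one serialized blob (analogous to a Redis STRING).
# Updating a single field means deserializing and rewriting everything.
blob_cache = {}
blob_cache["user:42"] = json.dumps(user)
loaded = json.loads(blob_cache["user:42"])
loaded["visits"] += 1
blob_cache["user:42"] = json.dumps(loaded)

# Option 2: one entry per field (analogous to a Redis HASH).
# A single-field increment touches only that field.
hash_cache = {f"user:42:{key}": value for key, value in user.items()}
hash_cache["user:42:visits"] += 1

assert json.loads(blob_cache["user:42"])["visits"] == 8
assert hash_cache["user:42:visits"] == 8
```

Neither option is “correct” in the abstract; the right choice depends on whether your access pattern is dominated by whole-object reads or by per-field updates.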
I think this is the easy part, and something I have been discussing for a while now. To summarize, I often follow a three-step rule to determine when optimization is something we should consider doing:
1. Get the code working.
2. Verify that the code is correct.
3. Make optimizations only when they can be quantified (in the context of making things faster, cost optimization, resource optimization, etc.). This is often done after using profiling tools suitable for the technology in question.
The evaluation in our third step should start with defining clear goals, such as the performance threshold we want to achieve. Once that is done, we can select a plan of action, determine the root elements behind the need for optimization, and approach it accordingly.
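A minimal sketch of what “defining a clear goal” can look like in practice: pick a threshold, measure the code path against it, and only plan optimization work when the number actually breaches the goal. The 50 ms threshold and the `handler` function here are illustrative, not a recommendation.

```python
import time

# Hypothetical goal: this code path must finish in under 50 ms.
THRESHOLD_SECONDS = 0.050


def handler():
    # Stand-in for the real code path under evaluation.
    return sum(range(10_000))


start = time.perf_counter()
handler()
elapsed = time.perf_counter() - start

# The decision to optimize is now tied to a measured number, not a hunch.
needs_optimization = elapsed > THRESHOLD_SECONDS
print(f"elapsed={elapsed:.4f}s, needs_optimization={needs_optimization}")
```

In a real project these thresholds would typically live in a load-test or monitoring alert rather than inline asserts, but the principle is the same: the goal is written down and checked, not assumed.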
We need to understand that every technology is different and will need a different strategy and approach to optimization. There is no silver bullet, and optimization can happen at different levels of granularity.
The ideal approach would be to avoid the need for optimization altogether by including easy-to-adopt best practices.
For example: