More complex software systems correlate with longer lead time (the time from initial idea to user-available software) and greater fragility. They also tend to hurt usability. It should therefore be a goal to reduce the following complexity factors as far as possible.
Independent of what the codebase does, it incurs maintenance effort: in a big codebase changes take longer to implement because more code needs to be read and comprehended. Codebase size also correlates with longer builds, deployment processes and startup times, which means higher latency in your feedback loops. Encapsulation/modularization can be applied, but it only weakens the impact: you won’t reduce the effort of a 100M codebase to that of a 10K one just by having good code quality.
Decent code quality is important for implementing changes quickly (readable code, separation of concerns, DRY etc.). Good quality also includes test coverage, so regression bugs are caught early and there is a safety net during refactorings. Though business often neglects less visible code quality, it is essential for reducing lead time. In severe cases you’re at a dead end: the code is in such bad shape that it is impossible to evolve (changes carry too much side-effect risk and/or are too expensive).
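To illustrate the safety-net idea, here is a minimal sketch in Python; `slugify` is a hypothetical helper invented for this example, not something from the text:

```python
# Hypothetical helper: turn a page title into a URL slug.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def test_slugify_handles_extra_whitespace():
    # Pins down the current behavior: a refactoring that breaks
    # whitespace handling fails here instead of in production.
    assert slugify("  Hello   World ") == "hello-world"

test_slugify_handles_extra_whitespace()
```

Even a handful of such tests turns a risky refactoring into a routine change, because regressions surface immediately.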
In itself, diversity of tools (programming languages, frameworks, hardware etc.) is good, because you can choose the right one for your specific problem. But it also increases risk: know-how concentrated in single persons, legacy tech, learning curves, beta/buggy stability, upgrade/patching effort, and transitive dependencies (see the .dll and .jar nightmares).
Though distributed systems are necessary (scalability, reliability, modularization, partner integrations), they are more fragile: monitoring and deployment effort increases and security breaches become more likely. Tracing, debugging and testing effort is also higher.
One of the biggest factors is the size of your organization, because effort grows quadratically with headcount: between n people there are n(n-1)/2 possible communication paths (see also The Mythical Man-Month). Bigger organizations try to fight this by introducing hierarchies and heavyweight processes, but such structures can hinder short, quick decisions and slow everything down. In some scenarios political games emerge, which block progress considerably.
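The quadratic growth is easy to see with a few numbers, using the pairwise communication-path formula n(n-1)/2:

```python
# Number of possible pairwise communication paths in a team of n people.
def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(n, "people:", communication_paths(n), "paths")
# A team of 10 already has 45 paths; an organization of 50 has 1225.
```

Doubling the headcount roughly quadruples the coordination surface, which is why adding people rarely halves the delivery time.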
External partner integrations (most likely via APIs or batch processing) need extra investment: different release cycles must be taken into account, compatibility must be maintained, and dedicated monitoring needs to be set up. Communication is also less direct (scheduled calls, meetings, travelling).
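One common way to absorb a partner's independent release cycle is a tolerant reader. A minimal sketch, assuming a hypothetical JSON order payload (the field names are invented for illustration):

```python
import json

def parse_partner_order(raw: str) -> dict:
    """Read a partner's order payload defensively."""
    data = json.loads(raw)
    return {
        # Pick only the fields we need; unknown fields the partner
        # adds in a later release are simply ignored.
        "id": data["id"],
        # Default for fields that older partner versions omit.
        "currency": data.get("currency", "EUR"),
    }

order = parse_partner_order('{"id": 7, "new_field": true}')
```

By neither rejecting unknown fields nor requiring optional ones, the integration survives partner releases on both sides of your own cycle.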
User-Base + Traffic
A bigger user base means more effort spent on support. With larger numbers, statistically more edge cases come up and need to be handled by the software. Scalability requirements also demand more sophisticated production environments. Popular systems attract criminal activity, which you must answer with higher security investments, which in turn make the system less usable. Big applications also produce more data, which needs to be maintained (compatibility, migrations, analysis).
Being Complexity-Aware
The factors above cannot be reduced to zero (you will always meet some inherent complexity), but you should fight back:
- Prioritize, prioritize, prioritize: featuritis hurts usability and inflates codebase size. Implement only the important features; remove unneeded ones and clean up the code.
- Consolidate technologies: don’t introduce a new technology just because it was praised in the latest magazine. Evaluate it first, and maybe try it in a private project.
- Take Fowler’s advice seriously: “Don’t distribute, if you don’t have to”
- Don’t always increase team size; rather keep a small team of A-players.
- Quantify: use metrics for business success, code quality and production stability. Especially for bigger systems, gut feeling isn’t enough.
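As one example of quantifying stability, here is a sketch computing a change failure rate over deployments; the data is entirely made up for illustration:

```python
# Made-up deployment history: which releases caused a production incident.
deployments = [
    {"id": 1, "caused_incident": False},
    {"id": 2, "caused_incident": True},
    {"id": 3, "caused_incident": False},
    {"id": 4, "caused_incident": False},
]

# Change failure rate: fraction of deployments that caused an incident.
failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)
print(f"change failure rate: {failure_rate:.0%}")
```

Tracking a number like this over time makes trends visible long before gut feeling would notice them.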