Complexity?
Software developers tend to think in terms of code, database administrators in schemas, operations teams in configuration, and systems teams in hardware and networks. A software architect needs to consider all of these and, in particular, where the risks and complexities lie.
A (non-trivial) system will involve the following:
- Data
- Code
- Configuration
- Virtualisation
- Hardware servers
- Network connections
- Physical location attributes
- Supporting infrastructure (from UPS to fire suppression)
These all interact with and affect each other, and each has its own level of complexity. Complexity is difficult to measure, although many attempts have been made: function points for requirements, cyclomatic or Halstead complexity metrics for code, complexity classes for algorithms, various statistical measures for data, and even the application of graph theory for networks. Different parts of the system use different metrics to measure this complexity, so you cannot compare them directly: is the network that connects a grid of servers more complex than the hardware set-up of each server, and is that in turn less complex than the software running on each node? The only meaningful basis for comparison is probably the cost of design, implementation and maintenance.
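To make one of these metrics concrete, here is a deliberately crude sketch of McCabe's cyclomatic complexity (decision points plus one). The class name and regular expression are my own invention, and a real tool would parse the syntax tree rather than count keywords, which over-counts matches inside strings and comments:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Crude approximation of cyclomatic complexity: count branch points
// in a Java snippet, then add one for the single linear path.
public class CrudeComplexity {

    private static final Pattern DECISION = Pattern.compile(
        "\\b(if|for|while|case|catch)\\b|&&|\\|\\||\\?");

    static int cyclomatic(String source) {
        Matcher m = DECISION.matcher(source);
        int decisions = 0;
        while (m.find()) {
            decisions++;
        }
        return decisions + 1;  // one path through, plus one per decision
    }

    public static void main(String[] args) {
        String method =
            "int clamp(int x, int lo, int hi) {" +
            "  if (x < lo) return lo;" +
            "  if (x > hi) return hi;" +
            "  return x;" +
            "}";
        System.out.println(cyclomatic(method));  // prints 3: two ifs, plus one
    }
}
```

Note that even this well-defined code metric tells you nothing about the schema, the network or the hardware, which is exactly the comparison problem described above.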
Reduction or Displacement?
Sometimes tools that reduce complexity in one area simply push it into another. For example, tools that 'magically' create database schemas from classes might make the code simpler (removing the DAO/mapping code), but if the schemas they create are a mess then the complexity has just been moved, or temporarily hidden. The same is often true between configuration and code: dependency injection systems such as Spring may make the code more readable and modular, but at the expense of complex XML files.
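As a hedged illustration of that displacement, consider a JPA-style mapping (the domain classes are invented, and the behaviour described is typical of Hibernate-like ORMs; older versions use `javax.persistence` rather than `jakarta.persistence`). The classes below are tidy, but with single-table inheritance the generated schema is one wide table full of nullable columns plus a discriminator; the mapping complexity has not vanished, it has moved into the database:

```java
import jakarta.persistence.*;

// The code looks simple, but look at what the tool generates:
// one PAYMENT table holding the columns of every subclass,
// each null for rows of the other type.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
abstract class Payment {
    @Id @GeneratedValue
    Long id;
    long amountInPence;
}

@Entity
class CardPayment extends Payment {
    String cardNumber;   // null for every BankTransfer row
    String expiry;       // null for every BankTransfer row
}

@Entity
class BankTransfer extends Payment {
    String sortCode;       // null for every CardPayment row
    String accountNumber;  // null for every CardPayment row
}
```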
Some frameworks use convention-based, implicit configuration instead of explicit configuration. This reduces the amount of user-defined configuration, but has the complexity been reduced or just hidden inside the tool?
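For example, here is a minimal Spring-style sketch (class names invented, and it assumes component scanning is enabled, as in a Spring Boot application). There is no XML and no explicit wiring code; the framework finds the `@Service` classes by convention and injects the constructor argument by type. The wiring rules still exist, they just now live inside the framework rather than in a file you can read:

```java
import org.springframework.stereotype.Service;

// No <bean> definitions anywhere: Spring discovers these classes by
// scanning for the @Service annotation and, because InvoiceService has
// a single constructor, injects the one matching TaxCalculator bean.
@Service
class InvoiceService {
    private final TaxCalculator taxCalculator;

    InvoiceService(TaxCalculator taxCalculator) {
        this.taxCalculator = taxCalculator;
    }
}

@Service
class TaxCalculator {
    long taxOn(long amountInPence) {
        return amountInPence / 5;  // placeholder flat 20% rate
    }
}
```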
Virtualisation is a wonderful technology, but again it largely moves the complexity from hardware into software. There are benefits for modification management and turn-around time, but you still need an expert who knows what they are doing; the skill-set has just changed.
How do we react to it?
One of the dangers software architects face is that their background can determine how they perceive complexity and how they deal with it. It is very common to push complexity out of the area they know best and into another, because it initially appears to have been reduced. In reality it is still there, but now resides in a part of the system that is less well understood.
Conversely, there can also be a temptation to try to solve all of a system's issues in the part the architect understands best. This can add complexity, because the problems are not being solved in the most appropriate place. For example, I have seen systems containing large amounts of code to do a job that could be achieved with a single, simple action in the database, as the sketch below shows.
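A hedged illustration of that last point (the table, column and method names are invented): applying a 10% price rise to discontinued stock. The row-by-row version drags every row across the network and back; the single UPDATE lets the database do the set-based work it is designed for:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PriceRise {

    // The long way round: application code doing the database's job,
    // one network round trip per row.
    static void riseInCode(Connection conn) throws SQLException {
        try (PreparedStatement select = conn.prepareStatement(
                 "SELECT id, price FROM products WHERE discontinued = TRUE");
             PreparedStatement update = conn.prepareStatement(
                 "UPDATE products SET price = ? WHERE id = ?");
             ResultSet rs = select.executeQuery()) {
            while (rs.next()) {
                update.setLong(1, Math.round(rs.getLong("price") * 1.10));
                update.setLong(2, rs.getLong("id"));
                update.executeUpdate();
            }
        }
    }

    // The simple action in the database: one set-based statement
    // (ROUND and boolean syntax vary slightly between SQL dialects).
    static void riseInDatabase(Connection conn) throws SQLException {
        try (PreparedStatement update = conn.prepareStatement(
                 "UPDATE products SET price = ROUND(price * 1.10) "
                 + "WHERE discontinued = TRUE")) {
            update.executeUpdate();
        }
    }
}
```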
If you are working in an organisation with strong silos, there may be a temptation to push complexity from your part of the organisation into another. Again, the complexity is still present, but it is now in an area where you may not be able to effect change. I have seen projects where responsibilities and complexities were pushed out of a group, only to be hurriedly brought back in when problems needed to be solved.
Ultimately we need to accept that complexity exists in valuable IT systems; rather than trying to hide it, we should make sure it sits in the most appropriate part of the system.