OO Project Problems


I have been a member of many C++ and Java projects over 14 years, ranging in size from 2 or 3 people up to 6, 15 and 50. I have noticed some common problems that occur again and again, substantially increasing the cost and reducing the reliability of the systems. These problems are avoidable, but project directors seem incapable of effective action to manage these risks, and project planners have inadequate flexibility in scheduling resources. Also surprising was that quality was largely unrelated to project cost. When projects were sub-contracted to a software or project management consultancy, the policy of the three wise monkeys seemed to dominate: knowledge of a risk of severe operational difficulties for the end-client would not be communicated to the client.

The OO Software Development Lifecycle

"An object-oriented approach moves much of the software development effort up to the analysis phase of the lifecycle. It is sometimes disconcerting to spend more time during analysis and design, but this extra effort is more than compensated by the faster and simpler implementation." Object Oriented Modelling and Design / James Rumbaugh et al., 1991, p 146
"A larger fraction [50%] is spent on the design phase; a comparably smaller fraction [30%] of the overall time is spent on implementation and testing." Designing Object Oriented Software / Rebecca Wirfs-Brock et al., 1990, p 8

Many programmers, when presented with a clear error-free design, are capable of extraordinary rates of code production. It is my experience that many of the project problems would have been resolved by spending more time on analysis and design.

"A defect introduced during requirements specification will cost 100-1000 times more to correct when the system is in the testing phase than it would cost to fix during requirements specification." Object Oriented Software Engineering / Ivar Jacobson et al., 1992, p 467

It is my experience that the requirements specification contains errors and omissions. The errors could have been reduced by the use of analysis according to Jacobson's OOSE method or Rational's RUP process. I also found that since the requirements were written by senior staff, they were not subject to critical review by other suitably skilled persons. Non-functional and operational requirements need more design attention than they are usually given.

Programmers are versatile and are paid by the hour (or up to £1250 per day for a sub-contracting company), so all the errors and difficulties are worked around, at extra cost to the client.

Implementation bad habits

A very common mistake is the failure of designers and programmers to recognise when it is appropriate to use the design patterns detailed in the book Design Patterns. Most large projects should use almost all of these patterns. The essence of most design patterns is the separation out of those aspects of an object that are likely to change or be customised.
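As an illustration, here is a minimal sketch of the Strategy pattern, one of the Design Patterns examples of separating out the aspect that varies. The class names (PricingStrategy, Order) are hypothetical, chosen only for the sketch:

```cpp
#include <cassert>
#include <memory>

// The aspect likely to change (the pricing rule) is pulled out into
// its own interface instead of being hard-coded into Order.
class PricingStrategy {
public:
    virtual ~PricingStrategy() = default;
    virtual double price(double base) const = 0;
};

class RegularPricing : public PricingStrategy {
public:
    double price(double base) const override { return base; }
};

class DiscountPricing : public PricingStrategy {
public:
    double price(double base) const override { return base * 0.9; }
};

// The context delegates the variable behaviour; adding a new pricing
// rule needs no change to Order itself.
class Order {
    std::unique_ptr<PricingStrategy> strategy_;
public:
    explicit Order(std::unique_ptr<PricingStrategy> s)
        : strategy_(std::move(s)) {}
    double total(double base) const { return strategy_->price(base); }
};
```

The same shape underlies many of the other patterns: a stable context class, and an interface isolating the part that customers or maintainers will want to vary.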

Domain specific modelling is not usually performed in enough detail. More time should be spent with the domain experts teasing out the inheritance, relationships and behaviours of domain specific objects and concepts.

Implementation reuse by means of inheritance and delegation was not used enough. Instead the source code was copied and then edited. I have seen up to 7 classes of FileDialog created using the cut-and-paste technique. The use of Jacobson's modelling technique of distinguishing Interface, Control, and Entity objects would help to permit greater implementation re-use.
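To sketch the alternative to cut-and-paste: each dialog variant can delegate to one shared implementation object and add only what differs. The names here (FileChooserCore and the two dialog classes) are invented for the sketch, not taken from any real toolkit:

```cpp
#include <cassert>
#include <string>

// The shared behaviour lives in one place...
class FileChooserCore {
public:
    std::string choose(const std::string& filter) const {
        return "selected:" + filter;   // stand-in for real dialog logic
    }
};

// ...and each variant delegates to it, instead of the whole dialog
// being copied and edited seven times over.
class OpenFileDialog {
    FileChooserCore core_;
public:
    std::string show() const { return core_.choose("*.txt"); }
};

class SaveFileDialog {
    FileChooserCore core_;
public:
    std::string show() const { return core_.choose("*.*"); }
};
```

A fix to FileChooserCore now reaches every variant at once, whereas with copied source the same bug must be found and fixed in each copy.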

Compile and link dependencies between source modules can quickly become unmanageable. In some small projects this means that a complete recompile and relink of all the source is needed to get a usable executable. In extremely large projects, it may take a week just to get one clean integration build. It helps to use design patterns to separate interface from implementation, and also to allow a single object to present several smaller interfaces.
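One common C++ technique for breaking such dependencies is the "pimpl" (pointer-to-implementation) idiom: clients see only a small header, and the private details live behind an opaque pointer, so changing them does not force clients to recompile. The Parser class below is an invented example; in a real project the two halves would sit in separate .h and .cpp files:

```cpp
#include <cassert>
#include <memory>
#include <string>

// parser.h -- all that clients ever need to see.
class Parser {
public:
    Parser();
    ~Parser();
    int count(const std::string& text) const;
private:
    struct Impl;                  // defined only in the .cpp file
    std::unique_ptr<Impl> impl_;
};

// parser.cpp -- these details can change without touching clients.
struct Parser::Impl {
    int tokens(const std::string& t) const {
        int n = t.empty() ? 0 : 1;
        for (char c : t) if (c == ' ') ++n;   // crude word count
        return n;
    }
};

Parser::Parser() : impl_(std::make_unique<Impl>()) {}
Parser::~Parser() = default;
int Parser::count(const std::string& text) const {
    return impl_->tokens(text);
}
```

The cost is one extra indirection per call; the gain is that an edit to Parser::Impl triggers recompilation of one translation unit rather than of every module that includes the header.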

Inter-object "ownership" and "reference" relations quickly become confused. The result is that responsibility for deleting an object is not implemented, and the executable process leaks memory. When you try to start deleting objects, you sometimes find that it is not possible to determine which other objects still reference them; then you get crashes. Memory management is such a chore in C++ that much of the time you should not use dumb pointers. (Use smart pointers or Java (Why Java?).) It becomes almost impossible to manage memory by hand once all the exception and error code paths have to be considered.
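Modern C++ smart pointers let the design distinguish owning from non-owning relations directly in the types: shared_ptr expresses ownership, weak_ptr expresses a reference that can never dangle or prevent deletion. A small sketch, with invented Document/Page/Bookmark names:

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Page { int number = 0; };

struct Document {
    std::vector<std::shared_ptr<Page>> pages;   // owning references
};

struct Bookmark {
    std::weak_ptr<Page> target;                 // non-owning reference
    bool valid() const { return !target.expired(); }
};

// Create a page, reference it from a bookmark, then destroy the
// owning Document: the bookmark safely observes the deletion instead
// of dangling and crashing.
bool bookmark_survives_deletion() {
    Bookmark mark;
    {
        Document doc;
        doc.pages.push_back(std::make_shared<Page>());
        mark.target = doc.pages.back();
        if (!mark.valid()) return false;  // page still owned here
    }                                     // Document destroyed, page freed
    return !mark.valid();                 // weak_ptr reports expiry safely
}
```

With dumb pointers the Bookmark in this scenario would hold a dangling pointer after the block exits, and the crash would surface far from its cause.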

Performance problems

There is often a performance problem that only becomes visible when collection sizes grow. Such problems are not difficult to find and fix, provided that testing specifically takes collection-size factors into account. I have found that the system architect or designer should check for potential architectural speed limitations, and that prototypes of the most important use-cases should be instrumented and timed to characterise the delays and throughputs. The programmer should not usually attempt to optimise before the testers have shown it to be necessary: optimisation is a prime cause of error, and a revised design may produce larger gains than merely altered code. If optimisation is ever performed by the programmer, the non-optimised version should also remain available, switched at run-time or compile time. Screen redraw time is often a limiting performance factor. It can be improved by using incremental asynchronous update techniques, but multi-threading and object dependency tracking are issues which must have been considered at design time.
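A typical instance of the collection-size trap is a membership check that scans a list. Both versions below give identical answers on the small collections used in unit tests; only the scan degrades as the collection grows, which is why tests must vary collection size deliberately:

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_set>
#include <vector>

// Linear scan: fine for ten elements, O(n) per lookup as data grows.
bool contains_linear(const std::vector<int>& v, int key) {
    return std::find(v.begin(), v.end(), key) != v.end();
}

// Hashed lookup: roughly constant time regardless of collection size.
bool contains_hashed(const std::unordered_set<int>& s, int key) {
    return s.count(key) != 0;
}
```

Note that this is a design-level fix (choosing a different container) rather than tuned code, in line with the point above that a revised design may produce larger gains than merely altered code.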


Concurrency problems

Most programs require that two or more activities occur concurrently. It is essential that issues of dependency between objects and resource locks are fully considered at the design stage. All projects that I have worked on suffered from update/locking problems at a late stage.
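The classic late-stage locking failure is two threads taking the same pair of locks in opposite orders and deadlocking. In modern C++ the design can rule this out by acquiring both locks atomically with std::scoped_lock; the Account/transfer names below are illustrative only:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

struct Account {
    std::mutex m;
    int balance = 0;
};

// scoped_lock acquires both mutexes with a deadlock-avoidance
// algorithm, so the order of the arguments no longer matters.
void transfer(Account& from, Account& to, int amount) {
    std::scoped_lock lock(from.m, to.m);
    from.balance -= amount;
    to.balance += amount;
}

// Opposing concurrent transfers: with naive per-account locking this
// pattern deadlocks; here it completes and conserves the total.
int concurrent_transfers() {
    Account a, b;
    a.balance = 100;
    b.balance = 100;
    std::thread t1([&] { for (int i = 0; i < 1000; ++i) transfer(a, b, 1); });
    std::thread t2([&] { for (int i = 0; i < 1000; ++i) transfer(b, a, 1); });
    t1.join();
    t2.join();
    return a.balance + b.balance;   // total money is conserved: 200
}
```

The point is the one made above: the locking discipline is a design-stage decision, not something that can be bolted on once the update/locking problems surface late in testing.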


Reviews

All projects that I have worked on said that they would perform reviews. In practice, review is just not done until it is too late to avoid disaster. Review the design before you write the code. Re-review everything, including the concept of operations, requirements, tools and processes. An error or omission is still an error even if discovered after signoff or after a phase has nominally been finished.

Solution Policy

1) Use a software development process that is designed to reduce the possibility of errors, omissions or thoughtless requirements (e.g. the Rational Unified Process, RUP).
2) Ensure traceability of artifact features to previous design or requirements documents. It must be possible to follow such links quickly.
3) Make all documents and artifacts available to, and annotatable by, anyone capable of spotting an error or omission (originators, developers, testers, users).
4) Encourage a culture of actively hunting down and eliminating any error, omission or thoughtless requirement that might produce a project failure.
5) Use an electronic bug-tracking system open to all participants and covering all artifacts and processes. Issues must not be lost or be unassigned.
6) Inaction, or failure to research or resource a needed action, should not be a comfortable option for participants.
