What is your view of “method-based” delay analysis? Indispensable? A good dinner topic? An elaborate deception? Many in our industry still take the latter view.
Why do people remain sceptical (to put it mildly) about the use of delay methodologies? One reason is that we still hear stories about analysts disappearing into a theoretical world, disconnected from reality or sense. So the question keeps coming up: are any of the methodologies actually better than a “common sense” approach? The question arose again recently in an Australian court case – White Constructions Pty Ltd v PBS Holdings Pty Ltd.
Here, a housing developer (White) engaged sewer designers and water services coordinators (“the designers”) to produce designs for a sewerage system that would be approved by the relevant authority, Sydney Water. The initial designs failed to get approval. Although a redesigned system was later approved, White sued the designers for failing to produce compliant designs within a reasonable time, which (White said) caused additional payments to contractors of AU$1.9 million.
Note that this action was not a traditional delay/extension of time claim; it was a claim for general damages against the designers. Nevertheless, the same basic questions had to be answered, namely: did the delay in producing a compliant design cause delay to the completion of the build and, if so, by how much?
Each party engaged a delay expert to produce a report. White’s expert used an “as-planned-versus-as-built windows” analysis which concluded that the design issue had caused a delay of 240 days. The designers’ expert said that the delay was only 19 days, using a “collapsed as-built” method.
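How can two recognised methods, applied to the same project, produce answers of 240 days and 19 days? The following minimal sketch (in Python, using invented activities and durations chosen purely for illustration – nothing here reflects the actual evidence in the case) shows the core arithmetic behind each approach on a toy two-path programme:

```python
# Toy as-built programme: two parallel paths of (activity, duration in days).
# All names and figures below are invented for illustration only.
path_with_event = [("groundworks", 60), ("sewer redesign", 240), ("build", 100)]
parallel_path   = [("groundworks", 60), ("other works", 321)]
as_built_paths  = [path_with_event, parallel_path]

def completion(paths, exclude=None):
    """Completion (in days) is driven by the longest path, optionally
    with one activity removed from every path that contains it."""
    return max(sum(d for act, d in p if act != exclude) for p in paths)

# "As-planned vs as-built" style: measure total slippage against the plan.
planned_completion = 160                          # hypothetical planned duration
as_built_completion = completion(as_built_paths)  # 400
print(as_built_completion - planned_completion)   # 240 days of slippage

# "Collapsed as-built" style: remove the delay event from the as-built
# programme and measure how much earlier completion "collapses" to.
collapsed = completion(as_built_paths, exclude="sewer redesign")  # 381
print(as_built_completion - collapsed)            # 19 days attributable to the event
```

The two figures differ so dramatically because the methods answer different questions: one measures total slippage against the original plan, while the other isolates how much earlier the works would have finished absent the single event – here, very little, because a parallel path of other work would have delayed completion anyway.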
The judge rejected both reports, describing them as “impenetrable”, and commissioned a report from his own appointed expert, stating that answering the question required “close consideration and examination of the actual evidence of what was happening on the ground”.
The judge also noted that both parties’ experts had used methods recognised in the Society of Construction Law Delay and Disruption Protocol (SCL Protocol). He decided, however, that such recognition did not confer any special status on those methods as the only appropriate ones to use, and nor was any other method disqualified by not being mentioned in the protocol. He said that “the Court should apply the common law, common sense approach to causation”.
Having accepted both the method and the conclusions of the court-appointed expert, the judge dismissed White’s action on the basis that White had not produced evidence that the design delay had actually delayed specific activities critical to completion of the works.
Many commentators are presenting this decision as a blow to “method-based” delay analysis and to the SCL Protocol – echoing the criticisms I mentioned above that, at best, all such methods are impenetrable. At worst, they are “Dark Arts”: intentionally made opaque in order to bamboozle ordinary construction folk and to maintain an industry of experts.
Let’s consider whether this decision supports that view. I have not read any of the three delay reports, but I agree that, at least in the judge’s view, the litigants do appear to have fallen into some familiar traps. Neither of the parties’ reports was clear enough in its logic, or sufficiently connected to the underlying facts, to convince the judge to rely on it. Most stark was the fact that two different methods applied to the same facts (both of them “recognised” methods) produced such wildly different answers: one assessment of the delay was just eight per cent of the other. For supporters of methodological analysis, that result is “not a good look”, as the Australians say! It can only bolster the sceptics’ view that this is just an expensive game.
But in fact, White’s claim failed for simple lack of evidence: the alleged critical path delays shown in White’s expert’s report were assumed consequences, not backed up with sufficient records of the actual consequences.
The SCL Protocol would agree that a purely theoretical analysis should be rejected. It contains extensive guidance on the need to keep records so that delay impacts can be evidenced as matters of fact, not theory. The protocol is also clear that it neither qualifies nor disqualifies any approach; it gives helpful guidance as to which one might work best, depending on the material available to work with – records, the baseline programme, updated programmes and so on.
The judge’s complaints about the parties’ delay analyses reflect one of the main reasons for writing the protocol in the first place. It gives us (as non-programmers) some means of assessing the relative suitability of proposed methods.
The SCL Protocol is there precisely to open up the “black box” of delay analysis, the better to facilitate a “common sense” approach.
* Stuart Jordan is a partner in the Global Projects group of Baker Botts, a leading international law firm. Jordan’s practice focuses on the oil, gas, power, transport, petrochemical, nuclear and construction industries. He has extensive experience in the Middle East, Russia and the UK.