One part of the debate on what technology to use for the National Broadband Network is rather lacking: reliability. We can talk about speeds, capabilities, & costs all we want, but without addressing this basic problem with the current network, we’re headed for certain disaster.
Back in 2000, Australia was at a point where the network was basically falling apart under new data services; it had become apparent that the copper was not fit for purpose when it came to data. In fact, when I worked for Telstra in 2004, there were two wireline data services available: the unreliable ADSL(1), & the far more reliable ISDN2.
While ISDN is inherently more reliable due to its higher voltages (110v vs 50v for PSTN), the thick copper used (generally 0.60mm & above) makes a big difference. I remember assigning cable for ISDN lines: we just would not assign to a pair less than 0.60mm in diameter. Meanwhile, ADSL would assign on basically anything; you just had to do a “distance calculation” to see if it would work. No guarantees, just assign, program, & hope the tech does remote-end testing.
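To make the idea concrete, here’s a minimal sketch of what a “distance calculation” amounts to: estimate loop loss from cable length & gauge, then compare against a loss budget. The attenuation figures & the 55 dB budget below are assumed round numbers for illustration, not Telstra’s actual planning values.

```python
# Illustrative "distance calculation" sketch: go/no-go check for ADSL
# on a copper pair. All constants are assumed values for the example,
# not real planning figures.

# Assumed attenuation at ADSL frequencies, by conductor diameter (dB/km):
ATTENUATION_DB_PER_KM = {
    0.40: 15.0,  # thinner pair: higher loss (assumed)
    0.60: 10.0,  # thicker pair: lower loss (assumed)
}

MAX_LOSS_DB = 55.0  # assumed maximum workable loop loss for ADSL1


def adsl_likely_workable(loop_km: float, diameter_mm: float) -> bool:
    """Rough check: estimated loop loss vs an assumed loss budget."""
    loss = ATTENUATION_DB_PER_KM[diameter_mm] * loop_km
    return loss <= MAX_LOSS_DB


# A 4 km loop: over budget on 0.40mm cable, within budget on 0.60mm.
print(adsl_likely_workable(4.0, 0.40))  # False (60 dB estimated loss)
print(adsl_likely_workable(4.0, 0.60))  # True  (40 dB estimated loss)
```

The point is that it’s an estimate, not a measurement — which is exactly why “assign, program, & hope” was the reality.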
I digress. In 2000 the then Howard government set up the Telecommunications Service Inquiry, one of whose many recommendations was to set up the Network Reliability Framework (NRF). Faults in exchange service areas (ESAs) were reported back to the Australian Communications & Media Authority (ACMA). Telstra is obligated, as a condition of its licence, to report on fault conditions nationwide.
While this does not report on data faults, there is a clear indication that Australia’s copper is far from stellar, even at the (relatively) low frequencies of ADSL1/2. The main problem with the NRF is that it only shows fault data for Customer Service Guarantee (CSG) eligible services. Unfortunately, most data services do not fall under the CSG, & no ‘Naked DSL’ services do, so the NRF does not capture this data at all.
So what can we take away from their data? While things have improved since 2000, there are still major problems with some ESAs, with 3rd parties having the most difficult time servicing these areas. In fact, the most recent data (2011) shows iiNet sitting at 52% of new service connections meeting CSG time-frames. This does not bode well for a VDSL2 Australia.
Getting actual information on data faults is becoming increasingly more difficult, which leads me to believe things are getting worse, not better.
Looking at Telstra’s cable remediation (Level 3) for just voice services in the 2011 NRF report, it’s clear that a lot of work is required on the network. There were 480 requests for Level 3 remediation, & Telstra were allowed to ignore 200 of them that year. If we look through previous years’ reports, we see much the same. In fact, one year’s report clearly explains why there are so few Level 3 faults reported post-2006:
While there were less individual services contravening the performance thresholds for Level 3 NRF reporting in 2007–08 by comparison with the previous year, changes in reporting arrangements from 1 October 2006 appear to have contributed to this fall.
How much cable is out there unremediated due to ACMA rulings? How much has actually reached Level 3 (remediation) but, due to a change in reporting mechanisms, is not counted as Level 3?
We just can’t answer these questions, which leads me to believe any FTTN policy is low-balling the cost estimates of remediation. I’d hazard a guess that data faults affect far in excess of 50% of the total cable out there, as most data faults on a straight PSTN service won’t be noticed by the subscriber. Hell, only unworkable services (no dial-tone, intense cross-talk/static/tones, etc. — & only if you have no mobile service or mobile phone) & new phone services are counted under the CSG. Even then, many providers ask users’ permission to exempt the fault from the CSG.
So essentially, the data collected by NRF can be seen to be lacking in many areas. To get data on non-CSG & data services will be difficult at best. The only way to truly know is to test every pair, & this is what Turnbull has stated he will do.
This has to be one of the most amusing statements I’ve seen a politician make. As someone who has had a number of data faults (& I actually have one now, 1/5th of my connection speed disappeared overnight), this is no easy task.
Firstly, the number of data techs around is minimal; we’re talking a handful per Field Service Area (FSA, Telstra has 44 of these). They are paid well, & used sparingly because they’re needed for corporate data services. Then there’s the number of man-hours required to complete this task. Just to test all 8 698 000 lines slated for FTTN, at a minimum of 1 hour per service, we’re talking roughly 8.7 million man-hours just to test the active services.
But wait, testing active services only isn’t going to solve problems further down the track. If we take it as ~4x the pairs required to keep services fault free, that quickly blows out to around 35 million man-hours. Yes, that’s around 800 000 man-hours per FSA for a handful of techs to do. Even if there are 100 data techs per FSA, that will still take around 4 years to complete.
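The arithmetic is easy to sanity-check. The only input not from the article is the working-hours figure — I’m assuming ~2,000 hours per tech per year; with rougher rounding of the totals you land anywhere in the 4–4.5 year range.

```python
# Sanity check of the testing man-hour estimate. All inputs are the
# article's figures except HOURS_PER_TECH_YEAR, which is an assumption.

LINES = 8_698_000            # services slated for FTTN
HOURS_PER_SERVICE = 1        # minimum test time per service
PAIR_MULTIPLIER = 4          # ~4x the pairs needed to keep services fault free
FSAS = 44                    # Telstra Field Service Areas
TECHS_PER_FSA = 100          # optimistic headcount per FSA
HOURS_PER_TECH_YEAR = 2_000  # assumed full-time hours per tech per year

active_hours = LINES * HOURS_PER_SERVICE          # ~8.7 million
all_pairs_hours = active_hours * PAIR_MULTIPLIER  # ~35 million
hours_per_fsa = all_pairs_hours / FSAS            # ~790,000
years = hours_per_fsa / (TECHS_PER_FSA * HOURS_PER_TECH_YEAR)

print(f"{all_pairs_hours:,} man-hours total, ~{years:.1f} years per FSA")
```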
This is one reason Telstra doesn’t audit their network: there just aren’t enough techs or hours to do so. Telstra rely on the Customer Network Improvements Process (CNI) & correct cable records to resolve faults. The catch is, much of the data in NPAMS (Telstra’s cable asset management system) is false. Pairs marked faulty are good, pairs marked good are faulty, pairs don’t terminate where they should, services have each leg split over 2 pairs, &, worst of all, bridge taps; the list goes on.
Even if the data in NPAMS were correct, the CNI process has many outs, & doesn’t count data faults as “unworkable”:
Unworkable fault criteria are:
- an absence of dial tone or ring tone;
- an inability to make or receive calls;
- disruption to communications because of excessive noise level;
- repetition of service cut off; or
- any other condition that makes the service wholly or partly unusable.
Don’t forget, that last catch-all criterion applies ONLY to phone services.
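To show why degraded data services slip through, here’s a minimal sketch encoding the criteria above as a checklist. The symptom names are illustrative labels of my own, not from any Telstra system.

```python
# Minimal sketch of the CSG "unworkable" criteria listed above.
# Symptom names are illustrative, not from any real fault system.

UNWORKABLE_CRITERIA = {
    "no_dial_tone_or_ring_tone",
    "cannot_make_or_receive_calls",
    "excessive_noise",
    "repeated_cutoffs",
    "otherwise_unusable",  # catch-all: applies to phone services only
}


def is_csg_unworkable(symptoms: set) -> bool:
    """True if any reported symptom matches an 'unworkable' criterion."""
    return bool(symptoms & UNWORKABLE_CRITERIA)


# A line syncing at a fifth of its usual speed still carries voice fine:
print(is_csg_unworkable({"slow_sync_speed"}))           # False — not a CSG fault
print(is_csg_unworkable({"no_dial_tone_or_ring_tone"})) # True
```

Every criterion is framed around voice, so a pair that’s ruinous for VDSL2 can pass this test indefinitely.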
One thing to take note of is that elsewhere in the world, service providers are deploying FTTH to circumvent poor cable & continual faults. Just today, Bharat Sanchar Nigam Limited (BSNL) announced they’re going FTTH due to massive problems with cables in Kochi, Kerala, India. In the US, Verizon is deploying FTTH to Fire Island after Hurricane Sandy wiped out the copper network. Every day I receive at least 10 emails reporting similar moves in countries around the world. It’s no wonder that FTTH is predicted to make up 19% of services worldwide by year end, & that new DSL subscriptions have dropped 50% year on year.
With the frequency of destructive storms in Australia, we have to question any reliance on copper to deliver high-speed broadband services, especially given how little we know about the quality of the network when you’re firing 17MHz VDSL2 down the line.
The key to understanding why FTTH is the best alternative isn’t just about download & upload speeds. There’s security, stability, fault tolerance, the ability to monitor services for precise diagnosis of faults, & most of all, the rapid turn-around for moving services. There are no lines to connect, no faults to resolve on a new service, just plug & play!