G.Fast: business rationale remains unclear for copper 2.0
A fundamental problem for operators wishing to roll out FTTH is the troublesome final stretch of laying the fibre, which accounts for the large majority of the overall cost of rollout. For example, Swisscom claims that 80% of the overall cost of an FTTH deployment lies in the section between the plug in the subscriber's home and the manhole in the street, which may be some 150m of drop cabling away from the subscriber's home.
The promise of G.Fast is to enable operators to offer FTTH-like speeds of up to 1Gbps without rolling out fibre over those troublesome final metres. The aim of G.Fast would be to leave a maximum of 200m of copper. As well as shortening the copper loop beyond what today's FTTC deployments achieve, G.Fast promises a number of innovations that would enable operators to increase the maximum speeds they can offer to end customers.
Present-day VDSL2 systems can use a maximum of 30MHz of spectrum (although it is really only systems supporting up to 17MHz that have been deployed in large volumes). G.Fast promises to increase this significantly, to perhaps somewhere between 70MHz and 140MHz. DSL systems work by dividing the data across numerous sub-channels or tones, each carrying up to 15 bits per symbol, using so-called discrete multi-tone (DMT) modulation. G.Fast would aim to move beyond this maximum of 15 bits per tone and therefore provide higher speeds. Further improvements in coding and modulation would also enable significant gains in speed. The overall aim of these advances would be to define a protocol that can support an aggregate upstream and downstream capacity of 1Gbps, although in practice the aggregate in the field would be somewhere above 500Mbps.
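The arithmetic behind these figures can be sketched with a back-of-the-envelope DMT calculation. The tone spacing and symbol rate below are VDSL2 values, used purely for illustration, since the actual G.Fast parameters have yet to be standardised:

```python
# Back-of-the-envelope DMT capacity estimate -- illustrative only.
# Tone spacing (4312.5 Hz) and symbol rate (4000 symbols/s) are VDSL2
# figures; G.Fast parameters were not yet standardised at the time of
# writing. Overheads, guard bands and line conditions are ignored.

def dmt_capacity_bps(bandwidth_hz, bits_per_tone,
                     tone_spacing_hz=4312.5, symbol_rate=4000):
    """Theoretical aggregate rate of a discrete multi-tone system."""
    tones = bandwidth_hz / tone_spacing_hz
    return tones * bits_per_tone * symbol_rate

# A 17MHz-class VDSL2 system (~17.664 MHz) at the 15-bit-per-tone cap:
print(dmt_capacity_bps(17.664e6, 15) / 1e6)  # 245.76 Mbps ceiling

# A hypothetical 140MHz G.Fast-style system with the same 15-bit cap:
print(dmt_capacity_bps(140e6, 15) / 1e6)     # ~1948 Mbps ceiling
```

Even with 140MHz of spectrum these are theoretical ceilings; attenuation at higher frequencies sharply reduces the achievable bits per tone on real lines, which is why the in-the-field aggregate would be closer to 500Mbps.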
Significant amounts of work need to be done before G.Fast can become a commercial reality. A timeline of perhaps four years might be required before the technology becomes commercially available. The technology will need to be standardised for operators to accept it and to reduce the risk of vendor lock-in.
The ITU began a project on G.Fast in February 2011, and Alcatel-Lucent, which started research on the technology in February 2010, says it expects to demonstrate a proof of concept in its labs next year. Another part of the standardisation work on G.Fast will be ensuring the provision of power to the remote device in the field, perhaps 50m from the subscriber's home, by powering it from the residential gateway.
The four-year timeline is logical in the sense that operators have already begun to deploy one technology for improving the speed and reach of VDSL2: pair bonding, which combines two copper pairs. VDSL2 vectoring, which eliminates crosstalk (interference) between different VDSL2 lines, is likely to be commercially deployed towards the end of next year or at the start of 2013.
Operators could further increase speeds and move to what might be called Next Generation Copper 2.0 by deploying phantom mode, which creates a third virtual pair from two physical pairs, and/or then moving to G.Fast. In other words, there would be a potentially clear migration path from bonding, through vectoring, then perhaps through phantom mode, and on to G.Fast. This migration would be logical in the sense that fibre would be brought continually closer to the customer. Certainly it would be an attractive prospect for vendors, who would be able to sell a variety of different technologies to their operator customers rather than seeing them buy a GPON system once and nothing more.
Vendors also believe that the development of G.Fast would not lead to a level of technological complexity so great as to prevent commercial deployment. One issue Alcatel-Lucent does note, however, is that at the moment there would be a big difference in complexity between deploying G.Fast using 70MHz or 140MHz of spectrum and using 280MHz. In any case, many operators will wish to be certain the technology has matured, and to avoid the problems that early adopters of VDSL1 faced.
The problem with G.Fast, however, is simple: is there really any demand for the in-the-field aggregate bandwidth of 500Mbps that it could provide? Vendors themselves admit that it is currently unclear what the commercial applications for such bandwidths will be. Certainly residential broadband access is unlikely to move to a world where large numbers of consumers demand downstream connections of hundreds of megabits per second, even in four years' time.
Many incumbents are moving to next-generation access in response to competitive pressure from cable operators. But it might prove difficult to offer aggregate download and upload speeds of hundreds of megabits per second over cable networks, even using innovations such as eight-channel downstream bonding and analogue refarming. This would limit the need for incumbents to roll out G.Fast in order to compete effectively with cable operators.
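The cable comparison can be illustrated with simple channel-bonding arithmetic. The per-channel payload below is an assumed round figure (a 256-QAM EuroDOCSIS channel carries roughly 50Mbps; a 6MHz US DOCSIS channel nearer 38Mbps):

```python
# Illustrative DOCSIS downstream bonding arithmetic. The ~50 Mbps payload
# per 8 MHz EuroDOCSIS 256-QAM channel is an assumed, rounded figure.
channels = 8                   # eight-channel downstream bonding
mbps_per_channel = 50          # assumed EuroDOCSIS payload per channel
downstream_mbps = channels * mbps_per_channel
print(downstream_mbps)         # 400 Mbps, shared across a service group
```

Around 400Mbps of downstream capacity shared across a whole service group suggests that sustained per-subscriber speeds of hundreds of megabits per second would indeed be difficult for cable to deliver.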
Perhaps new applications will appear in the next few years that require bandwidths that G.Fast can deliver and that vectoring and bonding cannot. If such applications do appear, G.Fast at least gives operators another option for increasing the speeds they offer over their networks in the future.