Network Synchronization in the PSTN

A brief tutorial

By Chris Smith

Table of Contents

The Early PSTN
Why Synchronization?
Synchronization Implementation
Stratum Levels

Prior to 1924, synchronization of signals in the PSTN (Public Switched Telephone Network) was not necessary. In those days, it was the baseband signal that was sent from place to place. This meant no circuitry was necessary for decoding at the receiving end, because the voice signal was sent at its naturally occurring frequencies (< 4 kHz). This lack of carrier multiplexing (encoding multiple groups of baseband data at higher frequencies) meant that only one channel was available per line, and large quantities of wiring were required to meet the growing communication demand. After 1924, the type C carrier line was implemented, which provided three two-way voice channels above the voice frequencies over the same wiring that previously could carry only one channel.[1] With the introduction of multiplexing, however, the carrier frequency at the receiver needed to match that of the transmitter so that the baseband data channels could be recovered from the carrier signal. While the receiver's frequency did not have to be exactly that of the incoming signal, it had to be close enough to recover the data. As newer technologies allowed more signals to share one line, the receiver's frequency had to be held ever closer to that of the incoming signal. But perhaps the most important need for synchronization is the transmission of non-voice traffic, which cannot tolerate a loss of synchronization.

Why Synchronization?

In the early suppressed-carrier systems, the frequency at the demodulator had to match that of the carrier signal so that the baseband signals could be recovered without an excess frequency shift. Tests performed as late as the 1940s showed that a shift of less than 2 Hz was acceptable for the J and K carrier systems (both multiplexed 12 channels).[1]

Figure 1. K Carrier system

When Pulse Code Modulation (PCM) and Time Division Multiplexing (TDM) in the form of the T1 carrier were implemented, a new set of problems emerged. Now the channels of the system were separated in time, with a sample for each channel repeating every 125 µs.

Figure 2. T1 Carrier System

T1 requires a method of phase control so that the receiver can tell the channels of each frame apart. If the receiver falls out of phase with the incoming signal, data samples can be applied to the wrong channel. Frequency control is also required between the sender and receiver so that the receiver can distinguish each bit of the channels. If the two are not at the same frequency, data will be lost through what is known as slip. Figure 3 shows an example of a receiver whose frequency is higher than that of the sender. Notice that the receiver samples too often and ends up with the wrong data.

Figure 3. Receiver frequency higher than the sender

When the receiver frequency is slower than that of the incoming stream, the receiver does not sample often enough and, as a result, data is lost.

Figure 4. Receiver frequency is slower than the sender
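The slip mechanism of Figures 3 and 4 can be sketched numerically. In this toy model the receiver maps each of its sampling instants onto a sender bit index; the rates are exaggerated illustrative values (not T1 rates) so the effect shows within a few bits. A fast receiver reads one bit twice, a slow one skips a bit entirely:

```python
# Toy model of slip: the receiver samples a bit stream with a clock
# that runs faster or slower than the sender's. Each receiver sample
# lands on sender bit number floor(n * sender_hz / receiver_hz).
# The rates below are exaggerated so the slip appears within 12 samples.

def received_indices(sender_hz, receiver_hz, n_samples):
    """Sender bit index hit by each of the receiver's sampling instants."""
    return [int(n * sender_hz / receiver_hz) for n in range(n_samples)]

fast = received_indices(sender_hz=1000, receiver_hz=1100, n_samples=12)
slow = received_indices(sender_hz=1000, receiver_hz=900, n_samples=12)

print(fast)  # bit 0 is read twice: the fast receiver duplicates data
print(slow)  # bit 9 is never read: the slow receiver loses data
```

With real plesiochronous clocks the offsets are tiny, so the same drift takes hours or days to accumulate into one slip rather than a handful of samples.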

Synchronization Implementation


Before coaxial cable was introduced into the PSTN (pre-1935), modulation frequencies were held below 0.5 MHz. Independent tuning-fork-controlled oscillators were used and were able to supply accurate and stable carriers with a frequency accuracy of about 1 part in 10^6.[1] With the introduction of coaxial cable, however, came a corresponding jump in usable frequencies, along with a need for higher frequency accuracy. Better accuracy was accomplished by transmitting a separate synchronizing 64 kHz pilot over the line from the originating office terminal. The pilot was simply a 64 kHz signal transmitted to each office's primary frequency supply (PFS) so that the PFS could have a reference frequency. The PFS's function was then to provide reference signals for the local multiplex carriers and to regenerate the incoming synchronization pilot so that it could be passed on to the next office. The PFS was adjusted as required if its locally generated frequency did not match the incoming 64 kHz pilot frequency.
PFS-1 was the early implementation of the PFS and operated with the L1 coaxial system (5 groups of 12 channels inside supergroups of 10) using a 64 kHz pilot. This system was capable of maintaining an accuracy of better than 7 parts in 10^7.[1] However, as the multiplexing supergroups grew with the introduction of L3 (3 mastergroups of 600 channels), the PFS-1 system was adapted to use a 308 kHz pilot. New high-quality temperature-controlled oscillators were required to maintain accuracy to within a few parts in 10^8.[1]
The errors stated are frequency offsets relative to each other rather than to a system-wide absolute frequency. The system-wide frequency accuracy was only on the order of 1 part in 10^6 [1], but this was acceptable for keeping the pilots within their filter passbands. To maintain absolute frequency, a frequency standard was established in Murray Hill, New Jersey. This standard was periodically adjusted to match the national standard maintained by the US Bureau of Standards and the Navy. Murray Hill provided a 4 kHz signal to the Long Lines building in New York, where the synchronization pilot was generated and transmitted to all other offices. The Long Lines supply was able to maintain agreement with Murray Hill on the order of a few parts in 10^9.[1]

Figure 5. Conceptual view of the network branching of the 64 kHz pilot from the Long Lines Building


The PFS-1 method worked well for a while, but as the system expanded with the growing population, a pilot could be regenerated as many as 20 times. PFS-1 also used mechanical servo motors, which moved a variable capacitor to correct pilot frequency differences at the terminal offices. A faster-responding and more accurate system was necessary. PFS-2 became the successor; it utilized a phase-locked loop, which ensured zero frequency offset between the incoming and regenerated pilot. If the incoming pilot were lost, the PFS-2 would run free at the frequency of the local crystal oscillator. PFS-2 was widely installed in the 1960s and early 1970s.
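The frequency-locking behaviour described above can be illustrated with a minimal first-order loop. The gain value and the convention that a lost pilot is represented by None are assumptions for illustration, not the actual PFS-2 design:

```python
# Minimal first-order frequency-locking loop in the spirit of PFS-2:
# the local oscillator is repeatedly steered toward the incoming
# 64 kHz pilot, and free-runs at its last value when the pilot is lost.
# GAIN and the None-means-lost convention are illustrative assumptions.

GAIN = 0.1  # loop gain (assumed)

def track(local_freq_hz, pilot_samples):
    """Steer the local frequency toward each pilot reading (None = lost)."""
    for pilot in pilot_samples:
        if pilot is None:
            continue                        # pilot lost: free-run
        local_freq_hz += GAIN * (pilot - local_freq_hz)
    return local_freq_hz

# converge on a 64 kHz pilot, then hold frequency when the pilot drops
f = track(63999.0, [64000.0] * 200 + [None] * 10)
print(round(f, 6))  # 64000.0
```

The free-run behaviour is exactly the failure mode discussed next: once the pilot disappears, the local oscillator's own drift is all that remains.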
Soon new problems began to arise as a result of the zero offset. Facility switching or maintenance could temporarily interrupt pilots and introduce transients into the system. These transients were propagated through the PFS carrier supplies, where modulation would place them in the signal paths. The older, slower PFS-1 systems had not been quick enough to respond to these pilot changes and were thus unaffected. While these transients had little effect on speech, they could cause errors in data transmission.
To compound this, PFS-2 was redesigned to operate with the new L4 system, which had a top frequency of 17.5 MHz. The error between two L4 PFS's had to be less than one part in 10^7, and the PFS-2 could not maintain this if the synchronization pilot was lost and the PFS was free-running.[1] The number of consecutive PFS's also continued to grow. Finally, with the new L5 systems, the frequency accuracy requirements could no longer be met by the PFS's, and it was apparent a new system was needed.


The jumbo frequency supply (JFS) was the approach taken for the synchronization of L5 systems around 1974. It became a reference supply for different regions of the country. It consisted of three crystal oscillators which, when free-running, drifted only 1 part in 10^10 per day.[1] This allowed it to run for several weeks without adjustment. The oscillator handled transients from the incoming reference signal by quasi-frequency lock. Quasi-frequency lock (also referred to as plesiochronous operation) means that the compared frequency signals are maintained nearly, rather than exactly, synchronous. Cycles that differed were counted, with no correction made. When the count reached 256, a correction of only 2 parts in 10^10 was made in the proper direction. When very large differences between the local and incoming signals occurred, the regional supply would run free. Regional supplies would normally run within 3 parts in 10^10.[1]
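The counting scheme above can be sketched as follows. The class name, the free-run threshold, and the per-reading comparison interface are assumptions made for illustration; only the count of 256 and the 2-parts-in-10^10 correction come from the text:

```python
# Sketch of quasi-frequency lock: differing cycles are counted with no
# immediate correction; only when the count reaches 256 is a small
# 2-parts-in-10^10 nudge applied in the proper direction. Very large
# offsets cause the supply to run free. Details are illustrative only.

COUNT_LIMIT = 256        # corrections deferred until 256 differing cycles
CORRECTION = 2e-10       # 2 parts in 10^10, from the text
FREE_RUN_LIMIT = 1e-6    # assumed threshold for ignoring the reference

class QuasiFrequencyLock:
    def __init__(self, local_freq_hz):
        self.freq = local_freq_hz
        self.count = 0

    def compare(self, incoming_freq_hz):
        """Process one reading of the incoming reference frequency."""
        offset = (incoming_freq_hz - self.freq) / self.freq
        if abs(offset) > FREE_RUN_LIMIT:
            return "free-running"    # very large difference: run free
        if offset != 0.0:
            self.count += 1          # count the differing cycle, no correction
        if self.count >= COUNT_LIMIT:
            self.count = 0
            # nudge the local frequency toward the reference
            self.freq *= 1.0 + (CORRECTION if offset > 0 else -CORRECTION)
        return "locked"
```

Feeding 256 slightly-high reference readings produces a single upward correction of 2 parts in 10^10, which is why the supply's short-term behaviour is dominated by its own crystal rather than by reference transients.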
The JFS also needed an improved reference signal. A new Bell System reference frequency standard was implemented using three cesium atomic clocks located in Hillsborough, Missouri, which maintained an accuracy of a few parts in 10^12 with respect to the national standard. The reference signal was transmitted to the JFS's using coaxial cable and microwave radio. The JFS's then passed the reference signal on to the traditional PFS's.

Stratum Levels

The North American network is now modeled on four stratum levels. Each level refers to the accuracy of the oscillators in it. A clock in a given stratum is able to phase-lock to any clock in the same or a superior (lower-numbered) stratum. The primary reference source (PRS) is Stratum 1, the highest level with the best accuracy (1 part in 10^11). This accuracy can only be met by a cesium clock, either on site, via Loran-C, or via the Global Positioning System (GPS).[2] The stratum requirements as stated in ANSI T1.101-1987 are listed in Table 1.[3]

Table 1. Stratum level accuracy requirements

Stratum   Accuracy
1         1 x 10^-11
2         1.6 x 10^-8 (0.0025 Hz at 1.544 MHz); drift 1 x 10^-10/day
3         4.6 x 10^-6 (7 Hz at 1.544 MHz); < 255 slips on any
          connecting link during the initial 24 hours
4         32 x 10^-6 (50 Hz at 1.544 MHz)

By ensuring that all Stratum 1 oscillators are extremely accurate and are matched to the same reference (a world standard), different networks that contain separate Stratum 1 sources can be connected without frequency synchronization problems. CCITT Rec. G.811 recommended that a primary reference clock be used for international switching centers. The clock should not have a long-term frequency departure greater than 1 x 10^-11 and should use Coordinated Universal Time (UTC) as its reference. Using this method, the theoretical slip rate on any 64 kbps channel should be no greater than one in 70 days. (Note that this slip rate assumes undisturbed conditions.)
In this way, connections between separate networks do not require transferring timing information. Each network, controlled by its own Stratum 1 clock, should form a synchronous connection with the others because all are timed to the same reference. This is also the method used to connect different carriers' networks (e.g., AT&T and MCI) and is shown in Figure 6.

Figure 6. Stratum level network

Timing information is distributed through this network using the T1 carrier signal. The timing can be derived from the framing rate or bit rate of the signal, since it is known to have a 1.544 Mbps transfer rate. The signal can be framed all 1s or carry live traffic; either way, it is traceable back to the Stratum 1 clock signal.
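The 1.544 Mbps rate the receiver locks to follows directly from the T1 frame structure: 24 channels of 8 bits each, plus one framing bit, repeated every 125 µs (8000 frames per second):

```python
# Deriving the T1 bit rate from its frame structure.
CHANNELS = 24
BITS_PER_CHANNEL = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000   # one frame every 125 us

bits_per_frame = CHANNELS * BITS_PER_CHANNEL + FRAMING_BITS   # 193 bits
bit_rate = bits_per_frame * FRAMES_PER_SECOND
print(bit_rate)  # 1544000 -> 1.544 Mbps
```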


[1] Members of the Technical Staff, AT&T Bell Laboratories, A History of Engineering and Science in the Bell System: Transmission Technology (1925-1975), 1985.

[2] J. W. Pan, "Present and Future of Synchronization in the US Telephone Network", IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, Vol. UFFC-34, No. 6, pp. 629-638, November 1987.

[3] Roger L. Freeman, Reference Manual for Telecommunications Engineering, 2nd ed., John Wiley & Sons, New York, 1993.

K. Okimi and H. Fukinuki, "Master-Slave Synchronization Techniques", IEEE Communications Magazine, May 1981, pp. 12-21.

This page was last updated in April 1998 by Chris Smith