In our PCI DSS Requirement 10: Part 1 post on logging, we touched upon the importance of time synchronisation between servers. Here’s what PCI has to say about it:

10.4 Using time-synchronization technology, synchronize all critical system clocks and times and ensure that the following is implemented for acquiring, distributing, and storing time.

Note: One example of time synchronization technology is Network Time Protocol (NTP).

10.4.1 Critical systems have the correct and consistent time.

10.4.2 Time data is protected.

10.4.3 Time settings are received from industry-accepted time sources.

As mentioned in the PCI document, NTP is a mature, secure and reliable tool for this job. On the face of it, it looks as simple as just installing NTP on each machine in your in-scope environment. However, if you look at the Testing Procedures section of the Requirements document you find some additional pointers as to what is expected:

10.4.1.a Verify that only designated central time servers receive time signals from external sources, and time signals from external sources are based on International Atomic Time or UTC.

10.4.1.b Verify that the designated central time servers peer with each other to keep accurate time, and other internal servers receive time only from the central time servers.

So it is clear from this that you should have specific servers which act as time servers and synchronise with a trusted source on the outside, and all your other internal servers synchronise with your designated time servers.

NTP Organisation

A quick note on the organisation of NTP servers.

NTP servers are organised into different strata. A stratum 1 server is an NTP server directly attached to a timekeeping device (which is stratum 0). Stratum 2 servers send their NTP requests to stratum 1 servers, stratum 3 servers send theirs to stratum 2 servers, and so on.

If you’re wondering what a stratum 0 server is like, here’s one. They are generally not connected to a network directly which is why stratum 1 is the closest you can reach remotely.


From this you’d be forgiven for wanting to set your network’s time to sync directly from a stratum 1 server, but this is not necessary (and not really recommended, in an effort to manage the load on these busy servers). Stratum 2 or 3 servers can be just as accurate, and are certainly as accurate as you’ll need if your primary aim is to have consistent times in system logs for audit purposes (NTP is also required for Kerberos, which will be discussed in another post – stratum 2 or 3 is fine for that also).

NTP allows you to specify a number of different time sources to use as references. The advantage of multiple sources is that if any one of those sources is wrong, it will be rejected by the NTP client as unreliable and not included in the time calculation.

NTP Pool

Of course your NTP servers will need to get their time “from industry-accepted time sources”. There are lists of stratum 2 servers we could use. However, a more fault-tolerant way is to use the NTP pool.

There are a range of domain names provided by ntp.org which use round-robin DNS to allocate the IP address of a random time server in the pool. The geographically closer servers will provide the most accurate time and several subdomains are provided to group the servers into regions. To find the pool closest to you, drill down into the “Global” pool link on this NTP Pool Project page.

We use the following 4 domain names since we are based in the UK:

  • 0.uk.pool.ntp.org
  • 1.uk.pool.ntp.org
  • 2.uk.pool.ntp.org
  • 3.uk.pool.ntp.org

The use of at least 4 different pool domain names is recommended. Each name allocates addresses from a different subset of the pool’s time servers. NTP takes the time supplied by your 4 allocated time servers and uses that information collectively to calculate an accurate time. No one source is considered authoritative, so the more sources the better. (Anyone who knows the internal workings of NTP will know I’m glossing over a huge amount here!)

Firewall Configuration

Remember Requirement 1.3 at this point.

1.3.3 Do not allow any direct connections inbound or outbound for traffic between the Internet and the cardholder data environment.

So you need to place your NTP servers in your DMZ. Your firewall will need to allow UDP communication between the NTP servers and the outside world on port 123. Your NTP clients, including from the cardholder data environment, will need to be able to reach the DMZ on port 123 also.
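To make this concrete, here is a sketch of the forwarding rules in iptables-restore format. The addresses are entirely hypothetical (NTP1 and NTP2 at 192.168.1.10 and 192.168.1.11 in the DMZ, internal clients on 10.0.0.0/8) – adapt them to your own topology and firewall product.

```
*filter
# Hypothetical addresses: DMZ NTP servers .10/.11, internal clients 10.0.0.0/8
# Allow the two DMZ NTP servers out to the Internet pool on UDP 123
-A FORWARD -s 192.168.1.10/32 -p udp --dport 123 -j ACCEPT
-A FORWARD -s 192.168.1.11/32 -p udp --dport 123 -j ACCEPT
# Allow internal clients (including the CDE) to reach the DMZ NTP servers
-A FORWARD -s 10.0.0.0/8 -d 192.168.1.10/32 -p udp --dport 123 -j ACCEPT
-A FORWARD -s 10.0.0.0/8 -d 192.168.1.11/32 -p udp --dport 123 -j ACCEPT
# Allow replies back
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

Note there is no rule letting internal machines talk NTP directly to the Internet, which is exactly what Requirement 1.3.3 is after.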

NTP Server Installation & Configuration

So onto the actual installation. You will want at least 2 servers to act as NTP servers for the rest of your DMZ and cardholder data environment to allow for some fault tolerance (NTP servers do not require expensive hardware – inexpensive ALIX machines for instance would be fine). Each of those servers will use the 4 NTP pool domain names as their time source.

Let’s call our servers NTP1 and NTP2.

Each of these servers will effectively become a stratum 3 server to serve the rest of your network.

The installation is trivial on Debian:

apt-get install ntp

The same package is used for both NTP servers and NTP clients; it’s just the configuration that changes.

Next we edit the /etc/ntp.conf file and ensure 4 server lines read:

server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst

Also, if we look at the following text in the PCI Testing Procedures column:

10.4.1.b Verify that the designated central time servers peer with each other to keep accurate time, and other internal servers receive time only from the central time servers.

This tells us our NTP servers need to include each other as “peers”, which effectively means they will use each other as backup time sources if they lose connectivity to the main NTP pool servers. Peer access is disabled by default, so we need to add a restrict command to tell NTP to allow peer access from within the local network.

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

And so along with the server commands we can now add a peer command referring to the other NTP server. NTP1 would have:

server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst
peer NTP2

and NTP2 would have:

server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst
peer NTP1

Authentication

In the Testing Procedures quoted above there is another piece of the jigsaw we must address:

10.4.1.b Verify that the designated central time servers peer with each other to keep accurate time, and other internal servers receive time only from the central time servers.

This could be taken to refer to having the correct client configuration (which we’ll come to), but we can also ask NTP to perform some authentication so each client can be sure they are speaking to our designated NTP servers, and not a rogue server which we have been directed at via a DNS poisoning attack.

Note that it is the server which authenticates itself to the client – the opposite of how we often think of authentication.

There are a few ways of doing this but a simple and effective way is to use symmetric key authentication.

NTP provides a key generator. On the first NTP server, in our case NTP1, go to /etc and run

ntp-keygen -M

This will create a file linked to by ntpkey_md5_NTP1 which is a list of keys such as:

11 SHA1 79014a661cf2f11f2b2388a9713d48a7bdbb71ca  # SHA1 key
12 SHA1 45e4b6bab483673c9fc789bd7ff405674cbf7c32  # SHA1 key
13 SHA1 bb2357962d83123630876e0c62f3c9cfb6adf977  # SHA1 key
14 SHA1 2a99422f53e620ff8f8f197546bfefbbd238b950  # SHA1 key
15 SHA1 6cd0169c8961e4d2f308703b15b9aee7a730504d  # SHA1 key
16 SHA1 f7ef97bc95f06ba3613bbb8c8085e52cfee5ceea  # SHA1 key
17 SHA1 7cb33e97afe99681d5e83f50647ce1f0149c3993  # SHA1 key
18 SHA1 9de9515a7485b5feffa483bcf08976c53820aaf0  # SHA1 key
19 SHA1 b445b1133492ac69bac4c12ff439ddfc38377e1a  # SHA1 key
20 SHA1 d2f3d9c807dd438cdb48ca8cf9079be793984587  # SHA1 key

This file should be given 600 permissions.
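Locking the file down and confirming the chosen key id exists can be scripted. This is a sketch that works on a throw-away copy built from two of the sample keys above; in production you would point KEYFILE at the file ntp-keygen actually created.

```shell
# Demo on a throw-away copy; in production point KEYFILE at the file
# ntp-keygen created (e.g. /etc/ntpkey_md5_NTP1).
KEYFILE=$(mktemp)
cat > "$KEYFILE" <<'EOF'
14 SHA1 2a99422f53e620ff8f8f197546bfefbbd238b950  # SHA1 key
15 SHA1 6cd0169c8961e4d2f308703b15b9aee7a730504d  # SHA1 key
EOF

chmod 600 "$KEYFILE"       # only root should be able to read the shared secrets
stat -c '%a' "$KEYFILE"    # prints the new mode: 600

# Confirm the key id we plan to trust (15) really exists in the file,
# so the trustedkey line in ntp.conf will match an existing key.
awk '$1 == "15" { found = 1 } END { exit !found }' "$KEYFILE" \
  && echo "key 15 present"
```

The same check is worth repeating on each machine you copy the keys file to, since a typo in the key id is a common cause of the auth failures discussed later.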

Now in the /etc/ntp.conf file we can refer to this file and pick a key the clients will expect when they communicate with us. This can be done by:

enable auth
keys /etc/ntpkey_md5_NTP1
trustedkey 15

And on NTP2, we can copy this keys file to /etc/ntpkey_md5_NTP2 and in its configuration also specify the key:

enable auth
keys /etc/ntpkey_md5_NTP2
trustedkey 15

Of course, now when NTP1 & NTP2 try to peer with each other, we need to supply the correct key in the peer command line.

On NTP1:

peer NTP2 key 15

And on NTP2:

peer NTP1 key 15

Putting all that together, we now have a server config that looks something like the following (with comments removed to save space).

For NTP1:

driftfile /var/lib/ntp/ntp.drift

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst
peer NTP2 key 15

restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

restrict 127.0.0.1
restrict ::1

enable auth
keys /etc/ntpkey_md5_NTP1
trustedkey 15

For NTP2:

driftfile /var/lib/ntp/ntp.drift

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst
peer NTP1 key 15

restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

restrict 127.0.0.1
restrict ::1

enable auth
keys /etc/ntpkey_md5_NTP2
trustedkey 15

Client Configuration

The client configuration is fairly simple:

driftfile /var/lib/ntp/ntp.drift

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

server NTP1 key 15 iburst
server NTP2 key 15 iburst

restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

restrict 127.0.0.1
restrict ::1

keys /etc/ntp.keys
trustedkey 15

And remember to copy the keys file used by the servers to the client and chmod 600 /etc/ntp.keys.

Starting NTP Servers for the First Time

Getting NTP to run for the first time can be a little frustrating if you’ve not done it before. If the clock on the server you are installing NTP on is too far away from the correct time, NTP will not even attempt to rectify the matter. So it is a good idea to make sure your clock has been well synchronised beforehand by running ntpdate.

That’s not the whole story though. If the system clock is running faster or slower than “real” time, NTP will always struggle to keep it accurate and frequently just give up. Fortunately Debian provides adjtimex which will adjust your system clock speed for you.

So a fairly safe procedure to start is the following:

  1. Stop NTP so you know NTP will not be fighting anything else to adjust the time:
/etc/init.d/ntp stop
  2. Install ntpdate and adjtimex (the process of installing adjtimex automatically adjusts the system clock; it can be run subsequently using adjtimex -a if required):
apt-get install adjtimex ntpdate
  3. Run ntpdate against the 4 NTP pool domains:
ntpdate 0.uk.pool.ntp.org 1.uk.pool.ntp.org 2.uk.pool.ntp.org 3.uk.pool.ntp.org
  4. Delete the NTP drift file:
rm /var/lib/ntp/ntp.drift
  5. Uninstall ntpdate (it can conflict with NTP upon system restarts):
apt-get remove ntpdate
  6. Start NTP:
/etc/init.d/ntp start
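As an optional sanity check between running ntpdate and starting NTP, you can confirm the remaining offset is small enough for ntpd to correct (by default ntpd refuses to act on an initial offset beyond roughly 1000 seconds, its “panic threshold”). The helper below is a sketch: check_offset is a made-up name, it parses the “offset N” field from ntpdate -q style output, and the sample line here is illustrative captured output rather than a live query.

```shell
# ntpd will not correct a clock that is too far out (default panic
# threshold is roughly 1000 seconds), so check the offset first.
# check_offset is a made-up helper for illustration.
check_offset() {
  awk '/offset/ {
    split($0, a, "offset ")        # a[2] begins with the offset value
    split(a[2], b, ",")            # b[1] is the offset in seconds
    off = b[1] + 0
    if (off < 0) off = -off
    if (off > 1000) bad = 1
  }
  END {
    if (bad) { print "offset too large - step the clock with ntpdate first"; exit 1 }
    print "offset within ntpd limits"
  }'
}

# Live use would be: ntpdate -q 0.uk.pool.ntp.org | check_offset
check_offset <<'EOF'
server 81.2.117.235, stratum 2, offset 0.004213, delay 0.04512
EOF
```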

What you should (eventually) see upon executing ntpq -c peers is something like this:

[david@ntp1:~]# ntpq -c peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp0.cis.strath 140.203.204.77   2 u  422 1024  377   14.778   -5.160   3.966
+146.185.21.74   73.121.249.250   2 u  776 1024  377    5.215   -3.335   4.538
+82.113.154.206  193.62.22.82     2 u  914 1024  377    3.990   -3.590   3.848
+time.mhd.uk.as4 217.114.59.66    3 u  875 1024  377    8.671   -4.402   4.304
+ntp2.localnet   194.238.48.3     3 u  363 1024  377    0.462   -3.116   5.969

This is looking good. We can see the clustering algorithm has selected a peer for synchronisation “*“, and has others included in the selection set “+“. And looking at the associations everything looks acceptable:

[david@ntp1:~]# ntpq -c as   

ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1  5740  963a   yes   yes  none  sys.peer    sys_peer  3
  2  5738  943a   yes   yes  none candidate   reachable  3
  3  5739  9324   yes   yes  none candidate   reachable  2
  4  5741  9424   yes   yes  none candidate   reachable  2
  5  5742  f324   yes   yes   ok  candidate   reachable  2

If we see a server preceded by a space instead of a selection indicator, we might have a problem:

[david@ntp1:~]# ntpq -c peer  
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+b0ff54d9.bb.sky 81.2.117.235     2 u   57   64    7   17.979   45.717  11.663
+abby.lhr1.as411 142.3.100.2      3 u   54   64    7    8.573   45.868   2.163
*li153-120.membe 129.69.1.153     2 u   51   64    7    9.077   37.774   2.244
-time.shf.uk.as4 85.119.80.233    5 u   53   64    7   12.496   41.059   5.263
 ntp2.localnet   .INIT.          16 u   52   64    0    0.000    0.000   0.000

To help work out what is happening look at the associations:

[david@ntp1:~]# ntpq -c as
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1  5850  943a   yes   yes  none candidate    sys_peer  3
  2  5851  9424   yes   yes  none candidate   reachable  2
  3  5852  963a   yes   yes  none  sys.peer    sys_peer  3
  4  5853  9324   yes   yes  none   outlyer   reachable  2
  5 34243  c01c   yes    no   bad    reject              1

We can then see this is an AUTH problem so you should double check your key files and the key selections in the /etc/ntp.conf file.

If all servers are showing condition reject, it is time to check your system clock again.

One tip for debugging is to stop NTP, install ntpdate again, and run ntpdate -d against the time server which is giving you issues. This can shed a lot of light on why things are failing.

AND REMEMBER, nothing happens quickly with NTP. It looks at data over time and makes a judgement on the collective set of data. So if you’re pulling your hair out trying to get your NTP clients or servers to sync and cannot for the life of you see a config problem… go and make a cup of tea. Check back a bit later, or even the next day. There’s every chance your problem may have resolved itself.
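Once things have settled, the “is a sys.peer selected” check can be scripted for ongoing monitoring. This is a sketch: check_sync is a made-up helper that simply looks for the “*” tally code at the start of a line in ntpq’s peer listing, and the demonstration feeds it the captured output from earlier rather than a live query.

```shell
# Succeeds only when ntpq's peer list contains a selected sys.peer,
# marked with a leading "*" tally code. check_sync is a made-up name.
check_sync() {
  awk '/^\*/ { ok = 1 } END { exit !ok }'
}

# Demonstrated against captured output; live use: ntpq -c peers | check_sync
check_sync <<'EOF' && echo "synchronised" || echo "not synchronised"
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp0.cis.strath 140.203.204.77   2 u  422 1024  377   14.778   -5.160   3.966
+146.185.21.74   73.121.249.250   2 u  776 1024  377    5.215   -3.335   4.538
EOF
```

Wired to a cron job or your monitoring system, the exit status gives you an early warning that a server has stopped synchronising.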

Starting NTP Clients for the First Time

Once the /etc/ntp.conf and /etc/ntp.keys files are in place, you should follow the same process for starting the clients as for starting the servers. The peer list on the clients should look something like this:

[david@client1:~]# ntpq -c peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp1.localnet   194.238.48.3     3 u  594 1024  377    1.207  -18.777  85.800
+ntp2.localnet   176.74.25.243    3 u  288 1024  373    0.940  -92.180  88.362

And the associations will show authentication is OK and we have no rejects.

[david@client1:~]# ntpq -c as
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 48909  f61a   yes   yes   ok   sys.peer    sys_peer  1
  2 48910  f41a   yes   yes   ok  candidate    sys_peer  1

Virtualisation

If you’re trying to get NTP to run on a virtual machine, I’m sorry, but all bets are off. You should use NTP to sync the host server and make the guests sync from the host via whichever method is supported by the virtualisation software.

Now that we are sure all our servers are reporting the correct time, we can happily have a look at PCI DSS Requirement 10 Part 3 – Centralised Logging.