Some (old) network revision notes

Network Revision – TCP/IP etc.

This module is (mostly) taken by two types of people – those who have done a networking course and those who will soon. (There is also sometimes a minority who haven’t and won’t – don’t worry if this is you.) Those of you familiar with networking to some extent will be aware of the concept of a layered model – an architecture in which layers of software and hardware each undertake different aspects of the communications task. This layered approach is typical of the hierarchical view we tend to take of software (and hardware) systems – largely in order to control complexity. It is much easier to build systems by concentrating on one aspect of a process and devolving others to separate software layers (by calling a function, or invoking a method). In a layered network model, software (or hardware) in each layer believes it is communicating with other software or hardware at the same level of abstraction at some remote location. In practice, except for the bottom-most layer, it is usually communicating with the software (or hardware) in the layer immediately below it on the same local hardware on which it itself is located.

2.1. Introduction

Although the most developed layered model is the seven-layer OSI reference model, the most widely-used is the simpler TCP/IP – Transmission Control Protocol/Internet Protocol. TCP/IP has become the dominant protocol largely by being the first. It is not without its problems – for example, the limited number of available IP addresses in the most widely-used version (4) at the moment – though this is addressed in the new IP version 6, which is currently rolling out. (Version 5 was not adopted – it was designed to conform to the OSI model.)

2.1.1. TCP/IP

We’re not going to go into TCP/IP (or other protocols) in too much detail in the notes: though you should do some background reading in this area as it is obviously important.

TCP/IP consists (arguably) of four layers:

· Application. This is the top layer, and is where ‘user-level’ protocols like Telnet, FTP, POP and HTTP are run.

· Transport. This layer implements reliable ‘end-to-end’ (i.e. source machine to target machine) communication, without worrying about how the data is moved (e.g. what route it takes, over what form of communications medium, and using what lower form of communication protocol). This is the ‘TCP’ part, which is concerned with things like error checking, and requesting retransmission of faulty data (to ensure that the application layer above is guaranteed error-free data).

· Network. This layer is responsible for routing data from the source node to the destination node. It does this by checking the unique four-byte IP address (becoming 16 bytes in IPv6) of the destination, and using stored routing information to decide which network node to forward the data to, in an attempt to ensure that data arrives at its destination. This is the ‘IP’ part. In principle, it is possible to use TCP without IP, or vice versa. For example, the Transport layer could quite happily use TCP in conjunction with a Network layer that used something completely different. In practice though, TCP and IP are almost always used together, except in intermediate network hardware (hubs, switches, routers) which completely lacks the higher layers.

· Link. This is the network hardware (routers, cabling, network interface cards etc.) and low-level driver software that converts to and from IP for transmission over whatever physical network we are using. We will not be concerned with this much.
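The layered structure described above can be sketched in a few lines of code. This is purely a toy model – the dictionaries and function names are invented for illustration, and real headers are packed binary structures, not dicts:

```python
# Toy illustration of layered encapsulation: each layer wraps the data
# from the layer above with its own header. Real TCP/IP headers are
# binary structures; dicts are used here purely to show the principle.

def tcp_wrap(payload, src_port, dst_port):
    # Transport layer: add port information (the 'TCP' header).
    return {"header": {"src_port": src_port, "dst_port": dst_port},
            "data": payload}

def ip_wrap(segment, src_ip, dst_ip):
    # Network layer: add addressing information (the 'IP' header).
    # Note it treats the whole TCP segment, header and all, as data.
    return {"header": {"src_ip": src_ip, "dst_ip": dst_ip},
            "data": segment}

# An application-layer message passes down the stack...
message = "GET /index.html HTTP/1.1"
segment = tcp_wrap(message, src_port=50000, dst_port=80)
packet = ip_wrap(segment, src_ip="192.168.0.2", dst_ip="137.44.0.1")

# ...and at the destination each layer strips its own header off again.
assert packet["data"]["data"] == message
```

Each layer sees only its own header plus an opaque payload – which is exactly why TCP could, in principle, sit on top of something other than IP.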

2.2. Addressing

A clearly-important aspect of networking is addressing – that is, mechanisms for locating and identifying sources and destinations. TCP/IP uses three different levels of addressing.

· Physical (link layer) addresses. This is the lowest level of address, and its format is fundamentally dependent on the underlying network hardware. For example, Ethernet uses a 48-bit unique address that is hard-wired into each network interface card (the Media Access Control (MAC) address).

· IP addresses. The next level of addressing is the IP address – commonly mapped using DNS to a more readable domain name. Currently, IP addresses are usually four bytes long, which on the face of it permits over four billion hosts. In practice, as we will see, many of them are unusable and addresses are running out – hence the expansion to 16 bytes in version 6. Commonly, it is assumed that IP addresses belong to hosts (i.e. computers). Strictly, they belong to network interfaces. So if you have a machine (a computer or, increasingly, a dedicated switch or router) that is connected to two networks (e.g. a small LAN and an Internet connection – dialup, cable, ADSL, whatever) it will have two IP addresses. (Initially, IP was intended as a protocol for communicating between networks – hence the name – and not internally within them. In such cases, each host would not have an IP address – only the gateway nodes between networks.)

· Port addresses. The highest level of address is a port, which identifies a specific application in practice, rather than a host (or network interface). Commonly, we are interested in communication between applications, not just machines. Ports are 16-bit numbers, and the combination of a port and an IP address (or a host name) is sometimes (though not strictly correctly) called a socket – a fundamental concept in networking we will study later.

2.2.1. Sockets and Ports

Sockets and ports are often regarded as identical, but strictly a socket is a software construct attached to a port on a specific host – and there can be multiple sockets associated with each port. That is, if you want to use the same port – virtual communication device – for several applications simultaneously, you can attach multiple sockets (one for each application) to the port and communicate quite happily without conflict (though of course, the more we are using at a time, the worse network performance becomes). So sockets are sort of ‘virtual virtual communication devices’.

Each machine that implements TCP/IP has 64K (65,536) ports available, each identified by a 16-bit address. In reality, they all share the same physical hardware connection.
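As a concrete (if minimal) sketch of ports and sockets in action, the following uses Python’s standard socket module to bind a server socket to a port on the loopback interface and echo some data back. Note how accept() returns a *new* socket for the connection – this is how several sockets can share one port. The helper name serve_one is invented for this example:

```python
import socket
import threading

# A server socket bound to one port can yield many connected sockets:
# accept() returns a NEW socket per client, all sharing the same port.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen()
port = server.getsockname()[1]       # the 16-bit port number we were given

def serve_one():
    conn, addr = server.accept()     # conn is a new socket on the same port
    with conn:
        conn.sendall(conn.recv(1024))  # echo the data straight back

t = threading.Thread(target=serve_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()

assert reply == b"hello"
```

Everything here happens over the loopback interface, so it runs without any network connection.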

Ports with numbers 0-1023 are reserved for pre-agreed services. These are often called ‘well known’ ports. Examples include:

· Port 7 – Echo. The echo service simply returns whatever data is sent to it, allowing a machine to check that communication is working (useful for e.g. testing).

· Port 110 – POP. This is the Post Office Protocol – one of several used for email.

· Ports 20 and 21 – FTP. These are used by the File Transfer Protocol.

· Port 80 – HTTP. This is the Hypertext Transfer Protocol. In fact, it is possible to specify other ports for this – however, this has effectively died out.

Within the well-known ports, those with numbers up to 255 are specifically reserved for ‘public’ protocols, and from 256 to 1023 for commercial software (Doom, for example, gets 666 – though Quake has to live with the not-well-known 26000).

Ports with numbers 1024 and above are not reserved, and can be used for anything. In practice, they are also widely used by well-established services. For example, RMI uses 1099 by default (you can change it though).
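The reserved ranges above can be captured in a small helper – port_category is an invented name, and the labels simply follow the scheme described in these notes:

```python
def port_category(port):
    """Classify a 16-bit TCP/UDP port number using the ranges above:
    0-255 well-known (public protocols), 256-1023 well-known
    (commercial software), 1024+ unreserved."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 255:
        return "well-known (public protocols)"
    if port <= 1023:
        return "well-known (commercial)"
    return "unreserved"

print(port_category(80))    # HTTP  -> well-known (public protocols)
print(port_category(666))   # Doom  -> well-known (commercial)
print(port_category(1099))  # RMI's default -> unreserved
```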

2.2.2. Domain Name System

There are many other services and protocols that exist to support TCP/IP based communications (though they do not directly form part of the layered model). We won’t consider most of them in any detail except DNS – Domain Name System. This exists to translate the (reasonably) user-friendly dot-separated word notation we use to identify hosts (e.g. cs-svr1.swan.ac.uk) into IP addresses.

DNS is centered around the resolver, which is a set of C library functions. The most important are gethostbyname() and gethostbyaddr() which (respectively) return an IP address given a hostname and vice-versa.
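Python exposes the resolver through its standard socket module, under the same names as the C functions. A minimal sketch – it resolves localhost, which works from the local hosts file even without a live network connection:

```python
import socket

# Python's socket module wraps the C resolver functions discussed above.
ip = socket.gethostbyname("localhost")        # hostname -> IP address
print(ip)                                     # usually 127.0.0.1

# gethostbyaddr does the reverse, returning (hostname, aliases, addresses).
name, aliases, addresses = socket.gethostbyaddr("127.0.0.1")
print(name)
```

Resolving a real remote host (e.g. cs-svr1.swan.ac.uk) works the same way, but of course needs network access and a reachable DNS server.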

DNS is essentially a hierarchical distributed database of mappings between host names and host IP addresses. UWS, like many large organisations, runs its own DNS server(s). They will try to resolve host names into IP addresses – if they fail, they will pass the name on to the next-higher machine in the DNS hierarchy, to see if it can resolve the name, and so on, until a match is found (or it is established that none exists). In practice, DNS resolution can be either recursive or iterative. With recursive resolution, if a DNS server cannot resolve an address, it passes the request on to another server. With iterative resolution, the DNS server passes the address of the next ‘higher-up’ server back to the requesting client – and it is the client’s responsibility to query it directly.

To speed up DNS access, servers cache request results. In fact, some servers are caching-only, and do not have any authority to resolve addresses themselves. Obviously, this has the potential disadvantage that these results can become stale. To guard against this, cached results are marked as non-authoritative (essentially meaning you can choose not to believe them) and are given a time to live, after which they expire.

In addition to turning host names into IP addresses, we sometimes wish to do the reverse. In order to make this process reasonably efficient, all IP addresses are mapped to a special domain called (for historical reasons) in-addr.arpa. To (for example) find the hostname corresponding to the IP address 137.44.0.1, you look up 1.0.44.137.in-addr.arpa (note the IP address is reversed), which maps to the corresponding hostname.
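Constructing the reverse-lookup name is a one-liner. The function name reverse_dns_name is invented for illustration:

```python
def reverse_dns_name(ip):
    """Build the in-addr.arpa name used for reverse lookups of an
    IPv4 address: reverse the octets and append the special domain."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_dns_name("137.44.0.1"))  # -> 1.0.44.137.in-addr.arpa
```

Reversing the octets means that, like ordinary domain names, the most significant part of the hierarchy comes last – which is what lets the lookup be delegated network by network.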

(As a mildly-interesting aside, for a number of years the UK decided that its domain names should be the ‘other way round’ – e.g. uk.ac.swan.cs-svr1 – from the rest of the world. This led to lots of ingenious software that worked out when domain names needed to be reordered – which worked OK until you tried to communicate with a machine called, say, es.whatever.uk – is this a machine called ‘es’ in the UK, or a machine called ‘uk’ in Spain? A real example, by the way.)

2.2.3. IP Address Structure

IP addresses in version 4 are 32-bit numbers, typically written a.b.c.d, where each of a-d is an 8-bit number, typically represented as a decimal number from 0-255 (the so-called dotted decimal notation). The first part of an IP address is the network address, and the second part is the host address. The dividing point depends on the class of the address. There are five classes – A, B, C, D and E.

· Class A. Class A addresses have an 8-bit network address and a 24-bit host address. This means that class A networks can each have over 16 million hosts – MIT has one, for example. There are few, if any, organisations that can justify using a class A address space (not even MIT), and since 50% of all IP addresses are in class A networks, many are wasted.

· Class B. Class B addresses have a 16-bit network address and a 16-bit host address. Each can have over 65 thousand hosts. 25% of IP addresses are in class B networks, and again many are wasted.

· Class C. Class C addresses have a 24-bit network address and an 8-bit host address. Each can have 254 hosts (not 256, because 0 is the network address and 255 is the broadcast address, neither of which can be assigned to hosts). 12.5% of IP addresses are in class C networks.

· Classes D and E. Classes D and E are reserved for special purposes, and take up 12.5% of the total.

The University has a class B network (network address 137.44). To see how addresses are wasted, consider that the University obviously has too many hosts for a class C network (254), but also obviously is not efficiently using its class B network. When IP was devised, this was not thought to be a serious problem, since the number of potential hosts was small compared with the number of addresses available – they were simply handed out for free. IP version 6 not only expands the address space to 16 bytes, but also gets rid of the wasteful class structure.
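The class of an address can be read off from its first octet alone (0-127 class A, 128-191 B, 192-223 C, 224-239 D, 240-255 E – matching the 50%/25%/12.5%/12.5% split above). A small sketch, with ipv4_class an invented name:

```python
def ipv4_class(address):
    """Return the class (A-E) of a dotted-decimal IPv4 address.
    The class is determined entirely by the first octet:
    0-127 -> A, 128-191 -> B, 192-223 -> C, 224-239 -> D, 240-255 -> E."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(ipv4_class("137.44.0.1"))   # -> B (the University's network)
print(ipv4_class("192.168.0.1"))  # -> C (a typical home LAN address)
```

The ranges follow from the leading bits: class A addresses start with 0 (half the space), B with 10 (a quarter), C with 110 (an eighth), and so on.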

(Incidentally, you are likely to be using an address like 192.168.x.x for your local home network if you have one – which is one of a set of ‘special’ addresses reserved for such purposes that cannot cause problems if they ‘leak out’ onto the internet.)

2.2.4. Dynamic Host Configuration Protocol

You may wonder how IP addresses are actually assigned to computers (or more properly, network interfaces). In practice, many systems are manually configured: the user (or technical support) enters the data directly. However, it is also common for IP addresses (and other network parameters) to be automatically assigned using the Dynamic Host Configuration Protocol, or DHCP. A common example is a router connected to a broadband ISP and serving a small LAN. The router receives the IP address it uses to communicate with the ISP via DHCP. It then in turn uses DHCP to assign IP addresses to the computers on the LAN – only this time it is the one doing the assigning. Typically, these will be in the range 192.168.0.1 to 192.168.0.254. If you have such an arrangement, you might want to check. In such a system, computers can receive different IP addresses each time they start up (or when they explicitly renew their connection), which is obviously inconvenient for some systems (mainly servers). There are usually mechanisms to make sure that such systems always get assigned the same IP address. DHCP is capable of substantially more than this: if you are interested, take a look at the DHCP specification (though be warned it’s not light reading).

2.3. The IP Protocol

IP is a datagram based host-to-host protocol. It transfers data from machine to machine (not application to application), does not guarantee data arrives, and does not guarantee data order. It is concerned with issues like routing (generally based on dynamic routing tables), and the network topology, or structure, is visible.

IP breaks data into packets and adds a header to the front of each, varying in size from 20 to 60 bytes. The (compulsory) 20 bytes contain information like source and target IP addresses, IP version, header and data length, time to live (a count of the number of ‘steps’ a packet has taken – when it reaches zero, the packet is deleted to prevent lost packets circulating ‘forever’), a packet ID, and a checksum (for error control). There is also a mechanism to control fragmentation. Different networks have different maximum transfer units (MTUs) – the maximum packet size that a network can handle. For example, the maximum packet size for Ethernet is 1500 bytes, but for PPP it is only 296 bytes. Therefore, when a packet moves from Ethernet to PPP it may need to be fragmented and later reassembled. When packets are fragmented, the header is copied – including the ID – but a fragmentation counter (normally zero) is incremented for each fragment of the original packet.
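As a rough illustration of fragmentation, the following estimates how many fragments a packet needs when crossing a link with a smaller MTU. This is a deliberate simplification (real IP fragment payloads must also be multiples of 8 bytes, except for the last fragment), and fragment_count is an invented name:

```python
import math

def fragment_count(data_len, mtu, header_len=20):
    """Estimate the number of fragments needed to carry data_len bytes
    of payload across a link with the given MTU, assuming a fixed
    header_len-byte IP header on every fragment. Simplified: real
    fragment payloads must be multiples of 8 bytes (except the last)."""
    payload_per_packet = mtu - header_len
    return math.ceil(data_len / payload_per_packet)

# A 1480-byte Ethernet payload crossing a PPP link with a 296-byte MTU:
print(fragment_count(1480, 296))  # -> 6
```

Note the overhead: six fragments means six 20-byte headers where one sufficed before, which is part of why fragmentation is best avoided.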

The optional 40 bytes of header contain various information – including options to control routing (e.g. recording the route a packet takes, or specifying a particular route) and timestamping (combined with route recording, this can help determine the time taken for packets to move through the network).

2.4. The TCP Protocol

TCP uses IP (actually, as we have said, it doesn’t have to use IP, but it pretty-much always does) to implement a reliable virtual circuit connection between ports. That is, data is guaranteed to arrive, and to arrive in order. TCP also puts its own header information onto data packets (and when used in conjunction with IP, the TCP headers will be treated as part of the data by IP – see Fig. 1).

Fig. 1. TCP and IP headers

TCP uses a sliding window protocol (which you will be familiar with if you have studied networking). When trying to implement reliable, in-order data communication using sliding windows, there are basically five things that can go wrong.

· Corrupt data. This is handled by including a checksum in the TCP header. Corrupt data is not acknowledged, forcing it to be ultimately resent when the sender does not receive an acknowledgement after a given period of time (that is, it times out).

· Lost Data. This is handled by numbering segments (actually, in TCP, the individual bytes are numbered). If a data segment is lost, it is not acknowledged, forcing it to be resent.

· Duplicate data. Also handled by segment numbers – duplicate data is simply ignored.

· Out-of-order data. This is a natural consequence of using the connectionless IP. Different data segments may take different routes and consequently may arrive at the destination out of order. The destination host does not acknowledge an out-of-order segment until all preceding segments have arrived. The wait may result in the sender timing out, which will cause the data to be resent, resulting in duplicate data (see above).

· Lost acknowledgement. It is not only data that can be lost or corrupted. If an acknowledgement is lost or corrupted, the sender will time out and resend the data. This will again result in duplicate data, which will be discarded.

Like IP headers, TCP headers are between 20 and 60 bytes in length. The compulsory 20 bytes contain (among other things): the source and destination port numbers; a checksum; a sequence number (the number of the first byte in the segment); an acknowledgement number (allowing acknowledgements of data received by the sender from the receiver to be piggybacked, since TCP communication is full-duplex); and other assorted control information. The optional header bytes contain information controlling the sliding window size, the data segment size, and timestamping.
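The checksum used in both IP and TCP headers is the 16-bit one’s-complement Internet checksum (RFC 1071). A sketch of the algorithm:

```python
def internet_checksum(data: bytes) -> int:
    """The 16-bit one's-complement checksum used in IP and TCP headers
    (RFC 1071): sum the data as 16-bit big-endian words, fold any carry
    bits back into the low 16 bits, and complement the result."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:           # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A receiver checksums the received data *including* the transmitted
# checksum field; for an intact message the result is 0.
msg = b"\x45\x00\x00\x28"
csum = internet_checksum(msg)
assert internet_checksum(msg + csum.to_bytes(2, "big")) == 0
```

The one’s-complement arithmetic is what makes this final "sums to zero" verification trick work, and it is cheap enough to compute on every packet.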

2.4.1. The Problem with TCP

TCP was originally intended for dealing with intrinsically unreliable networks, and for those it works very well. Now, it is almost universally used even for networks that are fundamentally very reliable – for example, Ethernet LANs: a completely unscientific check of the error rate at the machine in my office suggests there are many millions of correct segments for each erroneous one. While TCP obviously ‘works’ in this case, the computational overhead of dealing with errors that do not occur is quite high. As a rule of thumb, you need 1MHz of processor power for each Mbit of data transmitted (that’s bog-standard Pentium MHz – not, say, PowerPC MHz, or Pentium M MHz). So, Gigabit Ethernet at full speed will use 50% of the processing power of a 2GHz processor (this is obviously a very rough guide). It might be better to adopt a protocol that has less computational overhead in normal circumstances (accepting a significantly longer delay when the – very rare – errors do occur). However, this is unlikely to happen, and using a standard protocol in almost all circumstances has clear advantages.

2.5. Related Protocols

In addition to TCP, IP, DHCP and DNS, there are a number of other protocols of interest. Here is a by no means exhaustive list.

· UDP. The User Datagram Protocol is a simple datagram-based alternative to TCP. Unlike TCP, it does not guarantee that data arrives in order – or at all. This might not seem useful, but if you have studied some networking, you will know that datagram protocols are often chosen by sophisticated users. (For example, if you do not trust the security and safeguards in TCP, and intend to implement your own, there is not much point in accepting the overhead of TCP.)

· ARP. The Address Resolution Protocol converts IP addresses into physical addresses. This enables the hardware associated with an IP address to be identified (which is necessary for data to be delivered).

· RARP. The Reverse Address Resolution Protocol (obviously) does the opposite of ARP. This is needed by, for instance, diskless nodes (thin client systems) at startup which cannot store their own IP addresses, but can access the hardwired physical address.

· SNMP. The Simple Network Management Protocol is a UDP-based network monitoring and maintenance protocol.

· FTP and TFTP. The File Transfer Protocol is used to transfer files from host to host. FTP uses two ports – 20 for data transfer and 21 for managing transfers. FTP is a quite complex protocol, and imposes a significant overhead. Sometimes this is excessive, and the Trivial File Transfer Protocol is a simpler alternative. It is more usual to use the Secure File Transfer Protocol in this more security-aware age.

2.6. Example: What Happens When You Access a Web Page?

The browser analyses the URL – Uniform Resource Locator – or Universal Resource Locator. Typically, it will be of the form

http://hostname:port/filepath

The port is nearly always omitted and assumed to be 80, and other protocols, or schemes, are possible – like ‘ftp’, ‘file’, ‘smb’ (the Windows networking protocol), or ‘afp’ (the Apple Filing Protocol) – instead of ‘http’.

Assuming it is HTTP, the host name is then resolved by using DNS, and a connection is set up to the host’s IP address using the Hypertext Transfer Protocol on, usually, port 80. A request for data is then sent over this connection. This minimally might just be something like

GET /filepath.html HTTP/1.1

followed by a blank line, which says ‘get me filepath.html and I want to communicate using protocol HTTP version 1.1’. (The blank line – or more strictly, two consecutive carriage-return/line-feed pairs – is required to indicate to the server that the request is finished.) Typically, the request will be more complex, and will indicate things like what file types the browser can handle (by specifying their MIME types – Multipurpose Internet Mail Extension or Multimedia, depending on who you believe), and information such as what browser is being used.

The typical response from the server, assuming it found filepath.html, is to send back something like

HTTP/1.1 200 OK

followed by assorted other stuff (e.g. the date, the type of server, the MIME type and length of the content), followed by another blank line, and then the requested HTML file. If the HTML file refers to other files, these are returned in the same format. It is up to your browser (or whatever other software is making the request – there is no requirement that it is a web browser – e.g. a search engine’s web crawler) to assemble multiple files into a coherent ‘web page’. The ‘200’ is the status code, and means ‘OK’. If filepath.html wasn’t found, you would get something like

HTTP/1.1 404 Not Found

instead, which you’ve probably seen. Codes beginning with 2 mean success; 3 means redirection; 4 means addressing error; and 5 means server error.
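The status-code categories can be captured in a couple of helper functions (status_class and parse_status_line are invented names for this sketch):

```python
def status_class(code):
    """Map an HTTP status code to its broad category, as described above."""
    return {2: "success",
            3: "redirection",
            4: "client (addressing) error",
            5: "server error"}.get(code // 100, "other")

def parse_status_line(line):
    """Split a status line such as 'HTTP/1.1 200 OK' into its parts."""
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

version, code, reason = parse_status_line("HTTP/1.1 404 Not Found")
print(status_class(code))  # -> client (addressing) error
```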

This entire communication process is implemented using TCP/IP: the GET request from the browser is passed to the Transport (TCP) layer, where it is packaged by prepending a header (containing, among other things, the source and destination port numbers), and may be subdivided into smaller data segments. This is then in turn passed to the Network (IP) layer, which adds its own header (containing the source and destination IP addresses), may further subdivide the data, and attempts to move the request closer to its destination, node by node. The actual process of communication is managed by the low-level software and hardware of the Link layer (which may add both headers and trailers of its own). When the data reaches its destination, it progresses up the hierarchy of layers; headers are removed at each stage and the data reassembled (if it has been subdivided) until the original text arrives at the server. Any response is treated in exactly the same way.

As an experiment, try connecting to a web server with telnet:

telnet www.cs.swan.ac.uk 80

and manually retrieving a file by typing:

GET /~csneal/teaching/teaching.php HTTP/1.0

You should see the ‘raw’ HTML page (together with the page headers) – assuming you picked a file that exists.
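If you would rather script the experiment than type at telnet, the same exchange can be done with Python sockets. The sketch below plays both sides over the loopback interface – with an invented, absolutely minimal ‘server’ (serve_once, RESPONSE) – so it runs without any network access; real servers are, of course, considerably more complicated:

```python
import socket
import threading

# A toy HTTP exchange over loopback: a thread plays the part of a
# minimal server that always sends the same canned response.
RESPONSE = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/html\r\n"
            b"Content-Length: 20\r\n"
            b"\r\n"
            b"<html>hello</html>\r\n")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
server.listen()
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    with conn:
        conn.recv(4096)              # read the GET request (and ignore it)
        conn.sendall(RESPONSE)       # reply, then close the connection

t = threading.Thread(target=serve_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /index.html HTTP/1.0\r\n\r\n")
reply = client.makefile("rb").read()  # read until the server closes
client.close()
t.join()
server.close()

status_line = reply.split(b"\r\n", 1)[0]
print(status_line.decode())  # -> HTTP/1.1 200 OK
```

Point the client at a real host and port 80 instead, and you get exactly what telnet showed you: the status line, the headers, a blank line, and the raw HTML.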

In its most basic form then, a web server is very simple. In practice, there are of course assorted complexities (which is why 400-page books get published on how to set up and manage them). These complexities centre around: handling a wide range of data formats; efficiency; speed (not quite the same as efficiency); and security.

Obviously, any system in which you can only request static files is extremely limited in the level of interaction that is possible. However, there is no reason why the files returned have to be static: they could be generated on the fly, provided the server sends back data in a form that browsers can understand. This is essentially the basis of the companion module to this one, CS-348.

2.7. URLs and URIs…

As a slightly interesting aside, you may have wondered what the difference is between URLs (Uniform Resource Locators) and URIs (Uniform Resource Identifiers) – not to mention URNs (Uniform Resource Names). Strictly, a URI is the general term: a URL is a URI that identifies a resource by its location (a scheme such as http, plus a host and path), while a URN names a resource without saying where to find it. In practice, almost all the URIs you meet are URLs, so in many cases the two terms are used interchangeably.
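Python’s standard urllib.parse will pull a URL apart into the components discussed here – a quick illustration (the URL itself is just a made-up example):

```python
from urllib.parse import urlsplit

# Decompose a URL into scheme, host, port, path and fragment.
parts = urlsplit("http://www.cs.swan.ac.uk:80/~csneal/index.html#top")
print(parts.scheme)    # -> http
print(parts.hostname)  # -> www.cs.swan.ac.uk
print(parts.port)      # -> 80
print(parts.path)      # -> /~csneal/index.html
print(parts.fragment)  # -> top  (the page location, or 'anchor')
```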

