Table of Contents
Cover
About the Author
Foreword
Introduction
How This Book Is Organized
Conventions Used in the Book
Audience
Feedback Is Welcome
The Alcatel-Lucent Service Routing Certification Program Overview
Alcatel-Lucent Scalable IP Networks Exam (4A0-100)
Standard Icons
1 Introduction to Networking
Pre-Assessment
1.1 Before the Internet
1.2 Service Providers and Content Providers
1.3 Modern Internet Service Providers
1.4 Overview of TCP/IP
Chapter Review
Post-Assessment
2 The Alcatel-Lucent 7750 SR and 7450 ESS Components and the Command-Line Interface
Pre-Assessment
2.1 The Alcatel-Lucent 7750 Service Router Family
2.2 Command-Line Interface
Practice Lab: Alcatel-Lucent 7750/7450 Hardware and the CLI
Chapter Review
Post-Assessment
3 Data Link Overview
Pre-Assessment
3.1 OSI Layer 2 Overview
3.2 Layer 2 Protocols: PPP, ATM, and Time Division Multiplexing
3.3 Ethernet Overview
Practice Lab: Configuring IOMs, MDAs, and Ports
Chapter Review
Post-Assessment
4 Switched Networks, Spanning Tree, and VLANs
Pre-Assessment
4.1 Ethernet Devices: Hubs and Switches
4.2 Ethernet Switching Operations
4.3 Ethernet Link Redundancy: LAG
4.4 Ethernet Path Redundancy: STP
4.5 Virtual LANs
Chapter Review
Post-Assessment
5 IP Addressing
Pre-Assessment
5.1 Interconnecting Networks
5.2 The IP Header
5.3 IP Addressing
5.4 IP Subnetting
5.5 CIDR and Route Aggregation—The End of Classful IP Addressing
Practice Lab: IP Addressing and Routing
Chapter Review
Post-Assessment
6 IP Forwarding and Services
Pre-Assessment
6.1 The IP Forwarding Process
6.2 Typical IP Configurations
6.3 Additional IP-Related Services
6.4 IP Filtering
Practice Lab: ICMP and ARP
Chapter Review
Post-Assessment
7 Transport Layer Services—TCP and UDP
Pre-Assessment
7.1 Understanding the Transport Layer
7.2 Transmission Control Protocol (TCP)
7.3 User Datagram Protocol (UDP)
7.4 Port Numbers and Sockets
Chapter Review
Post-Assessment
8 Introduction to IP Routing
Pre-Assessment
8.1 IP Routing Concepts and Purposes
8.2 Static and Default Routes
8.3 Dynamic Routing Protocols
Practice Lab: Introduction to IP Routing
Chapter Review
Post-Assessment
9 OSPF
Pre-Assessment
9.1 Introduction to OSPF
9.2 Router IDs and Their Function
9.3 Link State Updates and Flooding
Practice Lab: Open Shortest Path First (OSPF)
Chapter Review
Post-Assessment
10 BGP
Pre-Assessment
10.1 Interior and Exterior Gateway Protocols
10.2 Autonomous Systems
10.3 History and Features of BGP
10.4 BGP Metrics
10.5 When to Use BGP
10.6 Packet Details and Operation
10.7 BGP Case Studies
Practice Lab: BGP Routing
Chapter Review
Post-Assessment
11 MPLS and VPN Services
Pre-Assessment
11.1 Services Overview
11.2 MPLS in Detail
11.3 VPN Services in Detail
Practice Lab: Services
Chapter Review
Post-Assessment
A: Chapter Assessment Questions and Answers
Assessment Questions
Answers to Assessment Questions
B: Lab Exercises and Solutions
Chapter 2: Alcatel-Lucent 7750/7450 Hardware and the Command-Line Interface
Chapter 3: Configuring IOMs, MDAs, and Ports
Chapter 5: IP Addressing and Routing
Chapter 6: ICMP and ARP
Chapter 8: Introduction to IP Routing
Chapter 9: Open Shortest Path First (OSPF)
Chapter 10: BGP Routing
Chapter 11: Services
Solutions
Glossary
Index
Wiley Publishing, Inc. End-User License Agreement
Service Routing Must Reads from Alcatel-Lucent
List of Tables
2 The Alcatel-Lucent 7750 SR and 7450 ESS Components and the Command-Line Interface
Table 2.1 Differences between the Alcatel-Lucent 7750 SR Series and 7450 ESS Series
Table 2.2 Basic Navigation Commands
Table 2.3 Common Global Commands
Table 2.4 Sample CLI Environment Commands
4 Switched Networks, Spanning Tree, and VLANs
Table 4.1 Default Path Costs
8 Introduction to IP Routing
Table 8.1 Default Preference Table
Table 8.2 Distance Vector versus Link State
List of Illustrations
1 Introduction to Networking
Figure 1.1 The original ARPA network had only four nodes.
Figure 1.2 ISPs exchange data through an IXP.
Figure 1.3 An IXP and a variety of tier providers connect regional offices.
Figure 1.4 A single content provider serves content to multiple locations.
Figure 1.5 Content goes through an IXP and is forwarded to various ISP POPs.
Figure 1.6 Regional Internet Registry agents allocate IP addresses.
Figure 1.7 The Internet Protocol suite is constructed around four layers.
Figure 1.8 Example applications for the TCP/IP layers.
Figure 1.9 A WWW application creates a message that includes the sender and recipient information in the message header and the contents of the message in the message body.
Figure 1.10 The transport layer specifies the TCP source and destination port numbers that will be used by the upper layer application.
Figure 1.11 The network layer adds source and destination IP addresses so that the packet can be forwarded through the network.
Figure 1.12 The data link layer adds source and destination MAC addresses for forwarding on the local network segments.
Figure 1.13 Data from applications is sent to the TCP/IP protocol stack where all the appropriate headers are added and the packet is sent on to the network for forwarding to its destination. As the packet travels through the network, the Layer 2 information is changed at each router, but the network, transport, and application information remain unchanged.
Figure 1.14 The OSI reference model defines seven distinct layers.
Figure 1.15 The TCP/IP layers do not map exactly to the OSI layers; multiple OSI layers are performed by a single TCP/IP layer.
2 The Alcatel-Lucent 7750 SR and 7450 ESS Components and the Command-Line Interface
Figure 2.1 The Alcatel-Lucent 7750 SR-12. Two slots are dedicated for SF/CPM control cards; the other 10 slots are available for I/O Modules that provide network interfaces.
Figure 2.2 The Alcatel-Lucent 7750 SR-7. Two slots are dedicated for SF/CPM control cards; the other five slots are available for I/O Modules that provide network interfaces.
Figure 2.3 The Alcatel-Lucent 7750 SR-1. The SF/CPM and one IOM base board are integrated into a fixed form chassis. The IOM base board can accommodate two Media Dependent Adapters (MDAs) for physical interfaces.
Figure 2.4 The Alcatel-Lucent 7450 ESS-12. Two slots are dedicated for SF/CPM control cards; the other 10 slots are available for I/O Modules that provide network interfaces.
Figure 2.5 The Alcatel-Lucent 7450 ESS-7 and ESS-6. Two slots are dedicated for SF/CPM control cards for both models. The other slots are available for I/O Modules that provide network interfaces.
Figure 2.6 The control plane and data plane functions use the same MDAs, but control packets are processed by the SF/CPM modules. Data packets are switched from the ingress IOM to the egress IOM without any handling by the control processor module (CPM).
Figure 2.7 SF/CPM cards plug into the chassis to provide intelligent data processing and forwarding.
Figure 2.8 Small Form Factor Pluggable (SFP) transceivers are small optical modules that plug into MDAs. MDAs themselves are add-on modules to IOM cards that provide interfaces for the Alcatel-Lucent 7750 SR and 7450 ESS.
Figure 2.9 A packet ingresses the Alcatel-Lucent 7750 SR/7450 ESS through the MDA from an attached network. It is forwarded through the Flexible Fast Path complex to the switch fabric or the control plane.
Figure 2.10 A packet egressing the Alcatel-Lucent 7750 SR/7450 ESS is sent from the switch fabric to the Flexible Fast Path complex for processing. The data is then framed by the MDA and sent out to the network.
Figure 2.11 The layout of the system files in the cf3 card directory structure.
Figure 2.12 The initialization process for the Alcatel-Lucent 7750 SR and 7450 ESS series.
Figure 2.13 The CLI prompt provides key information such as the host name and current context.
Figure 2.14 This figure shows the CPM serial console port and the CPM out-of-band Ethernet management port.
Figure 2.15 The relationship between log event sources, the log ID filter, the log IDs, and the log ID destinations. Note that the log ID filter policy is optional but recommended. Note also that the log ID destination options for the console and syslog are not shown.
3 Data Link Overview
Figure 3.1 Hosts on the Ethernet network communicate with each other using the Ethernet protocol, and hosts on the ATM network communicate using the ATM protocol. A router is required for hosts using different Layer 2 protocols to communicate with each other.
Figure 3.2 A PC using a modem to connect to the Internet or any other dial-up network would use the PPP protocol.
Figure 3.3 The PPP frame header. The address and control fields are not used by PPP and are always set to default values.
Figure 3.4 ATM is an example of a circuit-switching protocol. Multiple logical circuits can exist on the same physical link.
Figure 3.5 The ATM cell format. Note the use of the Virtual Path (VPI) and Virtual Channel (VCI) identifiers to support virtual circuits over a single physical link.
Figure 3.6 The AAL5 header. AAL5 is the adaptation layer that is used by IP services.
Figure 3.7 Using a TDM circuit, each PC gets a fixed timeslot for its traffic.
Figure 3.8 The frame structure for a DS-1 and a European E1 signal.
Figure 3.9 Some examples of shared media technologies where every station receives the same information simultaneously.
Figure 3.10 The two Ethernet frame standards: 802.3 and Ethernet II.
Figure 3.11 A general Ethernet frame and its relevant fields.
Figure 3.12 An Ethernet frame captured using a packet sniffer. Relevant fields are highlighted.
Figure 3.13 The LLC and MAC information are sub-layers of the data link layer of the OSI model.
Figure 3.14 The format of an Ethernet MAC address.
Figure 3.15 Ethernet stations use unique MAC addresses to communicate with each other.
Figure 3.16 The Ethernet destination MAC address ff:ff:ff:ff:ff:ff is the broadcast address, which means “all hosts”.
Figure 3.17 An Ethernet multicast MAC address is used to send a single Ethernet frame to multiple, but not all, hosts.
Figure 3.18 All hosts listen to the Ethernet media to detect a transmission. Once Host A starts transmitting, then other stations will detect it and will not attempt to transmit.
Figure 3.19 Host A and Host B may start to transmit simultaneously, resulting in a collision.
Figure 3.20 Ethernet switches provide a dedicated, full-duplex link to each station and eliminate collisions.
Figure 3.21 Physical specifications for all Ethernet standards.
4 Switched Networks, Spanning Tree, and VLANs
Figure 4.1 Hubs and repeaters simply replicate and amplify the frames sent from each device; they do not inspect L2 headers and do not perform intelligent forwarding. They provide only half-duplex operation.
Figure 4.2 Switches inspect L2 headers and will forward frames only to the specific port that has the destination address in the frame. They provide full-duplex operation.
Figure 4.3 Switches build up their FDB table by recording the source address of frames as they enter each port on the switch.
Figure 4.4 When a switch receives a frame with a destination address that is not in its FDB, it will flood the frame out each port. When the destination host responds, the switch records the reply frame’s source MAC address in its FDB for future use.
Figure 4.5 Hubs provide no separation for collision or broadcast domains, switches provide collision domain separation, and routers provide both collision and broadcast domain separation.
Figure 4.6 Can you identify the collision and broadcast domains?
Figure 4.7 If dynamic cost is configured on the entire bundle, then the group will change its OSPF cost when a link fails, regardless of the port-threshold value as in LAG 1. Dynamic cost can also be configured to modify the cost only when the port threshold is reached as in LAG 2.
Figure 4.8 Frames sent from Host A to Host B are forwarded from Segment 1 to Segment 2 by Switch 2, which is then forwarded from Segment 2 to Segment 1 by Switch 1 and then re-forwarded by Switch 2. This process continues ad infinitum.
Figure 4.9 A further illustration of a loop created in a switched network when there are multiple active paths. The frame from Host A to Host B will continually circulate around the network.
Figure 4.10 STP will block the ports between Switches C and E, ensuring a loop-free topology in the switched network.
Figure 4.11 The root bridge is selected based on the bridge priority of 16, which is lower than the priority of the other bridges.
Figure 4.12 The port is blocked because the path through that link to the root bridge is higher than the other path to the root bridge.
Figure 4.13 The current version of STP provides for very fast convergence in response to a switch/link failure by keeping track of ports that provide alternative paths to the root bridge. The back-up port is an alternative port on the same bridge, while the alternate port provides an alternative path on a different switch.
Figure 4.14 As broadcasts increase on a flat network, they quickly consume all available network resources.
Figure 4.15 VLANs provide for logical separation of devices on the same physical switch.
Figure 4.16 Only hosts in common VLANs can communicate with each other. Switch 1 will keep a separate FDB for both VLAN 101 and VLAN 102.
Figure 4.17 VLANs can be created across multiple switches. In this case, there is a separate physical interswitch link for each VLAN.
Figure 4.18 In this case, there is a single VLAN trunk port between the switches that carries traffic for all VLANs by tagging the frames with the correct VID on egress to the other switch.
Figure 4.19 VLAN tags are incorporated into the standard Ethernet frame through the addition of a VLAN tag field.
Figure 4.20 Service providers can “stack” VLAN ID information in order to support multiple customers with overlapping VLANs over the same provider backbone.
Figure 4.21 Ethernet frames can support VLAN stacking by adding an additional VLAN tag to the standard frame.
Figure 4.22 This figure illustrates the answer to the question posed by Figure 4.6.
5 IP Addressing
Figure 5.1 There are end-hosts on ATM that need to communicate with hosts on Ethernet over a POS backbone. This situation requires IP routers to transfer the information from one L2 network to another.
Figure 5.2 An IP packet header. This header is for version 4 of IP; IP version 6 has a different header, but it is not yet in widespread use.
Figure 5.3 Routers represent IP addresses internally in binary format, while humans find it more convenient to use the decimal equivalent. Using the decimal version is fine, but there are operations that are performed on IP addresses that are best understood using binary.
Figure 5.4 With class-based addressing, IP addresses have two parts: a network number or prefix and a host number.
Figure 5.5 With class-based addressing, the first 3 bits of the first octet of the address determine how many bits were allocated for the network and how many for the host.
Figure 5.6 An example of five networks using different classes of addressing. Under classful IP addressing, the network portion and the host portion of the address are known based on the first octet of each address, that is, 192, 10, 172.
Figure 5.7 Unicast packets are delivered to a single host, usually along a single path through the network.
Figure 5.8 A broadcast packet sent on a network segment will be sent to every host.
Figure 5.9 Routers and switches can listen for multicast updates and build forwarding tables based on this information. In this way, multicasts can be forwarded to only those network segments and hosts that are members of a multicast group. In the figure, only Hosts 1, 6, and 3 will receive the multicast packets from the source.
Figure 5.10 The use of subnetting allows you to “borrow” bits from the host portion of the classful address and use the data to create subnets of the primary network. Each of the subnets can contain a smaller number of hosts since there are fewer host bits available.
Figure 5.11 A subnet mask is used to determine the host portion of an address. In this figure, there are 25 bits in the subnet mask, indicating that bits 26 through 32 (7 bits) are available for hosts. 192.168.2.0 would normally be a Class C network, but the use of the subnet mask indicates 1 bit (the 25th bit) is borrowed from the host portion to create two additional subnets: 192.168.2.0 and 192.168.2.128.
Figure 5.12 Using the subnet mask of 25 bits allows you to split the 192.168.2.0 network into two subnets that support 7 bits of host addressing instead of one network that supports 8 bits of host addressing. This is the essence of subnetting: It allows you to use portions of the address that would normally be used for host addresses to create additional subnetworks.
Figure 5.13 Using a /27 means 27 bits are used for the subnet mask. This leaves 5 bits for the hosts on each subnet. Notice that it is easy to see what the hosts are when using binary, but it can be confusing trying to determine it in decimal.
Figure 5.14 When planning a subnet design, you need to know how many subnets are required now and plan for a little future growth. The requirement here is for nine subnets, which means that at least 4 bits are needed because using 3 host bits would only yield eight subnets. You also need to know how many hosts will be needed on each subnet.
Figure 5.15 The goal is to subnet IP network 192.168.1.0/24 into at least 6 subnets, and ensure that each subnet has at least 20 host IP addresses.
Figure 5.16 Most IP addresses are associated to a physical interface on a router. However, there is also an internal “system” address that is not associated with any single physical interface.
Figure 5.17 Router A has IP addresses on two physical interfaces and two logical interfaces.
Figure 5.18 There are five networks that need IP address assignments. We need to create subnets from a common IP network address for all these networks.
Figure 5.19 Using 27 bits for the subnet mask, we could create eight subnets that can each support 30 hosts. However, 30 hosts may be too few for some subnets, and too many for others.
Figure 5.20 There are five networks that each require 2,000 hosts.
Figure 5.21 Subnet 3 has been further divided into six smaller subnets by using additional host bits. Notice that the smaller subnets do not need to have the same subnet mask.
Figure 5.22 The requirement is to subnet 138.120.0.0/16 into three subnets and then to further subnet one of those three into three additional subnets.
Figure 5.23 The requirement is to create six subnets from 10.10.10.0/24. Each subnet must support the indicated number of hosts.
Figure 5.24 There are 256 routes in the routing table on Router A. Instead of advertising all 256 to Router B, it is better to summarize the subnets as a single aggregate route of 10.10.0.0/16 to reduce the routing table size and increase routing advertisement stability.
Figure 5.25 Draw a network line on the original network bit boundary (/24 in this case). This becomes the right bit boundary. Then draw a line where all the bits are common between the subnets (/21 in this case); this is the left bit boundary. Examine the subnets created by using the bits in between the lines. If all the subnets created by those bits are part of the range you want to summarize, then you can use the left bit boundary (/21 in this case) as a summary route.
Figure 5.26 What route or routes will be advertised to Router 2?
Figure 5.27 We would like to use common line /25, but there are bit patterns for subnets that are not part of the original six subnets. This means that we cannot aggregate all of these subnets into a single CIDR advertisement.
Figure 5.28 VLSM is used within an organization, whereas CIDR is used for aggregate routes that are advertised on the Internet.
Figure 5.29 The enterprise is given the 100.1.0.0/23 address block by its ISP and then splits the block into multiple subnets. The ISP does not know or need to know how VLSM is used inside the enterprise.
Figure 5.30 Here the ISP is providing multiple IP address blocks to the customer directly. In this case, ISP 1 would advertise a CIDR block to ISP 2.
Figure 5.31 ISP 1 will advertise a CIDR route for the 100.1 blocks and ISP 2 will advertise a CIDR route for the 101.1 blocks.
Figure 5.32 A simplified diagram shows how an ISP connects to multiple sites of two different enterprises.
Figure 5.33 The Lab exercise uses this topology to demonstrate a simple IP addressing scheme.
6 IP Forwarding and Services
Figure 6.1 A layered view of the IP forwarding process. At the physical layer, there are simply electrical signals translated as bits. Layer 2 constructs these bits into an L2 frame, checks the frame length, and performs an FCS. The contents of the frame are read and sent to the appropriate upper layer, usually IP. At the IP layer, the TTL and other IP header fields are checked, the forwarding table is consulted, and the IP packet is forwarded to the appropriate router egress interface.
Figure 6.2 A traditional home network setup. A PC and a laptop connect to a Layer 2 switch, which is, in turn, connected to an ISP router. The customer owns the PC, laptop, and L2 switch, while the provider owns and provides the router.
Figure 6.3 A modern home network that supports advanced user services such as IP telephony, Video on Demand (VoD), and interactive gaming over a high-speed Digital Subscriber Link (DSL) or cable Internet.
Figure 6.4 NAT uses a pool of available public IP addresses that are mapped to private IP addresses when computers need to access the Internet. The IP address mapping is one-to-one, and there are no changes to upper-layer protocols in the packets.
Figure 6.5 PAT uses a single public IP address that is mapped to many private IP addresses when computers need to access the Internet. PAT maintains a list of upper-layer ports and will alter the source port numbers if there is a duplication.
Figure 6.6 All of the home network devices have their own private IP addresses in the 192.168.10.0/24 range. The router will use a PAT table to keep track of each address and port translation that occurs.
Figure 6.7 The router needs a way to determine what public IP address it needs to use. DHCP is the protocol that allows the router to get a dynamically assigned IP address.
Figure 6.8 The home router broadcasts a DISCOVER request, which is then answered by the provider DHCP server with an OFFER of an IP address. The home router then broadcasts a REQUEST with the IP address of the server that provided it with the offer. The provider DHCP server sends an ACK message to the home router, indicating that it can start using the IP address originally sent in the OFFER.
Figure 6.9 Host A can determine if Host B is on the network and processing IP datagrams by sending an echo request command to Host B’s IP address. The echo request is routed through the network to Host B, which then sends an echo reply message back to Host A’s IP address. The ping application is the most common method of checking for host availability using echo request/echo reply.
Figure 6.10 Host A is attempting to send packets to Host B, but Host B is no longer available. Without an ICMP message, it will be up to Host A’s upper-layer protocols to time out the connection. As an alternative, if the network link to Host B is down, a router can send an ICMP destination unreachable message to Host A’s IP process so that it can inform the upper-layer protocols.
Figure 6.11 Host 1 issues a broadcast ARP request asking for Host 2’s MAC address. Host 2 sees the request and sends a unicast ARP reply back to Host 1 with its MAC address.
Figure 6.12 In order to reduce the number of ARP requests, IP hosts maintain an ARP cache that contains the answer to previous ARP requests. The entries in the ARP cache are saved for a limited amount of time and then discarded.
Figure 6.13 Host 1 wants to send an IP packet to Host 7, but Host 7 is not on its local network. Host 1 determines, based on its IP subnet mask, that Host 7 is on a distant network and issues an ARP request for its default gateway. The router responds with its MAC address, and then Host 1 sends the ICMP echo request to the router for forwarding.
Figure 6.14 Incoming packets are compared against each filter entry in the filter policy. If there is a match, then the filter entry action is applied. If the entry is not a match, then each successive entry is checked until a match is found or the router reaches the end of the entries. If the end of the list is reached without a match, the default action is taken, or, if no default action is specified, the packet is dropped.
Figure 6.15 The process used to create a filter policy. The filter must be created with its associated scope, description, and default action. Then, individual filter entries need to be created to specify what criteria the filter will examine in the IP packets. Finally, the policy must be applied to an interface or SAP.
7 Transport Layer Services—TCP and UDP
Figure 7.1 The application data is passed to the TCP layer, where it is divided into TCP segments and a TCP header is added to each segment. Each TCP segment is then passed to the IP layer, where an IP header is added before transmission onto the underlying data link layer.
Figure 7.2 TCP uses port numbers to uniquely identify upper-layer applications and will pass the data it receives to the correct application on each end of a reliable connection. TCP makes use of the underlying unreliable IP layer to transfer data between Host A and Host B. If there are packet drops or packets arrive out of order, these issues will be handled by TCP. This will be transparent to the application layer above it and the network layer below it.
Figure 7.3 The fields in a TCP header. Note that there are many short fields with three-letter acronyms such as urg, ack, psh, and so on. These are known as TCP flags, and it is very important that you understand what these flags are for.
Figure 7.4 The TCP three-way handshake. Host A sends a SYN request with a starting sequence number, an acknowledgement number set to zero, and the SYN bit set. Host B acknowledges this by sending a response with its own sequence number, an acknowledgement number equal to Host A’s sequence number plus 1, and the SYN and ACK bits set. Host A then responds with a new sequence number equal to its initial sequence number plus 1, an acknowledgement number equal to Host B’s sequence number plus 1, and the ACK bit set. Once these three packets are exchanged, the handshake is complete and communication can begin.
Figure 7.5 TCP sends SEQ number 27000, which is acknowledged by the receiver, and the receiver indicates it expects to receive 27500 next. The sender then sends two segments with numbers 27500 and 28000, but 27500 is lost. The receiver does not acknowledge that it received 28000 because this would indicate to the sender that it received both 27500 and 28000. Instead, the receiver repeats the acknowledgement, indicating that it is still expecting 27500, which is how the sender recognizes that 27500 was lost and that it needs to re-send. Once 27500 is received, the receiver requests segment 29000 since it has already received 28000 and 28500.
Figure 7.6 The sender is transmitting data faster than the host can receive it. The first advertised window size is 5,000 in the first acknowledgment. After additional data is received, the receiver reduces its window size to 2,000 in the next acknowledgment. After still more data is received, the receiver reduces its window size to 0, effectively halting the sender from transmitting any additional data until the receiver's buffer is cleared and it sends a non-zero window size to the sender. At this point, the sender can resume its transmission.
Figure 7.7 A short but complete TCP conversation from start to finish. The three-way handshake starts the TCP conversation, data is transferred with sequence and acknowledgement numbers, and the window size fluctuates slightly. The conversation is closed at the end with FIN and FIN + ACK bits set. Wnd=the advertised TCP Window and LEN=the length of the data in the TCP segment.
Figure 7.8 A typical TCP transfer of data, where the sender is using a congestion window (cwnd) to control the rate at which it is sending. The cwnd is increased until it exceeds the receiver’s advertised window or congestion is detected. When congestion is detected, the sender reduces its cwnd accordingly, and then the process starts over.
Figure 7.9 The UDP header contains only port, length, and checksum fields. It does not have many of the fields that the TCP header has because UDP is a much simpler transport protocol.
Figure 7.10 UDP uses port numbers to multiplex applications just as with TCP. Some of the more common UDP applications are DNS, DHCP, and TFTP.
Figure 7.11 Both TCP and UDP use port numbers to multiplex and de-multiplex multiple applications. Some of the more common applications for both transport protocols are listed in the figure.
Figure 7.12 Multiple instances of the same application can be initiated between the same hosts because TCP and UDP use both source and destination ports to uniquely identify each session. The destination port is the same for each application session, but the source ports for each session are unique.
8 Introduction to IP Routing
Figure 8.1 Routing protocols can be divided into IGPs and EGPs. IGPs can be further divided into distance vector and link state protocols.
Figure 8.2 The basic process of IP forwarding. Each router examines an ingress packet’s destination address and searches for a match in its routing table. If a matching route is found, the packet is forwarded out of the designated interface. Each router will reframe the L2 header but leave the IP and upper layers unchanged.
Figure 8.3 Distance vector routing protocols operate by having each router send information about its directly connected networks to its neighbors. Routers R2 and R3 will send information about their directly connected networks to R1 so that it can build its routing table. R1 would then send its own independent updates of the information in its routing table to any router on Network A. A distance vector routing protocol would never send updates directly from R2 and R3 to routers on Network A.
Figure 8.4 Routers R2 and R3 send routing updates to R1. R1 uses this information to build a protocol-independent RIB that contains all available routes from all sources. R1 then uses all of this information from different routing sources, including its local interfaces, to build its routing table.
Figure 8.5 If there are changes to the routing information on R2 and R3, they will send new updates to R1. R1 will enter this new information into the RIB and then recalculate a new routing table.
Figure 8.6 R1 will scan the RIB and calculate the best route to each network prefix. In this case, it chooses the best route based on the metric. Note that R1’s routing table has only a single entry for each network prefix, while the RIB has multiple entries for many of the prefixes.
Figure 8.7 An IP packet arrives at the ingress to Router R1. R1 looks at the destination IP address and searches for a matching entry in its routing table. R1 finds a match and forwards the packet out the appropriate interface to the next hop indicated in its routing table (R3, in this case). R3 follows the same process as R1, determines that the route is local, and forwards the packet to its local network.
Figure 8.8 The control plane function consists of routing protocol updates that are exchanged only between routers to build the routing table. The data plane function consists of the procedures to forward IP packets using the information contained in routing tables.
Figure 8.9 R1 needs to forward an IP packet to network 172.16.2.0/24. It examines its routing table and determines that R3 is the next hop. Since R3 and R1 share a common Ethernet segment, R1 will issue an ARP request for the MAC address of R3’s IP address or retrieve the MAC address from its ARP table if an entry already exists for R3. Once R1 has the MAC address of R3, it will use this address to create the L2 header for the IP packet and forward the frame using the Ethernet protocol.
Figure 8.10 Router R1 receives updates about Network B prefix 172.16.9.0/24 from both the OSPF and RIP routing protocols. Metrics from different routing protocols are not directly comparable, so R1 must use a priority mechanism to determine which routing update to enter in its routing table. The Alcatel-Lucent 7750 router prefers OSPF over RIP by default.
Figure 8.11 A router maintains an RIB for each routing protocol. If there are identical network prefixes in multiple routing protocol RIBs, the Routing Table Manager uses a protocol hierarchy to determine which routes to place in the routing table.
Figure 8.12 The Routing Table Manager examines matching network prefixes from each routing protocol RIB and chooses the one with the lowest preference value to place into the forwarding information base (FIB/routing table).
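The Routing Table Manager's tie-break can be sketched in a few lines of Python (a hypothetical model; the preference values are illustrative defaults of the kind discussed in the book, not an authoritative SR OS table):

```python
# Illustrative protocol preference values: lowest preference wins.
PREFERENCE = {"local": 0, "static": 5, "ospf": 10, "rip": 100, "bgp": 170}

def select_route(candidates):
    """Of several routes to the same prefix, pick the one whose
    source protocol has the lowest preference value."""
    return min(candidates, key=lambda route: PREFERENCE[route["protocol"]])

# Two RIB entries for the same prefix, as in Figure 8.10.
rib = [
    {"prefix": "172.16.9.0/24", "protocol": "rip",  "next_hop": "R2"},
    {"prefix": "172.16.9.0/24", "protocol": "ospf", "next_hop": "R3"},
]
best = select_route(rib)  # the OSPF route is preferred over RIP
```

Only the winning route is installed in the FIB, which is why the routing table has a single entry per prefix while the RIBs may hold several.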
Figure 8.13 There is only one path to reach Remote site 1 and Remote site 2. A static route can be configured on R1 to send all traffic for 192.168.1.0/24 to CR1, and a static route can be configured on R5 to send all traffic for 172.16.0.0/24 to CR2.
Figure 8.14 Router CR1 can be configured with a default route to send all packets for all destinations to R1 since the only path to all networks from Remote site 1 is through R1. The command to configure the default route on CR1 is shown in Listing 8.2.
Figure 8.15 RTR-B receives a complete routing table from RTR-A. RTR-B uses this information to recalculate its routing table and then sends its complete routing table to RTR-D. Note that RTR-B sends its own routing table to RTR-D; it does not forward RTR-A’s routing table to RTR-D.
Figure 8.16 A router running a distance vector routing protocol receives updates from its neighbors, processes the update and recalculates its own routing table, and then sends updates about its routing table to its other neighbors.
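The receive-recalculate-readvertise cycle in Figure 8.16 is one step of the Bellman-Ford computation that underlies distance vector protocols. A minimal sketch, assuming a table of `prefix -> (cost, next_hop)` entries (hypothetical names, not any protocol's actual data structures):

```python
def process_update(my_table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's advertised routes into my_table, adding the
    cost of the link to that neighbor. Returns True if my_table changed
    (meaning the router should re-advertise to its other neighbors)."""
    changed = False
    for prefix, cost in neighbor_table.items():
        new_cost = cost + link_cost
        current = my_table.get(prefix)
        if current is None or new_cost < current[0]:
            my_table[prefix] = (new_cost, neighbor)
            changed = True
    return changed
```

The `changed` flag captures the flooding behavior the caption describes: updates propagate onward only when they actually improve the local table, so the network eventually quiesces.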
Figure 8.17 RTR-A keeps a database of links and how those links form a path to each network. RTR-A will be able to determine from the updates it receives that it can reach network 2.2.2.0/24 via interface 1/1/1 through RTR-C and then on to RTR-B, but it will know that it can also reach the network through interface 1/1/2 to RTR-B directly, and this will likely be the lower cost and preferred path.
Figure 8.18 When link state protocols are used, each router keeps a database of each of its links and the associated cost for each link. The cost is usually based on a default value that is a function of the speed of the interface.
Figure 8.19 Each router using a link state protocol will maintain a link state database that contains all the information for all links in the network. Note that each router should share a common view of the links in the network because each LSP is flooded to every router. Each router independently runs the Shortest Path First algorithm based on the link state database to calculate its individual route to each destination.
Figure 8.20 When a link state router detects a topology change, it floods the new LSP information to other link state routers. Each router records the new LSP in its link state database and then each independently runs the Shortest Path First algorithm to update its routing table.
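The Shortest Path First algorithm each router runs over its link state database is Dijkstra's algorithm. A self-contained sketch, with the database given as a hypothetical `{router: {neighbor: link_cost}}` map:

```python
import heapq

def spf(lsdb, source):
    """Dijkstra's Shortest Path First: return the lowest total cost
    from source to every reachable router in the link state database."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in lsdb.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

# Topology in the spirit of Figure 8.17: the two-hop path A->C->B
# (5 + 5) ties the direct A->B link (10).
lsdb = {
    "RTR-A": {"RTR-B": 10, "RTR-C": 5},
    "RTR-B": {"RTR-A": 10, "RTR-C": 5},
    "RTR-C": {"RTR-A": 5, "RTR-B": 5},
}
```

Because every router floods its LSPs to every other router, each one holds the same `lsdb` and runs this computation independently, which is why their routing tables are consistent without any router distributing a finished table.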
Figure 8.21 Three different types of static routes are used from CE to PE and from PE to PE in this lab exercise.
9 OSPF
Figure 9.1 Each OSPF router must have a router ID to uniquely identify it. Router R1 has a router ID of 1.1.1.1, and Router R2 has a router ID of 2.2.2.2.
Figure 9.2 In order to exchange link state information with each other, neighboring OSPF routers must build an adjacency. OSPF routers that are connected to each other on point-to-point links always form adjacencies. The adjacency process begins once each router sees its own router ID in the Hello packets from the other router. Once routers become adjacent, they exchange full link state tables with each other.
Figure 9.3 An OSPF Hello packet. The packet must include the router ID, area ID, and Hello timers. The Hello timers include the interval at which Hellos are sent (Hello interval) and the interval that OSPF will wait without receiving a Hello from an adjacent neighbor to declare that neighbor down (dead interval). If authentication is used, then a password will also be included. There is a priority and a DR/BDR field, but these are not used for point-to-point configurations.
Figure 9.4 Routers R1 and R2 have been rebooted and need to form an adjacency. They are initially in the OSPF down state. The routers begin sending OSPF Hello packets and proceed to the OSPF init state. Once the routers see their own router ID in a neighbor’s Hello updates, they move to the two-way state and are ready to begin an exchange of link state database information.
Figure 9.5 After the routers have discovered each other, they move from the two-way state to the exchange state. The routers exchange OSPF router IDs to determine which router is the master and which the slave. The router with the highest router ID is chosen as the master, and the slave sends the master a summary of the networks it has. The master then sends the slave a summary of the networks it is aware of. Once this process is completed, both routers have a summary of the other router’s routing information.
Figure 9.6 After OSPF routers complete the exchange state, they move to the loading state. In the loading state, the routers go through a series of request–reply–acknowledge steps to request information on specific LSAs. Once this process is complete, the adjacency is in the full state—both routers have an identical link state database.
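The adjacency progression in Figures 9.4 through 9.6 can be modeled as a simple state sequence. This is a hypothetical Python sketch that follows the states named in the captions; note that the full OSPF specification also defines an ExStart state, where the master/slave election occurs, between two-way and exchange:

```python
# States as named in the figure captions (the ExStart state defined in
# the OSPF specification is omitted here to follow the text).
OSPF_STATES = ["down", "init", "two-way", "exchange", "loading", "full"]

def next_state(state: str) -> str:
    """Advance one step toward a full adjacency; "full" is terminal."""
    i = OSPF_STATES.index(state)
    return OSPF_STATES[min(i + 1, len(OSPF_STATES) - 1)]
```

Each Hello or database-description event moves the pair one step along this sequence until both routers hold identical link state databases.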
Figure 9.7 A sample network topology. For the examples that follow, only routers R1, R2, R3, R5, and R7 have been enabled for OSPF.
Figure 9.8 R2 will flood information on its LSAs whenever there is a topology change or every 30 minutes. R2 begins the flooding process by sending its LSAs to the multicast address 224.0.0.5 on each OSPF interface. In this case, R2 floods its LSAs out its interface on the 10.10.2.0/30 network, where it will be received and processed by Router R1. Router R1 will acknowledge receipt of the LSA update and then forward the LSA information on to network 10.10.1.0/30, where it will be received and processed by Router R3. R3 will perform the same acknowledgement and forwarding process as R1, and this process continues until every OSPF router has received and acknowledged the LSA update information.
Figure 9.9 The LSA with sequence number 123 is flooded from router R2 throughout the OSPF network. Every router will receive the LSA update and acknowledge its receipt. If a router has a sequence number for the LSA that is higher than the one it receives, it will discard the LSA with sequence number 123 and update its adjacent router with the more current LSA.
Figure 9.10 Router R1 has a configured OSPF metric of 674 on its interface to network 10.10.2.0/30. It is using the default metric on its interface to network 10.10.1.0/30 (metric 100 indicates a 1-Gbps interface). The default metric for the system interface is 0.
Figure 9.11 This configuration of three routers allows us to explore the key aspects of OSPF routing.
10 BGP
Figure 10.1 Network traffic between the enterprise and the content provider must traverse many ISPs. Each ISP runs its own choice of IGP configured to its particular needs. BGP provides a common protocol to accommodate routing between all of the ISPs.
Figure 10.2 Public AS numbers range from 0 to 64,511. Private AS numbers range from 64,512 to 65,535. Public AS numbers are needed to peer with other ASes, while private AS numbers can be used within an AS. AS 200 and AS 400 are peering with AS 300, so all of these AS numbers are public. Inside AS 300, private AS numbers such as 65,002 and 65,003 can be used.
Figure 10.3 Peer connections between routers in different ASes are known as external BGP (eBGP) sessions, while peer connections within the same AS are known as internal BGP (iBGP) sessions. The routers in AS 65,004 and AS 65,001 have an eBGP session with routers in AS 65,002. The peering sessions inside AS 65,002 are iBGP sessions.
Figure 10.4 In a simple BGP configuration, Router C in AS 65,250 would have two equal-cost paths to AS 65,200, one through its eBGP session to Router B and one through its eBGP session to Router A. Because Router A and Router B are in the same AS, the path cost for each eBGP session is the same.
Figure 10.5 BGP peer establishment functions much like other protocols that use TCP. Both BGP peers initiate TCP connections to create the session. Once a session is established, OPEN messages are exchanged, and then one of the TCP sessions is removed. Once the peers have exchanged keep-alive messages, the session is established and they can exchange updates. Afterward, keep-alive messages are exchanged and update messages sent when there is a network change.
Figure 10.6 Different customers with different BGP peering requirements. Customer 1 has only a single connection to the Internet, but Customer 2 has multiple connections to different providers.
Figure 10.7 The customer has multiple BGP connections to different providers. Internally, the customer routers will determine the best route based on OSPF, and BGP will be used to determine the best external AS to forward packets to for Internet routes.
Figure 10.8 ISP interconnections use BGP to allow transit traffic destined for other ASes to flow through an ISP’s AS to the next AS in the path.
Figure 10.9 This configuration of three routers provides an example of configuring both an iBGP and an eBGP peering session.
11 MPLS and VPN Services
Figure 11.1: These are the key components of a Services-Based Network: Customer Edge, Provider Edge, and Provider Core.
Figure 11.2: A PE router provides a Service Access Point (SAP) to a subscriber/customer that connects customers to individual service offerings. The PE connects to a Service Distribution Point (SDP) that provides tunneling services through the provider’s core network.
Figure 11.3: Packets from the source CE arrive at the ingress PE and are encapsulated with a tag that allows them to be forwarded through the provider network along specific paths. The forwarding path is based on this provider-created tag. The tag is removed before the packet is forwarded to the destination CE so that the original packet arrives at the destination CE unchanged.
Figure 11.4: Routers in an MPLS network are characterized as Label Edge Routers (LERs) and Label Switch Routers (LSRs). LERs control the addition and removal of provider labels, while LSRs simply forward the customer data based on the provider labels.
Figure 11.5: Router 3 uses LDP to send information to Router 2 associating label 20 with networks 10.1.1.0/24 and 10.1.20.0/24. Router 2 will place label 20 on all packets it receives that are destined for these Router 3 networks and forward them out Interface 1.
Figure 11.6: Router 2 uses LDP to update Router 1 with label information about the networks on Router 3. Router 2 sends label 10 to Router 1, so Router 1 will tag any packets destined for the networks on Router 3 with label 10 and forward them to Router 2. Router 2 will swap label 10 for label 20 and forward the packets on to Router 3.
Figure 11.7: A complete tunnel is set up between Router 1 and Router 3. Router 1 is the provider ingress and will apply label 10 to packets with a destination of 10.1.1.0/24 or 10.1.2.0/24. The packet is forwarded to Router 2 with label 10, and Router 2 swaps label 10 for label 20 and forwards the packet to Router 3. Router 3 pops the label from the packet and forwards the original packet to its destination CE.
Figure 11.8: PE1 applies an MPLS label to the packet from CE1. The packet is then forwarded through the provider network devices P2 and P3 based entirely on the MPLS label until it arrives at PE2. PE2 removes the label and forwards the packet unchanged to CE2.
Figure 11.9: CE routers have no knowledge of the MPLS labeling process inside the provider network. Labels are assigned at the ingress PE router and removed at the egress PE router. Within the provider cloud, packets are switched from one interface to another on P routers based entirely on the labels applied to packets by the PE routers. Two LSPs are required for bidirectional data transfer.
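The push/swap/pop operations described in Figures 11.3 through 11.9 can be modeled in a few lines. This is a hypothetical Python sketch using the labels from Figures 11.5–11.7; the function names and packet representation are illustrative, not SR OS behavior:

```python
def push(packet, label):
    """Ingress LER: impose a label on the customer packet."""
    return {"label": label, "payload": packet}

def swap(frame, lfib):
    """LSR: replace the incoming label using the label forwarding table;
    the customer payload is never examined."""
    return {"label": lfib[frame["label"]], "payload": frame["payload"]}

def pop(frame):
    """Egress LER: remove the label, restoring the original packet."""
    return frame["payload"]

packet = {"dst": "10.1.1.7", "data": "..."}
frame = push(packet, 10)        # Router 1 (ingress LER) pushes label 10
frame = swap(frame, {10: 20})   # Router 2 (LSR) swaps 10 -> 20
delivered = pop(frame)          # Router 3 (egress LER) pops the label
```

Because `delivered` is the original `packet` object, the CE devices see the packet unchanged, which is exactly the transparency the captions emphasize; a second, independent label-switched path is needed for traffic in the reverse direction.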
Figure 11.10: VPWS provides a simple Layer 2 service that emulates a point-to-point line between two sites. From the CE perspective, the MPLS network is a single wire that connects them.
Figure 11.11: In a VPWS service, the MPLS network uses both an MPLS tag for transporting the data and an inner service label for de-multiplexing the service at the SDP. PE2 pops the MPLS tag and then determines which service the frame belongs to based on the service label before forwarding the packet to CE2.
Figure 11.12: VPLS emulates a private LAN service that can connect multiple customer sites. From the customer’s perspective, the service is a simple Layer 2 switch service. Multiple services can be provided over a single CE by using different SAPs at the provider ingress.
Figure 11.13: In a VPLS, the PE devices maintain an FDB to keep track of MAC addresses, service IDs, and SAPs. The PEs will tag the packets with the appropriate service ID and MPLS label for forwarding through the MPLS network.
Figure 11.14: A VPRN provides a virtual routed network connection for multiple customers. Each customer can have its own private address space, and the PE will maintain a separate VRF for each customer to separate the IP addressing.
Figure 11.15: The Service Provider Core network provides the infrastructure to connect geographically separate customer equipment in a seamless, invisible fashion.
Figure 11.16: Customer equipment connects to the Provider Edge via a SAP. To the customer, the SAP in a VPLS behaves identically to a port on an Ethernet switch.
Appendix B: Lab Exercises and Solutions
Figure 5.32 A simplified diagram shows how an ISP connects to multiple sites of two different enterprises.
Figure 5.33 The Lab exercise uses this topology to demonstrate a simple IP addressing scheme.
Figure 8.21 Three different types of static routes are used from CE to PE and from PE to PE in this lab exercise.
Figure 9.11 This configuration of three routers allows us to explore the key aspects of OSPF routing.
Figure 10.9 This configuration of three routers provides an example of configuring both an iBGP and an eBGP peering session.
Figure 11.15: The Service Provider Core network provides the infrastructure to connect geographically separate customer equipment in a seamless, invisible fashion.
Figure 11.16: Customer equipment connects to the Provider Edge via a SAP. To the customer, the SAP in a VPLS behaves identically to a port on an Ethernet switch.