CN
Computer Networks
Protocol
Protocol Layering
When the communication is complex, we may need to divide the task between different layers, in which case we need a protocol at
each layer, or protocol layering
-
Logical Connection
A logical connection between a layer at the source host and the same layer at the destination host, without thinking of the lower layers that support the connection
A protocol defines the rules that both the sender and receiver and all intermediate devices need to follow to be able to communicate effectively
TCP/IP
Internet/Network
The network layer is responsible for creating a connection between the source computer and the destination computer
-
Since there can be several routers from the source to the destination, the routers in the path are responsible for choosing the best route for each packet
The network layer is responsible for host-to-host communication and routing the packet through possible routes
Protocols
Internet Protocol
IP is also responsible for routing a packet from its source to its destination, which is achieved by each router forwarding the datagram to the next router in its path
IP is a connectionless protocol that provides no flow control, no error control, and no congestion control services
The network layer in the Internet includes the main protocol, Internet Protocol (IP), that defines the format of the packet, called a datagram at the network layer
-
Routing Protocol
-
A routing protocol does not take part in routing (it is the responsibility of IP), but it creates forwarding tables for routers to help them in the routing process.
Auxiliary Protocols
-
-
DHCP
The Dynamic Host Configuration Protocol (DHCP) helps IP to get the network-layer address for a host.
ARP
The Address Resolution Protocol (ARP) is a protocol that helps IP to find the link-layer address of a host or a router when its network-layer address is given
-
Transport
-
The transport layer at the source host gets the message from the application layer, encapsulates it in a transport-layer packet (called a segment or a user datagram in different protocols) and sends it, through the logical (imaginary) connection, to the transport layer at the destination host
The transport layer is responsible for giving services to the application layer: to get a message from an application program running on the source host and deliver it to the corresponding application program on the destination host.
Protocols
TCP
The main protocol, Transmission Control Protocol (TCP), is a
connection-oriented protocol that first establishes a logical connection between transport layers at two hosts before transferring data.
-
TCP provides
Flow control (matching the sending data rate of the source host with the receiving data rate of the destination host to prevent overwhelming the destination)
Error control (to guarantee that the segments arrive at the destination without error and resending the corrupted ones)
-
UDP
User Datagram Protocol (UDP), is a connectionless protocol that
transmits user datagrams without first creating a logical connection
In UDP, each user datagram is an independent entity without being related to the previous or the next one (the meaning of the term connectionless).
UDP is a simple protocol that does not provide flow, error, or congestion control
Its simplicity, which means small overhead, is attractive to an application program that needs to send short messages and cannot afford the retransmission of the packets involved in TCP, when a packet is corrupted or lost
SCTP
Stream Control Transmission Protocol (SCTP) is designed to
respond to new applications that are emerging in the multimedia
-
Application
Communication at the application layer is between two processes (two programs running at this layer)
To communicate, a process sends a request to the other process
and receives a response.
-
-
Overview
Application, Network and
Transport layers are E2E
-
-
-
-
-
Addressing
Any communication that involves two parties needs two addresses: source address and destination address
Although it looks as if we need five pairs of addresses, one pair per layer, we normally have only four because the physical layer does not need addresses; the unit of data exchange at the physical layer is a bit, which definitely cannot have an address
Application Address
At the application layer, we normally use names to define the site that provides services, such as someorg.com, or the e-mail address, such as somebody@coldmail.com
Transport Address
At the transport layer, addresses are called port numbers, and these define the application-layer programs at the source and
destination
-
Network Layer
At the network-layer, the addresses are global, with the whole
Internet as the scope
-
Link Layer
The link-layer addresses, sometimes called MAC addresses, are
locally defined addresses, each of which defines a specific host or router in a network (LAN or WAN)
-
OSI Model
An ISO standard that covers all aspects of network communications is the Open Systems Interconnection (OSI)
model
The purpose of the OSI model is to show how to facilitate communication between different systems without requiring
changes to the logic of the underlying hardware and software
The OSI model is not a protocol; it is a model for understanding and designing a network architecture that is flexible, robust, and interoperable
-
The OSI model is a layered framework for the design of network systems that allows communication between all types of computer systems
It consists of seven separate but related layers, each of which defines a part of the process of moving information
across a network
-
OSI Vs TCP/IP
When we compare the two models, we find that two layers, session and presentation, are missing from the TCP/IP protocol suite
-
The application layer in the suite is usually considered to be the combination of three layers in the OSI model
-
-
Layers
Datalink Layer
Sublayers
Data Link Control(DLC)
Functions
Framing
Framing in the data-link layer separates a message from one source to a destination by adding a sender address and a destination address
The destination address defines where the packet is to go; the sender address helps the recipient acknowledge the receipt
Frame Size
Fixed Size Framing
- 2 more items...
Variable Sized Framing
- 2 more items...
Flow and Error Control
Flow Control
Whenever an entity produces items and another entity consumes them, there should be a balance between production and consumption rates
If the items are produced faster than they can be consumed, the consumer can be overwhelmed and may need to discard some items
-
Error Control
- 3 more items...
-
DLC Protocol
Connectionless Protocol
In a connectionless protocol, frames are sent from one node to the next without any relationship between the frames; each frame is independent
-
-
-
Protocols
Simple
-
We assume that the receiver can immediately handle any frame it receives, i.e. the receiver can never be overwhelmed with incoming frames
The data-link layer at the sender gets a packet from its network layer, makes a frame out of it, and sends the frame
The data-link layer at the receiver receives a frame from the link, extracts the packet from the frame, and delivers the packet to its network layer
The data-link layers of the sender and receiver provide transmission services for their network layers
Stop-and-Wait
-
In this protocol, the sender sends one frame at a time and waits for an acknowledgment before sending the next one
To detect corrupted frames, we need to add a CRC to each data frame
When a frame arrives at the
receiver site, it is checked
- 2 more items...
Every time the sender sends
a frame, it starts a timer
- 4 more items...
State Machine
- 2 more items...
-
Piggybacking
-
When node A is sending data to node B, Node A also acknowledges the data received from node B
Because piggybacking makes communication at the data-link layer more complicated, it is not a common practice
-
-
Link Layer Addressing
When a datagram passes from the network layer to the data-link layer, the datagram will be encapsulated in a frame and two data link addresses are added to the frame header
A link-layer address is sometimes called a link address, sometimes a physical address, and sometimes a MAC address
-
Types
-
Multicast Address
-
Multicasting means one-to-many communication. However, the jurisdiction is local (inside the link)
-
-
Ethernet
-
-
Generations
-
Gigabit Ethernet
The goals of the Gigabit Ethernet were to upgrade the data rate to 1 Gbps, but keep the address length, the frame format, and the maximum and minimum frame length the same, along with autonegotiation
-
-
Standard Ethernet
Characteristics
-
-
Frame Length
-
Minimum Length
- 5 more items...
Maximum Length
- 3 more items...
Addressing
Each station on an Ethernet network (such as a PC, workstation, or printer) has its own network interface card (NIC)
-
The Ethernet address is 6 bytes (48 bits), normally written in
hexadecimal notation, with a colon between the bytes
-
Cast
If the least significant bit of the first byte in a destination address is 0, the address is unicast; otherwise, it is multicast
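A minimal Python sketch of this check (the sample addresses are illustrative only):
# Classify an Ethernet (MAC) address as unicast, multicast, or broadcast
# using the least significant bit of the first byte.
def classify_mac(mac: str) -> str:
    if mac.lower() == "ff:ff:ff:ff:ff:ff":
        return "broadcast"
    first_byte = int(mac.split(":")[0], 16)
    return "multicast" if first_byte & 1 else "unicast"

print(classify_mac("4a:30:10:21:10:1a"))   # unicast   (0x4a has LSB 0)
print(classify_mac("47:20:1b:2e:08:ee"))   # multicast (0x47 has LSB 1)
print(classify_mac("ff:ff:ff:ff:ff:ff"))   # broadcast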
-
Access Method
-
The designer of the standard Ethernet actually put a restriction of 2500 meters because we need to consider the delays encountered throughout the journey
-
Efficiency
The efficiency of the Ethernet is defined as the ratio of the time used by a station to send data to the time the medium is occupied by this station
-
a
-
-
As the value of parameter a decreases, the efficiency increases. This means that if the length of the media is shorter or the frame size longer, the efficiency increases
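A minimal Python sketch of this relationship, assuming the commonly quoted approximation Efficiency = 1 / (1 + 6.4 × a), where a = propagation time / transmission time:
def ethernet_efficiency(prop_time: float, trans_time: float) -> float:
    # a grows when the medium is longer (more propagation) or the frame is shorter
    a = prop_time / trans_time
    return 1 / (1 + 6.4 * a)

# Example: 10 Mbps Ethernet, 512-bit minimum frame, 2500 m at 2e8 m/s
trans_time = 512 / 10e6     # 51.2 microseconds
prop_time = 2500 / 2e8      # 12.5 microseconds
print(round(ethernet_efficiency(prop_time, trans_time), 2))   # about 0.39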
-
Enhancements
Bridged Ethernet
If only one station has frames to send, it benefits from the total capacity (10 Mbps)
But if more than one station needs to use the network, the capacity is shared
Bridge
-
Bandwidthwise, each network is independent
In an unbridged Ethernet network, the total capacity (10 Mbps) is shared among all stations with a frame to send; the stations share the bandwidth of the network
Switched Ethernet
Instead of having two to four networks, why not have N networks, where N is the number of stations on the LAN?
-
-
Full-Duplex Ethernet
-
In full-duplex switched Ethernet, there is no need for the CSMA/CD method
In a full-duplex switched Ethernet, each station is connected to the switch via two separate links
-
MAC Control
To provide for flow and error control in full-duplex switched Ethernet, a new sublayer, called the MAC control, is added between the LLC sublayer and the MAC sublayer
Hubs
-
Signals that carry information within a network can travel a fixed distance before attenuation endangers the integrity of the data
A repeater receives a signal and, before it becomes too weak or corrupted, regenerates and retimes the original bit pattern
-
Usage
When Ethernet LANs were using bus topology, a repeater was used to connect two segments of a LAN to overcome the length restriction of the coaxial cable
In a star topology, a repeater is a multiport device, often called a hub, that can be used to serve as the connecting point and at the same time function as a repeater
-
Switch
-
As a physical-layer device, it regenerates the signal it receives
As a link-layer device, the link-layer switch can check the MAC addresses (source and destination) contained in the frame
Filtering
-
It can check the destination address of a frame and can decide from which outgoing port the frame should be sent
-
-
Transparent Switches
A transparent switch is a switch in which the stations are completely unaware of the switch’s existence
If a switch is added or deleted from the system, reconfiguration of
the stations is unnecessary
-
-
Routers
A router is a three-layer (physical, data-link, and network) device
As a physical-layer device, it regenerates the signal it receives
As a link-layer device, the router checks the physical addresses (source and destination) contained in the packet
As a network-layer device, a router checks the network-layer addresses
A router can connect networks. In other words, a router is an internetworking device; it connects independent networks to form an internetwork
A router changes the link-layer address of the packet (both source and destination) when it forwards the packet
VLAN
A local area network configured by software, not by physical wiring
-
-
-
-
-
-
Transmission Time
If there are multiple routers, there is a transmission delay at each router in addition to the one at the source node
-
The propagation delay applies to each link on the path: source to router, router to router, and router to sink
For multiple packets (say n packets), we can calculate the total delivery time for the first packet and then add (n - 1) x transmission time, since the remaining packets are pipelined behind it
-
-
With store-and-forward routers, the first packet arrives after roughly n x (T + P), where n is the number of links, T is the transmission time, and P is the propagation time of each link
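A rough Python sketch of these delay calculations, assuming store-and-forward routers and ignoring processing and queuing delays:
def total_time(n_packets: int, n_links: int, T: float, P: float) -> float:
    # T = transmission time per packet, P = propagation time per link
    first_packet = n_links * (T + P)   # first packet is transmitted and propagated on every link
    rest = (n_packets - 1) * T         # remaining packets are pipelined behind it
    return first_packet + rest

# Example: 10 packets over 3 links, T = 1 ms, P = 2 ms  ->  3*(1+2) + 9*1 = 18 ms
print(total_time(10, 3, 1e-3, 2e-3))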
-
-
-
-
Network Layer
Goals
The network layer in the TCP/IP protocol suite is responsible for the host-to-host delivery of datagrams
-
Network Layer Services
Packetizing
The first duty of the network layer is packetizing: encapsulating the payload (data received from upper layer) in a network-layer packet at the source and decapsulating the payload from the network-layer packet at the destination
The duty of the network layer is to carry a payload from the source to the destination without changing it or using it
The network layer provides the service of a carrier, such as the post office, which is responsible for delivering packages from a sender to a receiver without changing or using the contents
The source host receives the payload from an upper-layer protocol, adds a header that contains the source and destination addresses and some other information that is required by the network-layer protocol and delivers the packet to the data-link layer
The source is not allowed to change the content of the payload unless it is too large for delivery and needs to be fragmented
The destination host receives the network-layer packet from its data-link layer, decapsulates the packet, and delivers the payload to the corresponding upper-layer protocol
If the packet is fragmented at the source or at routers along the path, the network layer is responsible for waiting until all fragments arrive, reassembling them, and delivering them to the upper-layer protocol
The routers in the path are not allowed to decapsulate the packets they received unless the packets need to be fragmented
The routers are not allowed to change source and destination addresses either. They just inspect the addresses for the purpose of forwarding the packet to the next network on the path
If a packet is fragmented, the header needs to be copied to all fragments and some changes are needed
Routing and Forwarding
Routing
-
-
-
This is done by running some routing protocols to help
the routers coordinate their knowledge about the neighborhood and to come up with consistent tables to be used when a packet arrives
Forwarding
Forwarding can be defined as the action applied by each router when a packet arrives at one of its interfaces
The decision-making table a router normally uses for applying this action is sometimes called the forwarding table and sometimes the routing table
When a router receives a packet from one of its attached networks, it needs to forward the packet to another attached network (in unicast routing) or to some attached networks (in multicast routing)
Packet Switching
Approaches
Datagram Approach
When the network layer provides a connectionless service, each packet traveling in the Internet is an independent entity; there is no relationship between packets belonging to the same message
-
A packet belonging to a message may be followed by a packet belonging to the same message or to a different message
-
Each packet is routed based on the information contained in its header: source and destination addresses
-
Virtual-Circuit Approach
In a connection-oriented service (also called virtual-circuit approach), there is a relationship between all packets belonging to a message
Before all datagrams in a message can be sent, a virtual connection should be set up to define the path for the datagrams
After connection setup, the datagrams can all follow the same path
In this type of service, not only must the packet contain the source and destination addresses, it must also contain a
flow label, a virtual circuit identifier that defines the virtual path the packet should follow
Part of the packet path may still be using the connectionless service, so source and destination addresses are still added to the packet
Phases
Setup Phase
In the setup phase, a router creates an entry for a virtual circuit
Two auxiliary packets need to be exchanged between the sender and the receiver: the request packet and the acknowledgment
packet
Request packet
- 2 more items...
Acknowledgment Packet
- 1 more item...
-
-
Performance
Delay
-
-
-
-
Total Delay
-
If we have n routers, we have (n + 1) links
-
-
-
-
Throughput
Throughput at any point in a network is defined as the number of bits passing through the point in a second, which is actually the transmission rate of data at that point
In a path from source to destination, a packet may pass through several links (networks), each with a different transmission rate
Throughput = minimum {TR1, TR2, ..., TRn}
The actual situation in the Internet is that the data normally passes through two access networks and the Internet backbone
The Internet backbone has a very high transmission rate, in the range of gigabits per second
This means that the throughput is normally defined as the minimum transmission rate of the two access links that connect the source and destination to the backbone
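A one-line Python sketch of this bottleneck rule (the link rates are made up):
def path_throughput(link_rates_bps):
    # end-to-end throughput is limited by the slowest link on the path
    return min(link_rates_bps)

print(path_throughput([100e6, 10e9, 40e6]))   # 40 Mbps: the slower access link wins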
Shared Link
-
-
The transmission rate of the link between the two routers is actually shared between the flows and this should be considered when we calculate the throughput
Packet Loss
Another issue that severely affects the performance of communication is the number of packets lost during transmission
When a router receives a packet while processing another packet, the received packet needs to be stored in the input buffer waiting for its turn
A router, however, has an input buffer with a limited size
-
The effect of packet loss on the Internet network layer is that the packet needs to be resent, which in turn may create overflow and cause more packet loss
IPv4 Addressing
An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a host or a router to the Internet
The IP address is the address of the connection, not the host or the router, because if the device is moved to another network, the IP address may be changed
IPv4 addresses are unique in the sense that each address defines one, and only one, connection to the Internet
If a device has two connections to the Internet, via two
networks, it has two IPv4 addresses
IPv4 addresses are universal in the sense that the addressing system must be accepted by any host that wants to be connected to the Internet
Address Space
IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than four billion)
Hierarchical Addressing
A 32-bit IPv4 address is also hierarchical, but divided only into two parts
Parts
Prefix/Network
The first part of the address, called the prefix, defines the network
Suffix/Host
The second part of the address, called the suffix, defines the node (connection of a device to the Internet)
-
Classful Addressing
The whole address space was divided into five classes (class A, B, C, D, and E)
Classes
Class A
-
-
In class A, the network length is 8 bits, but since the first bit, which is 0, defines the class, we can have only seven bits as the network identifier
Class B
In class B, the network length is 16 bits, but since the first two bits, which are (10)2, define the class, we can have only 14 bits as the network identifier
This means there are only 2^14 = 16,384 networks in the world that can have a class B address
-
Class C
-
In class C, the network length is 24 bits, but since three bits define the class, we can have only 21 bits as the network identifier
This means there are 2^21 = 2,097,152 networks in the world that can have a class C address
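A minimal Python sketch that infers the class from the first byte (classful addressing is obsolete, so this is only illustrative):
def ipv4_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first < 128: return "A"    # leading bit 0
    if first < 192: return "B"    # leading bits 10
    if first < 224: return "C"    # leading bits 110
    if first < 240: return "D"    # leading bits 1110 (multicast)
    return "E"                    # leading bits 1111 (reserved)

print(ipv4_class("10.0.0.1"), ipv4_class("172.16.5.9"), ipv4_class("192.168.1.1"))   # A B C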
-
-
-
Address Depletion
-
Since the addresses were not distributed properly, the Internet was faced with the problem of the addresses being rapidly used up, resulting in no more addresses available for organizations
and individuals that needed to be connected to the Internet
Since there may be only a few organizations that are as large as class A, most of the addresses in this class were wasted (unused)
Class B addresses were designed for midsize organizations, but many of the addresses in this class also remained unused
Class C addresses have a completely different flaw in design. The number of addresses that can be used in each network (256)
was so small that most companies were not comfortable using a block in this address class
Class E addresses were almost never used, wasting the whole class
Solution
Subnetting/Supernetting
To alleviate address depletion, two strategies were proposed and, to some extent, implemented: subnetting and supernetting
-
Classless Addressing
In classless addressing, variable-length blocks are used that belong to no classes
-
-
-
-
-
-
Special Addresses
This-Host Address
-
It is used whenever a host needs to send an IP datagram but it does not know its own address to use as the source address
-
Loopback Address
-
A packet with one of the addresses in this block as the destination address never leaves the host; it will remain in
the host
-
Private Addresses
-
-
-
-
The Internet Engineering Task Force (IETF) has directed the Internet Assigned Numbers Authority (IANA) to reserve the following IPv4 address ranges for private networks
-
Note:
The last IP address that can be assigned to a host does not have all 1s in the host bits, because that address is reserved for broadcast; the last assignable address has its final host bit set to 0 (the broadcast address minus one)
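This can be checked with Python's standard ipaddress module (the block below is an arbitrary example):
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.network_address)       # 192.168.1.0   (host bits all 0: the network address)
print(net.broadcast_address)     # 192.168.1.255 (host bits all 1: reserved for broadcast)
print(list(net.hosts())[-1])     # 192.168.1.254 (last address assignable to a host)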
DHCP
The Dynamic Host Configuration Protocol (DHCP) is a network management protocol used on Internet Protocol (IP) networks
A DHCP server enables computers to request IP addresses and networking parameters automatically from the Internet service provider (ISP), reducing the need for a network administrator or a user to manually assign IP addresses to all network devices
DHCP server dynamically assigns an IP address and other network configuration parameters to each device on the network, so they can communicate with other IP networks
-
DHCP is a client-server protocol in which the client sends a request message and the server returns a response message
-
DHCP Protocol
Step 1
This message is encapsulated in a UDP user
datagram with the source port set to 68 and the destination port set to 67
The user datagram is encapsulated in an IP datagram with the source address set to 0.0.0.0 (“this host”) and the destination address set to 255.255.255.255 (broadcast address)
-
The joining host creates a DHCPDISCOVER message in which only the transaction-ID field is set to a random number
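A hedged Python sketch of Step 1's addressing only: it broadcasts a placeholder payload (not a real DHCPDISCOVER message) from UDP port 68 to port 67; binding to port 68 normally requires administrator privileges, and the all-zeros source IP is what a host without an address would use:
import os, socket

transaction_id = os.urandom(4)                       # random transaction ID, as in DHCPDISCOVER
payload = b"DISCOVER-placeholder" + transaction_id

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.bind(("0.0.0.0", 68))                              # client port 68
s.sendto(payload, ("255.255.255.255", 67))           # limited broadcast to server port 67
s.close()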
Step 2
-
This message is encapsulated in a user datagram with the same port numbers, but in the reverse order
The user datagram in turn is encapsulated in a datagram with the
server address as the source IP address, but the destination address is a broadcast address, in which the server allows other DHCP servers to receive the offer and give a better offer if they can
The DHCP server or servers (if more than one) responds with a DHCPOFFER message in which the your address field defines the offered IP address for the joining host and the server address field includes the IP address of the server
Step 3
-
-
-
-
The user datagram is encapsulated in an IP datagram with the source address set to the new client address, but the destination address still is set to the broadcast address to let the other servers know that their offer was not accepted
Step 4
Finally, the selected server responds with a DHCPACK message to the client if the offered IP address is valid
If the server cannot keep its offer (for example, if the
address is offered to another host in between), the server sends a DHCPNACK message and the client needs to repeat the process
-
Ports
We said that the DHCP uses two well-known ports (68 and 67) instead of one well-known and one ephemeral
The reason for choosing the well-known port 68 instead of an ephemeral port for the client is that the response from the server to the client is broadcast
Example
A DHCP client and a DAYTIME client, for example, are both waiting to receive a response from their corresponding servers, and both have accidentally used the same temporary port number (56017, for example)
-
If the client had used an ephemeral port, both hosts would receive the broadcast response message from the DHCP server and deliver it to their clients
The DHCP client would process the message, but the DAYTIME client would be totally confused by the strange message received
-
Because the well-known port 68 is used instead, the response message from the DHCP server is not delivered to the DAYTIME client, which is running on port number 56017, not 68
-
FTP
In this mode the server does not send all of the information that a client may need for joining the network
In the DHCPACK message, the server defines the pathname of a file in which the client can find complete information such as the address of the DNS server
-
Error Control
DHCP uses the service of UDP, which is not reliable
Strategies
-
DHCP client uses timers and a retransmission policy if it does not receive the DHCP reply to a request
To prevent a traffic jam when several hosts need to retransmit a request (for example, after a power failure), DHCP forces the client to use a random number to set its timers
Transition States
-
INIT
When the DHCP client first starts, it is in the INIT state (initializing state).
-
SELECTING
When it receives an offer, the client goes to the SELECTING state
While it is there, it may receive more offers
REQUESTING
After it selects an offer, it sends a request message and goes to the REQUESTING state
BOUND
If an ACK arrives while the client is in this state, it goes to the BOUND state and uses the IP address
RENEWING
When the lease is 50 percent expired, the client tries to renew it by moving to the RENEWING state
If the server renews the lease, the client moves to the BOUND state again
REBINDING
If the lease is not renewed and the lease time is 75 percent expired, the client moves to the REBINDING state
If the server agrees with the lease (ACK message arrives), the client moves to the BOUND state and continues using the IP address; otherwise, the client moves to the INIT state and requests another IP address
Note that the client can use the IP address only when
it is in the BOUND, RENEWING, or REBINDING state
-
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically assign IP addresses to DHCP clients/nodes and to allocate TCP/IP configuration information to the DHCP clients
NAT
The technology allows a site to use a set of private addresses for internal communication and a set of global Internet addresses (at least one) for communication with the rest of the world
The site must have only one connection to the global Internet through a NAT-capable router that runs NAT software
The router that connects the network to the global address uses one private address and one global address
The private network is invisible to the rest of the Internet; the rest of the Internet sees only the NAT router
Address Translation
All of the outgoing packets go through the NAT router, which replaces the source address in the packet with the global NAT address
All incoming packets also pass through the NAT router, which replaces the destination address in the packet (the NAT router global address) with the appropriate private address
Translation Table
Maps the destination address in packets received from the external site from the NAT (global) address back to the appropriate private address
Modes
One IP Address
In its simplest form, a translation table has only two columns: the private address and the external address (destination address of the packet)
When the router translates the source address of the outgoing packet, it also makes note of the destination address — where the packet is going
When the response comes back from the destination, the
router uses the source address of the packet (as the external address) to find the private address of the packet
In this strategy, communication must always be initiated by the private network
The use of only one global address by the NAT router allows only one private-network host to access a given external host
Pool of IP Addresses
-
The number of private-network hosts that can communicate with external hosts at the same time is equal to the number of global addresses in the pool
IP and Port Addresses
To allow a many-to-many relationship between private-network hosts and external server programs, we need more information in the translation table
If the translation table has five columns, instead of two, that include the source and destination port addresses and the transport-layer protocol, the ambiguity is eliminated
The combination of source address and destination port address defines the private network host to which the response should be directed
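A minimal Python sketch of such a table (addresses, ports, and field layout are made up for illustration):
# key: (external address, external port, protocol) -> value: (private address, private port)
translation_table = {
    ("25.8.3.2", 80, "TCP"): ("172.18.3.1", 1400),
    ("25.8.3.2", 80, "UDP"): ("172.18.3.2", 1401),
}

def translate_incoming(src_addr, src_port, protocol):
    # choose the private host an incoming response should be forwarded to
    return translation_table.get((src_addr, src_port, protocol))

print(translate_incoming("25.8.3.2", 80, "TCP"))   # ('172.18.3.1', 1400)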
-
Internet Protocol
-
Protocols
The Internet Control Message Protocol version 4 (ICMPv4) helps IPv4 to handle some errors that may occur in the network-layer delivery
-
The Address Resolution Protocol (ARP) is used to glue the network and data-link layers in mapping network-layer addresses to link-layer addresses
IPv4
The main protocol, Internet Protocol version 4 (IPv4), is responsible for packetizing, forwarding, and delivery of a packet at the network layer
Best Effort
The term best-effort means that IPv4 packets can be corrupted, be lost, arrive out of order, or be delayed, and may create congestion for the network
If reliability is important, IPv4 must be paired with a reliable transport-layer protocol such as TCP
-
Connectionless
-
This means that each datagram is handled independently, and each datagram can follow a different route to the destination
This implies that datagrams sent by the same source to the
same destination could arrive out of order
-
Datagram
-
-
-
Parts
-
Payload(data)
Payload, or data, is the main reason for creating a datagram
-
Fragmentation
-
Each router decapsulates the IP datagram from the frame it receives, processes it, and then encapsulates it in another frame
The format and size of the received frame depend on the protocol used by the physical network through which the frame has just traveled
The format and size of the sent frame depend on the protocol used by the physical network through which the frame is going to travel
-
When a datagram is fragmented, each fragment has its own header with most of the fields repeated, but some have been changed
-
-
The reassembly of the datagram, however, is done only by the destination host, because each fragment becomes an independent datagram
Whereas the fragmented datagram can travel through different routes, and we can never control or guarantee which route a
fragmented datagram may take, all of the fragments belonging to the same datagram should finally arrive at the destination host
Fragmentation process
-
Divide the length of the first fragment by 8. The second fragment has an offset value equal to that result
Divide the total length of the first and second fragment by 8. The third fragment has an offset value equal to that result
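A small Python sketch of the offset calculation (the fragment size below is only an example; it must be a multiple of 8 for every fragment except possibly the last):
def fragment_offsets(payload_len: int, fragment_data_len: int):
    assert fragment_data_len % 8 == 0          # offsets are measured in 8-byte units
    offsets, pos = [], 0
    while pos < payload_len:
        offsets.append(pos // 8)
        pos += fragment_data_len
    return offsets

print(fragment_offsets(4000, 1480))   # [0, 185, 370]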
-
-
ICMPv4
-
The Internet Control Message Protocol version 4 (ICMPv4) has been designed to compensate for the above two deficiencies
-
-
-
-
When an IP datagram encapsulates an ICMP message, the value of the protocol field in the IP datagram is set to 1 to indicate that the IP payload is an ICMP message
Messages
Error-reporting messages
The error-reporting messages report problems that a router or a host (destination) may encounter when it processes an IP packet
Query messages
The query messages, which occur in pairs, help a host or a network manager get specific information from a router or another host
Format
-
-
-
Common Fields
- 3 more items...
Error
Error messages are always sent to the original source because the only information available in the datagram about the route is the source and destination IP addresses
No error message will be generated for a datagram having a
multicast address or special address (such as this host or loopback).
-
-
All error messages contain a data section that includes the IP header of the original datagram plus the first 8 bytes of data in that datagram
The original datagram header is added to give the original source, which receives the error message, information about the datagram itself
The 8 bytes of data are included because the first 8 bytes provide information about the port numbers (UDP and TCP) and sequence number (TCP)
-
-
-
Routing Protocol
Unicast Routing
In unicast routing, a packet is routed, hop by hop, from its source to its destination by the help of forwarding tables
The source host needs no forwarding table because it delivers its packet to the default router in its local network
The destination host needs no forwarding table either because it receives the packet from its default router in its local network
This means that only the routers that glue together the networks in the internet need forwarding tables
Routing a packet from its source to its destination means routing the packet from a source router (the default router of the source host) to a destination router (the router connected to the destination network)
Routing Algorithms
Distance-Vector Routing
In distance-vector routing, the first thing each node creates is its own least-cost tree with the rudimentary information it has about its immediate neighbors
The incomplete trees are exchanged between immediate neighbors to make the trees more and more complete and to represent the whole internet
A router continuously tells all of its neighbors what it knows
about the whole internet (although the knowledge can be incomplete)
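A minimal Python sketch of one such exchange, using the Bellman-Ford idea D_x(dest) = min(D_x(dest), c(x, neighbor) + D_neighbor(dest)); the vectors and costs below are made up:
INF = float("inf")

def merge_vector(my_vector, link_cost_to_neighbor, neighbor_vector):
    # update my least-cost estimates using a neighbor's advertised distance vector
    updated = dict(my_vector)
    for dest, dist in neighbor_vector.items():
        candidate = link_cost_to_neighbor + dist
        if candidate < updated.get(dest, INF):
            updated[dest] = candidate
    return updated

a_vector = {"A": 0, "B": 2, "C": INF}
b_vector = {"A": 2, "B": 0, "C": 3}
print(merge_vector(a_vector, 2, b_vector))   # {'A': 0, 'B': 2, 'C': 5}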
-
Bellman-Ford Equation
- 7 more items...
Distance Vector
- 18 more items...
Protocols
- 2 more items...
-
Problem Types
- 3 more items...
-
Requires no network information such as topology, load conditions, or the cost of different paths
-
All possible routes between the source and destination are tried, so a packet will always get through if a path exists
Since all routes are tried, at least one of them will be the shortest
-
Limitations
- 2 more items...
Hop-Count
- 3 more items...
Advantages
- 4 more items...
Selective Flooding
- 1 more item...
Types
Static Routing
- 6 more items...
Dynamic Routing
- 6 more items...
Least-Cost Routing
When an internet is modeled as a weighted graph, one of the ways to interpret the best route from the source router to the destination router is to find the least cost between the two
The source router chooses a route to the destination router in
such a way that the total cost for the route is the least cost among all possible routes
Each router needs to find the least-cost route between itself and all the other routers to be able to route a packet using this criterion
Least-Cost Trees
If there are N routers in an internet, there are (N − 1) least-cost paths from each router to any other router
-
If we have only 10 routers in an internet, we need 90 least-cost paths
-
A least-cost tree is a tree with the source router as the root that spans the whole graph (visits all other nodes) and in which
the path between the root and any other node is the shortest
In this way, we can have only one shortest-path tree for each node; we have N least-cost trees for the whole internet
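One common way to build such a tree is Dijkstra's algorithm (as used in link-state routing); a minimal Python sketch over a made-up weighted graph:
import heapq

def least_costs(graph, root):
    # returns the least cost from the root to every other node (the distances of the least-cost tree)
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {"A": {"B": 2, "D": 3}, "B": {"A": 2, "C": 5}, "C": {"B": 5, "D": 4}, "D": {"A": 3, "C": 4}}
print(least_costs(graph, "A"))   # {'A': 0, 'B': 2, 'D': 3, 'C': 7}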
Properties
- 2 more items...
Internet as a Graph
To find the best route, an internet can be modeled as a graph
-
To model an internet as a graph, we can think of each router as a node and each network between a pair of routers as an edge
An internet is, in fact, modeled as a weighted graph, in which each edge is associated with a cost
If a weighted graph is used to represent a geographical area,
the nodes can be cities and the edges can be roads connecting the cities; the weights, in this case, are distances between cities
In routing, however, the cost of an edge has a different interpretation in different routing protocols
-
If there is no edge between the nodes, the cost is infinity
Transport Layer
-
It provides a process-to-process communication between two application layers, one at the local host and the other at the remote host
Communication is provided using a logical connection, which means that the two application layers, which can be located in
different parts of the globe, assume that there is an imaginary direct connection through which they can send and receive messages
Services
-
Congestion Control
Congestion in a network may occur if the load on the network—the number of packets sent to the network—is greater than the capacity of the network—the number of packets a network can handle
Congestion control refers to the mechanisms and techniques
that control the congestion and keep the load below the capacity
Congestion in a network or internetwork occurs because routers and switches have queues—buffers that hold the packets before and after processing
A router, for example, has an input queue and an output queue for each interface
If a router cannot process the packets at the same rate at which they arrive, the queues become overloaded and congestion occurs
Congestion at the transport layer is actually the result of congestion at the network layer, which manifests itself at the transport layer
-
Flow control
Whenever an entity produces items and another entity consumes them, there should be a balance between production and consumption rates
If the items are produced faster than they can be consumed, the consumer can be overwhelmed and may need to discard some items
Pushing or Pulling
If the sender delivers items whenever they are produced ⎯ without a prior request from the consumer ⎯ the delivery is referred to as pushing
If the producer delivers the items after the consumer has requested them, the delivery is referred to as pulling
When the producer pushes the items, the consumer may be overwhelmed and there is a need for flow control, in the opposite direction, to prevent discarding of the items
The consumer needs to warn the producer to stop the delivery and to inform the producer when it is again ready to receive the items
When the consumer pulls the items, it requests them when it is ready. In this case, there is no need for flow control
Entities
-
-
Receiver transport layer
-
It is the consumer for the packets received from the sender and the producer that decapsulates the messages and delivers them to the application layer
Receiver process
The last delivery, however, is normally a pulling delivery; the transport layer waits until the application-layer process asks for
messages
-
Buffers
Although flow control can be implemented in several ways, one of the solutions is normally to use two buffers
-
-
-
When the buffer of the sending transport layer is full, it informs the application layer to stop passing chunks of messages
When there are some vacancies, it informs the application layer that it can pass message chunks again
When the buffer of the receiving transport layer is full, it informs the sending transport layer to stop sending packets
When there are some vacancies, it informs the sending transport layer that it can send packets again.
-
-
-
Flow + Error Control
Flow control requires the use of two buffers, one at the sender site and the other at the receiver site
-
These two requirements can be combined if we use two numbered buffers, one at the sender, one at the receiver
At the sender, when a packet is prepared to be sent, we use the number of the next free location, x, in the buffer as the sequence number of the packet
When the packet is sent, a copy is stored at memory location x, awaiting the acknowledgment from the other end
When an acknowledgment related to a sent packet arrives, the packet is purged and the memory location becomes free
At the receiver, when a packet with sequence number y arrives, it is stored at the memory location y until the application layer is ready to receive it
-
Sliding Window
Since the sequence numbers use modulo 2^m, a circle can represent the sequence numbers from 0 to 2^m − 1
The buffer is represented as a set of slices, called the
sliding window, that occupies part of the circle at any time
At the sender site, when a packet is sent, the corresponding slice is marked
When all the slices are marked, it means that the buffer is full and no further messages can be accepted from the application layer
When an acknowledgment arrives, the corresponding slice is unmarked
If some consecutive slices from the beginning of the window are unmarked, the window slides over the range of the corresponding sequence numbers to allow more free slices at the end of the
window
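A tiny Python sketch of the circular (modulo 2^m) numbering the window slides over:
m = 3
MOD = 2 ** m                     # sequence numbers 0 .. 2**m - 1

def window_slots(start, size):
    # sequence numbers currently covered by a window of the given size
    return [(start + i) % MOD for i in range(size)]

print(window_slots(6, 4))        # [6, 7, 0, 1]: the window wraps around the circle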
Protocols
Stop-and-Wait Protocol
Stop-and-Wait protocol, which uses both flow and error control
-
-
To detect corrupted packets, we need to add a checksum to each data packet
When a packet arrives at the receiver site, it is checked
If its checksum is incorrect, the packet is corrupted and silently discarded
-
Every time the sender sends a packet, it starts a timer
If an acknowledgment arrives before the timer expires, the timer is stopped and the sender sends the next packet (if it has one to send)
If the timer expires, the sender resends the previous packet, assuming that the packet was either lost or corrupted
-
-
Sequence Numbers
The sequence is 0, 1, 0, 1, 0, and so on
Acknowledgment Numbers
The acknowledgment numbers always announce the sequence number of the next packet expected by the receiver
If packet 0 has arrived safe and sound, the receiver sends an ACK with acknowledgment 1 (meaning packet 1 is expected next)
If packet 1 has arrived safe and sound, the receiver sends
an ACK with acknowledgment 0 (meaning packet 0 is expected)
States
Sender
-
Ready state
When the sender is in this state, it is only waiting for one event to
occur
If a request comes from the application layer, the sender creates a packet with the sequence number set to S
A copy of the packet is stored, and the packet is sent
-
-
Receiver
-
Events
-
If an error-free packet with seqNo ≠ R arrives, the packet is discarded, but an ACK with ackNo = R is sent
If a corrupted packet arrives, the packet is discarded
-
Go-Back-N
We can send several packets before receiving acknowledgments, but the receiver can only buffer one packet
-
-
Acknowledgment Numbers
An acknowledgment number in this protocol is cumulative and defines the sequence number of the next packet expected
For example, if the acknowledgment number (ackNo) is 7, it means all packets with sequence number up to 6 have arrived, safe and sound, and the receiver is expecting the packet with sequence number 7
Send Window
The send window is an abstract concept defining an imaginary
box of maximum size = (2^m) − 1 with three variables: Sf, Sn, and Ssize
The send window can slide one or more slots when an error-free ACK with ackNo greater than or equal to Sf and less than Sn (in modular arithmetic) arrives
Receive Window
The receive window makes sure that the correct data packets are received and that the correct acknowledgments are sent
In Go-Back-N, the size of the receive window is always 1
-
-
We need only one variable, Rn (receive window, next packet
expected), to define this abstraction
The sequence numbers to the left of the window belong to the packets already received and acknowledged
-
-
-
The receive window also slides, but only one slot at a time
When a correct packet is received, the window slides, Rn = (Rn + 1) modulo 2^m
Timers
Although there can be a timer for each packet that is sent, in our protocol we use only one
-
Resending packets
When the timer expires, the sender resends all outstanding packets, i.e., every packet in the current window that has been sent but not yet acknowledged
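A minimal Python sketch of this resend-all behaviour, assuming Sf is the first outstanding sequence number and Sn is the next one to be used (both modulo 2^m):
m = 3
MOD = 2 ** m

def outstanding(Sf, Sn):
    seqs, seq = [], Sf
    while seq != Sn:
        seqs.append(seq)
        seq = (seq + 1) % MOD
    return seqs

def on_timeout(Sf, Sn):
    for seq in outstanding(Sf, Sn):
        print("resend packet", seq)

on_timeout(6, 2)   # resends 6, 7, 0, 1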
-
-
Send Window Size
-
If the send window size were 2^m and all acknowledgments were lost, the retransmission of packet 0 would be accepted by the receiver as the next valid packet instead of being recognized as a duplicate
-
TCP
-
A TCP connection (socket pair) is uniquely identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL}
Sequence Number
The sequence number of the first segment is the ISN (initial sequence number), which is a random number
The sequence number of any other segment is the sequence number of the previous segment plus the number of bytes (real or imaginary) carried by the previous segment
Acknowledgment Number
The value of the acknowledgment field in a segment defines the number of the next byte a party expects to receive. The acknowledgment number is cumulative.
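A small Python sketch of how the numbers advance (the ISN and segment sizes are made up, and the sequence number consumed by the SYN is ignored for simplicity):
isn = 10_000                       # initial sequence number, chosen randomly in real TCP
segment_sizes = [1000, 1000, 500]  # bytes carried by three consecutive segments

seq = isn
for size in segment_sizes:
    print(f"segment: seq={seq}, carries {size} bytes, cumulative ack will be {seq + size}")
    seq += size                    # the next segment continues where this one ended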
Segment
-
Format
-
The segment consists of a header of 20 to 60 bytes, followed by data from the application program
-
Fields
Source port address
- 1 more item...
Destination port address
- 1 more item...
Sequence number
- 4 more items...
Acknowledgment number
- 3 more items...
Header length
- 3 more items...
-
Window Size
- 4 more items...
-
Urgent Pointer
- 2 more items...
-
States
-
-
-
-
-
-
-
CLOSE-WAIT
First FIN received, ACK sent; waiting for application to close
TIME-WAIT
Second FIN received, ACK sent; waiting for 2MSL time-out
-
-
Client
Open
The client process issues an active open command to its TCP to request a connection to a specific socket address
-
After receiving the SYN + ACK segment, TCP sends an ACK segment and goes to the ESTABLISHED state
Data are transferred, possibly in both directions, and acknowledged
Close
When it receives the ACK segment, it goes to the FIN-WAIT-2 state
-
When the client process has no more data to send, it issues a command called an active close
-
When the corresponding timer expires, the client goes to the CLOSED state
When the client receives a FIN segment, it sends an ACK segment and goes to the TIME-WAIT state
Server
Open
-
-
The TCP then sends a SYN + ACK segment and goes to the SYN-RCVD state, waiting for the client to send an ACK segment
After receiving the ACK segment, TCP goes to the ESTABLISHED state, where data transfer can take place
TCP remains in this state until it receives a FIN segment from the client signifying that there are no more data to be exchanged and the connection can be closed
Close
The server TCP, upon receiving the FIN segment, delivers any queued data to the server process with a virtual EOF marker, which means that the connection must be closed
It sends an ACK segment and goes to the CLOSE-WAIT state, but postpones sending its own FIN (the response to the client's FIN) until it receives a passive close command from its process
After receiving the passive close command, the server sends a FIN segment to the client and goes to the LAST-ACK state, waiting for the final ACK
When the ACK segment is received from the client, the server goes to the CLOSED state
Flow Control
-
-
Sender Side
-
Maximum Window = Min(Advertised Window, Congestion Window)
-
-
-
-
This means that part of the allocated buffer at the receiver may be occupied by bytes that have been received and acknowledged, but are waiting to be pulled by the receiving process
The receive window size determines the number of bytes that the receive window can accept from the sender before being overwhelmed (flow control)
-
Normally an ACK would mean that the sliding window shifts, but if the receiving process does not consume the data, the receiver reduces rwnd so that, even after sliding, the updated window will not overwhelm the receiver
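A minimal Python sketch of these two limits working together (buffer sizes are made up):
def send_window(cwnd, rwnd):
    # the sender may overrun neither the network limit (cwnd) nor the receiver limit (rwnd)
    return min(cwnd, rwnd)

buffer_size = 8000
bytes_not_yet_pulled = 3000            # received and acknowledged, but not yet consumed by the process
rwnd = buffer_size - bytes_not_yet_pulled
print(send_window(10_000, rwnd))       # 5000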
-
Shrinking of Windows
-
The send window, on the other hand, can shrink if the receiver defines a value for rwnd that results in shrinking the window
-
-
Congestion Control
Congestion Window
To control the number of segments to transmit, TCP uses another variable called a congestion window, cwnd
-
Actual window size = minimum (rwnd, cwnd)
Congestion Detection
Scenarios
If a TCP sender does not receive an ACK for a segment or
a group of segments before the time-out occurs, it assumes that the corresponding segment or segments are lost and the loss is due to congestion
-
Slow Start
Slow start is part of the congestion control strategy used by TCP in conjunction with other algorithms to avoid sending more data than the network is capable of forwarding, that is, to avoid causing network congestion
Although the strategy is referred to as slow start, its congestion window growth is quite aggressive, more aggressive than the congestion avoidance phase
Slow start begins initially with a congestion window size (CWND) of 1, 2, 4 or 10 MSS
The congestion window size is increased by one MSS with each acknowledgement (ACK) received, effectively doubling the window size each round-trip time
The transmission rate will be increased by the slow-start algorithm until either a loss is detected, or the receiver's advertised window (rwnd) is the limiting factor, or ssthresh is reached
-
After the threshold (ssthresh) is reached, the congestion window is increased linearly, by one MSS for every RTT (congestion avoidance)
AIMD
-
AIMD combines additive (linear) growth of the congestion window with a multiplicative decrease (halving the window) when congestion takes place
Multiple flows using AIMD congestion control will eventually converge to use equal amounts of a contended link
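A simplified Python sketch of the window growth per RTT (window in MSS units; slow start doubles, congestion avoidance adds one, and a congestion event halves it):
def next_cwnd(cwnd, ssthresh, loss):
    if loss:
        return max(cwnd // 2, 1)    # multiplicative decrease on congestion
    if cwnd < ssthresh:
        return cwnd * 2             # slow start: exponential growth
    return cwnd + 1                 # congestion avoidance: additive growth

cwnd, ssthresh = 1, 8
for rtt in range(8):
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")
    cwnd = next_cwnd(cwnd, ssthresh, loss=False)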
Fast Retransmit
Fast retransmit is an enhancement to TCP that reduces the time a sender waits before retransmitting a lost segment
-
If an acknowledgement is not received for a particular segment within a specified time (a function of the estimated round-trip delay time), the sender will assume the segment was lost in the network, and will retransmit the segment
-
When a sender receives three duplicate acknowledgements, it can be reasonably confident that the segment carrying the data that followed the last in-order byte specified in the acknowledgment was lost
A sender with fast retransmit will then retransmit this packet immediately without waiting for its timeout
On receipt of the retransmitted segment, the receiver can acknowledge the last in-order byte of data received
Port Numbers
Ranges
Well-Known
- 2 more items...
Registered
- 3 more items...
Ephemeral
- 2 more items...
-
-
UDP
The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol
It does not add anything to the services of IP except for providing process-to-process communication instead of host-to-host communication
User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes made of four fields, each of 2 bytes (16 bits)
-
The third field defines the total length of the user datagram, header plus data
The 16 bits can define a total
length of 0 to 65,535 bytes
The total length needs to be less because a UDP user datagram is stored in an IP datagram with the total length of 65,535 bytes
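A minimal Python sketch that packs the four 16-bit header fields (the checksum is left at zero here; computing it would need the pseudoheader discussed below):
import struct

def udp_header(src_port, dst_port, payload_len):
    total_length = 8 + payload_len                     # 8-byte header plus data
    return struct.pack("!HHHH", src_port, dst_port, total_length, 0)

header = udp_header(53, 56017, 24)
print(len(header), header.hex())                       # 8 bytes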
-
-
-
-
If the checksum did not include the pseudoheader, a user datagram might arrive safe and sound yet be delivered to the wrong host if the IP header were corrupted
-
-
-
Incoming
-
If two processes are receiving on the same destination IP/port, both will receive incoming packets
-
-
Application Layer
An application-layer protocol defines how application processes (clients and servers), running on different end systems, pass messages to each other
-
Protocols
SMTP
Overview
-
It works closely with something called the Mail Transfer Agent (MTA) to send your communication to the right computer and email inbox
An MTA (like a mail relay) checks the domain's MX record, a resource record in the DNS, to decide whether to transfer the message to another MTA or to an MDA (Mail Delivery Agent)
Using a process called “store and forward,” SMTP moves your email on and across networks
An MDA (Mail Delivery Agent) is software that stores messages for batch retrieval by the MUA (Mail User Agent, e.g., Apple Mail or Outlook Express)
SMTP is mainly a push protocol, used to send emails from one server to another (usually between mail servers)
Since it is usually limited in its ability to queue messages on the receiving end, other protocols like POP3 (handles one server but many emails) and IMAP (handles multi-server or multiple devices) are used to retrieve and handle these emails on the MUA (email client: apple mail, outlook, web mail sites like gmail and yahoo) from the server mailbox
Steps
-
The message is transferred to the server’s MTA (the MTA and MSA are usually hosted on the same SMTP server)
The MTA checks the MX record of the recipient domain and transfers the message to another MTA (this step can occur multiple times until the message is received by the proper receiving server)
The message is handed off to the MDA, which saves messages in the proper format for retrieval by the receiving MUA
-
-
TCP port 25 is the recommended port number for SMTP communications between mail servers (i.e., for relaying messages)
-
-
-
-
Protocol
-
-
-
-
-
Assuming the server is OK, the client sends the mail to its mail server
-
Then, SMTP transfers the mail from sender’s mail server to the receiver’s mail server
While sending the mail,
SMTP is used two times
-
-
-
-
-
SMTP uses persistent TCP connections, so it can send multiple emails over the same connection
-
-
-
-
SMTP cannot transfer other types of data such as images, video, or audio
-
SMTP cannot transfer text in other languages such as French, Japanese, or Chinese (it is limited to 7-bit ASCII text)
-
MIME extends the limited capabilities of email by enabling users to send and receive graphics, audio files, video files, etc. in the message
-
At the receiver's side, a pull protocol such as POP3 or IMAP is needed
-
-
FTP
-
-
-
-
-
-
-
-
-
-
FTP is used for transferring one file at a time in either direction between the client and the server.
-
DNS
-
-
-
It converts the names we type in our web browser address bar to the IP Address of web servers hosting those sites.
Need
So, a mapping is required which maps the domain names to the IP Addresses of their web servers.
-
-
So, it is difficult to remember IP Addresses directly while it is easy to remember names.
-
Resolution Steps
-
-
If a match is found, it sends the corresponding IP Address back.
If no match is found, it sends a query to the local DNS server.
-
After receiving a response, the DNS client returns the resolution result to the application.
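From the application's point of view the whole procedure is a single call; a minimal Python sketch (the hostname is just an example) that lets the OS resolver walk through these steps:
import socket

hostname = "example.com"
print(socket.gethostbyname(hostname))   # prints the resolved IPv4 address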
-
-
-
-
-
-
Also, it can translate an IP Address into a domain name.
For the first time, there is more delay in translating the domain name into an IP Address.
-
-
-
So, there is more delay for the first time.
To reduce the delay the next time, resolved IP Addresses are cached on the computer.
-
When the cached entry expires, the request is again served through DNS.
-
-
-
Synchronous
For synchronous data transfer, both the sender and receiver access the data according to the same clock
Therefore, a special line for the clock signal is required
A master (or one of the senders) should provide the clock signal to all the receivers in the synchronous data transfer
Asynchronous
For asynchronous data transfer, there is no common clock signal between the sender and receivers
Therefore, the sender and the receiver first need to agree on a data transfer speed
-
Both the sender and receiver set up their own internal circuits to make sure that data access follows that agreement
Computer clocks also differ in accuracy. Although the difference is very small, it can accumulate fast and eventually cause errors in data transfer
This problem is solved by adding synchronization bits at the front, middle or end of the data
Since the synchronization is done periodically, the receiver can correct the clock accumulation error
-
Sending these extra synchronization bits may account for up to 50% data transfer overhead and hence slows down the actual data transfer rate
-
-