How many main functions can be used in a C language program?

Only one main function can be used in a C language program. We can use many functions in a C program, but there can be only one main function.

When we divide a large program into small parts using functions, it becomes easy for us to manage the big program. We do this so that the sub-programs can be managed individually, and it helps us to find and resolve errors easily.






Data Communication and Computer Network (MCA(IV)/042/Assignment/2020-21)

Q1. (a) Define Modulation. What are its advantages?


Ans. Modulation is the process of converting data into radio waves by adding information to an electronic or optical carrier signal. A carrier signal is one with a steady waveform: constant height (amplitude) and frequency. Information can be added to the carrier by varying its amplitude, frequency, phase, polarization (for optical signals), and even quantum-level phenomena like spin.

Modulation is usually applied to electromagnetic signals: radio waves, lasers/optics and computer networks. Modulation can even be applied to a direct current (which can be treated as a degenerate carrier wave with a fixed amplitude and a frequency of 0 Hz), mainly by turning it on and off, as in Morse code telegraphy or a digital current loop interface. The special case of no carrier (a response message indicating an attached device is no longer connected to a remote system) is called baseband modulation.


Advantages of Modulation

If modulation were not used, the antenna required for transmission would have to be very large, and the range of communication would be limited, as the wave cannot travel far without getting distorted.

Following are some of the advantages of implementing modulation in communication systems.

• Avoids mixing of signals - This is a point from the practical side of things. Suppose you are transmitting the baseband signal as it is to a receiver, say your friend's phone. Just like you, there will be thousands of people in the city using their mobile phones. There is no way to tell such signals apart, and they will interfere with each other, leading to a lot of noise in the system and a very bad output. By using a carrier wave of high frequency and allotting a band of frequencies to each message, there is no mixing up of signals, and the received signals arrive without interference.

• Reduction in the height of antenna - For the transmission of radio signals, the antenna height must be a multiple of λ/4, where λ is the wavelength. Modulation raises the signal to a much higher carrier frequency, which reduces λ and hence the required antenna height.

• Increase in the range of communication - By using modulation to transmit the signals through space to long distances, we have removed the need for wires in the communication systems. The technique of modulation helped humans to become wireless.

• Multiplexing is possible - Multiplexing is a process in which two or more signals can be transmitted over the same communication channel simultaneously. This is possible only with modulation.

• Improves quality of reception - With frequency modulation (FM), and digital communication techniques like PCM, the effect of noise is reduced to a great extent. This improves the quality of reception.


(b) What is meant by QAM? How is it different from PSK? Draw constellation diagrams of 8-PSK and 16-QAM.


 


Ignou Study Helper-Sunil Poonia Page 1



Ans. Quadrature Amplitude Modulation (QAM) is a signal in which two carriers shifted in phase by 90 degrees (i.e. sine and cosine) are modulated and combined. As a result of their 90° phase difference they are in quadrature, and this gives rise to the name. Often one signal is called the In-phase or "I" signal, and the other is the quadrature or "Q" signal.

The resultant overall signal, consisting of the combination of both I and Q carriers, contains both amplitude and phase variations. In view of the fact that both amplitude and phase variations are present, it may also be considered as a mixture of amplitude and phase modulation.

A motivation for the use of quadrature amplitude modulation comes from the fact that a straight amplitude modulated signal, i.e. double sideband even with a suppressed carrier, occupies twice the bandwidth of the modulating signal. This is very wasteful of the available frequency spectrum. QAM restores the balance by placing two independent double sideband suppressed carrier signals in the same spectrum as one ordinary double sideband suppressed carrier signal.


Different from PSK: Phase modulation (analog PM) and phase-shift keying (digital PSK) can be regarded as special cases of QAM, where the amplitude of the transmitted signal is constant but its phase varies. This can also be extended to frequency modulation (FM) and frequency-shift keying (FSK), for these can be regarded as a special case of phase modulation.

QAM is used extensively as a modulation scheme for digital telecommunication systems, such as in the 802.11 Wi-Fi standards. Arbitrarily high spectral efficiencies can be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the communications channel. QAM is also used in optical fiber systems as bit rates increase; QAM16 and QAM64 can be optically emulated with a 3-path interferometer.


Draw constellation diagrams of 8-PSK and 16-QAM:

[Figure: 8-PSK constellation - eight equal-amplitude points spaced 45° apart on a circle, each labelled with a tribit (000, 001, 010, ...); 16-QAM constellation - a 4 x 4 grid of points differing in both amplitude and phase.]

Q2 (a) How is Shannon's theorem different from Nyquist's theorem? What is the channel capacity for a bandwidth of 3 kHz and a signal to noise ratio of 30 dB? Show the calculation.

Ans. Shannon's Sampling theorem states that a digital waveform must be updated at least twice as fast as the bandwidth of the signal to be accurately generated. The same image that was used for the Nyquist example can be used to demonstrate Shannon's Sampling theorem. The following figure shows a desired 5 MHz sine wave generated by a 6 MS/s DAC. The dotted line represents the desired waveform, and the arrows represent the digitized samples that are available to recreate the continuous-time 5 MHz sine wave. The solid line indicates the signal that would be seen, for example, with an oscilloscope at the output of the DAC.


 




 


 


In this case, the high-frequency sine wave is the desired signal, but it was severely undersampled by being generated by only a 6 MS/s DAC; the actual resulting waveform is a 1 MHz signal.

In systems where you want to generate accurate signals using sampled data, you must set the sampling rate high enough to prevent aliasing.


The Nyquist theorem states that an analog signal must be sampled at least twice as fast as the bandwidth of the signal to accurately reconstruct the waveform; otherwise, the high-frequency content creates an alias at a frequency inside the spectrum of interest (passband). An alias is a false lower-frequency component that appears in sampled data acquired at too low a sampling rate. The following figure shows a 5 MHz sine wave digitized by a 6 MS/s analog-to-digital converter (ADC). In this figure, the dotted line represents the sine wave being digitized, while the solid line represents the aliased signal recorded by the ADC at that sample rate.



The 5 MHz frequency aliases back in the passband, falsely appearing as a 1 MHz sine wave.


The channel capacity for a bandwidth of 3 kHz and a signal-to-noise ratio of 30 dB, using Shannon's formula C = B log2(1 + S/N): an SNR of 30 dB corresponds to S/N = 10^(30/10) = 1000, so

C = 3000 * log2(1 + 1000) = 3000 * log2(1001) ≈ 29,902 bps

which is a little less than 30 kbps.


Satellite TV Channel

For a satellite TV channel with a signal-to-noise ratio of 20 dB (S/N = 10^(20/10) = 100) and a video bandwidth of 10 MHz, we get a maximum data rate of:

C = 10,000,000 * log2(101)

which is about 66 Mbps.


(b) What is the need of bit stuffing? A bit string 0111111000011111110001110 needs to be transmitted at the data link layer. What is the string actually transmitted after bit stuffing? How is bit stuffing implemented in HDLC?

Ans. Bit stuffing is the process of inserting non-information bits into data to break up bit patterns, to affect the synchronous transmission of information. It is widely used in network and communication protocols, in which bit stuffing is a required


 




part of the transmission process. Bit stuffing is commonly used to bring bit streams up to a common transmission rate or to

fill frames. Bit stuffing is also used for run-length limited coding.


The bit stuffing concept goes like this:

• Each frame begins and ends with a special bit pattern called a flag byte, 01111110.
• Whenever the sender's data link layer encounters five consecutive ones in the data stream, it automatically stuffs a 0 bit into the outgoing stream.
• Correspondingly, when the receiver sees five consecutive incoming ones followed by a 0 bit, it automatically destuffs the 0 bit before sending the data to the network layer.

Now, moving towards the question, we have:

Data set to be transmitted: 0111111000011111110001110

Scanning left to right, a 0 is stuffed after every run of five consecutive ones (the string contains the runs 111111 and 1111111):

Data set after bit stuffing: 011111010000111110110001110

So 011111010000111110110001110 is the final string after bit stuffing, but the frame to be transmitted carries the special flag pattern 01111110 at each end as well. Therefore the frame is

01111110 011111010000111110110001110 01111110


Bit stuffing implemented in HDLC: Consider the data field X of an HDLC frame of arbitrary length N, where the bit sequence X = {x1, x2, x3, ..., xN} is uniformly random and where a 0-bit is inserted after n consecutive 1-bits. If we segment the sequence X into subsequences of n consecutive bits, there will be N - n + 1 such subsequences. The probability that any such subsequence will be all 1-bits is 2^-n. However, the act of bit stuffing affects the number of stuffable subsequences, because stuffable subsequences are highly correlated. If the subsequence x_k, x_{k+1}, x_{k+2}, ..., x_{k+n-1} is all 1-bits, the probability that the subsequence x_{k+1}, x_{k+2}, ..., x_{k+n-1}, x_{k+n} is also all 1-bits is considerably greater than 2^-n. In fact, since the sequence is uniformly random, x_{k+n} will be a 1-bit with a probability of 0.5. The probability that the next subsequence is also all 1-bits is 0.25, and so on. Consequently, whenever bit stuffing occurs, an additional number of subsequences of n 1-bits may be lost. The expected number of subsequences lost will be:


L = Σ (i = 1 to n-1) 2^-i

So bit stuffing reduces the number of stuffable subsequences by a factor of 1 + L. That is, the rate at which bits will be stuffed is:

2^-n / (1 + L)

Note that this is independent of the length of the sequence N.
We can make the useful observation that L has an upper bound of 1, approached as n becomes large. That is, if we denote the bit stuffing rate for large n by R∞, then


 


R∞ = 2^-n / (1 + Σ (i = 1 to n-1) 2^-i) ≈ 2^-n / 2 = 2^-(n+1) for large n


We now illustrate this with an example. In HDLC, n is 5. If the subsequence x_{k+1}, x_{k+2}, x_{k+3}, x_{k+4}, x_{k+5} is all 1-bits, the probability that the subsequence x_{k+2}, x_{k+3}, x_{k+4}, x_{k+5}, x_{k+6} is also all 1-bits is 0.5. However, after bit stuffing, this subsequence will no longer be all 1-bits. By calculating the expectation of the number of all-1-bit subsequences lost (L) we see that:

L = 0.5 + 0.25 + 0.125 + 0.0625 = 0.9375


Without stuffing, we would expect subsequences consisting of all 1-bits to occur at the rate of 2^-5 = 0.0313. However, the action of bit stuffing reduces the number of stuffable subsequences. Every time we stuff a subsequence, we lose (on average) 0.9375 additional stuffable subsequences. Consequently, the rate at which we bit-stuff a random subsequence will be

2^-5 / (1 + 0.9375) = 0.0313 / 1.9375 ≈ 0.0161


We now test this and other cases with simulation.


Q3. Define checksum and write the algorithm for computing the checksum. Given a sequence frame of 10 bits: 1001100101 and a divisor (polynomial) of 1011, find the CRC.

Ans. A checksum is a computed value that allows you to check the validity of something. Typically, checksums are used in data transmission contexts to detect whether the data has been transmitted successfully.

Checksums take on various forms, depending upon the nature of the transmission and the needed reliability. For example, the simplest checksum is to sum up all the bytes of a transmission, computing the sum in an 8-bit counter. This value is appended as the last byte of the transmission. The idea is that upon receipt of n bytes, you sum up the first n-1 bytes and see if the answer is the same as the last byte. Since this is a bit awkward, a variant on this theme is to, on transmission, sum up all the bytes, then (treating the byte as a signed, 8-bit value) negate the checksum byte before transmitting it. This means that the sum of all n bytes should be 0. These techniques are not terribly reliable; for example, if the packet is known to be 64 bytes in length, and you receive 64 '\0' bytes, the sum is 0, so the result appears correct. Of course, if there is a hardware failure that simply fails to transmit the data bytes (particularly easy on synchronous transmission, where no "start bit" is involved), then the fact that you receive a packet of 64 zero bytes with a checksum result of 0 is misleading; you think you've received a valid packet and you've actually received nothing at all. A solution to this is to do something like negate the computed checksum value, subtract 1 from it, and expect that the result of the receiver's checksum of the n bytes is 0xFF (-1, as a signed 8-bit value). This means that the zero-sum problem goes away.


Algorithm for computing the checksum:


 




class checksum {
public:
    checksum() { clear(); }
    void clear() { sum = 0; r = 55665; c1 = 52845; c2 = 22719; }
    void add(DWORD w);
    void add(BOOL w) { add((DWORD)w); }
    void add(UINT w) { add((DWORD)w); }
    void add(WORD w);
    void add(const CString & s);
    void add(LPBYTE b, UINT length);
    void add(BYTE b);
    DWORD get() { return sum; }
protected:
    WORD r;
    WORD c1;
    WORD c2;
    DWORD sum;
};


The CRC or cyclic redundancy check is a checksum algorithm that detects inconsistency of data, that is, bit errors during data transmission.

As per the question -

The sequence after adding n = 3 extra zeros: 1001100101000
Divisor (of length n + 1): 1011


The modulo-2 (XOR) division proceeds as follows, with the divisor aligned under the leftmost remaining 1 at each step:

1001100101000
1011
-------------
0010100101000
  1011
-------------
0000010101000
     1011
-------------
0000000011000
        1011
-------------
0000000001110
         1011
-------------
0000000000101

The remainder is 101. Hence, the CRC is 101, and the transmitted frame is 1001100101101.


Q4. (a) What are the limitations of MACA? How are these limitations overcome in MACAW? Explain.

Ans. Limitations of MACA: MACA provides no link-layer acknowledgement, so lost frames must be recovered by the much slower transport layer; its binary exponential backoff is unfair and fluctuates sharply; and contending stations have no way to learn when a data transfer is in progress, so some stations can be starved. MACAW overcomes these limitations as follows:
Ans. Limitations of MACA:


• Backoff algorithm: MACAW replaces BEB with MILD (multiplicative increase and linear decrease) to ensure that the backoff interval grows a bit slowly (1.5x instead of 2x) and shrinks really slowly (linearly to the minimum value). To enable better congestion detection, MACAW shares backoff timers among stations by putting this information in headers.

• Multiple stream model: MACAW uses separate queues for each stream in each node for increased fairness. In addition, each queue runs an independent backoff algorithm. However, all stations attempting to communicate with the same receiver should use the same backoff value.

• Basic exchange: MACAW extends RTS-CTS-DATA to RTS-CTS-DS-DATA-ACK with the following additions:


 




1. ACK: An extra ACK at the end ensures that errors can be recovered in the link layer, which is much faster than transport layer recovery. If an ACK is lost, the next RTS can generate another ACK for the previous transmission.

2. DS: This signal ensures a 3-way handshake between sender and receiver (similar to TCP) so that everyone within hearing distance of the two stations knows that a data transmission is about to happen. Without the DS packet, stations vying for the shared medium cannot compete properly, and one is always starved due to its lack of knowledge of the contention period. In short, DS enables synchronization.

3. RRTS: RRTS is basically a proxy RTS, used when the actual RTS sender is too far away to fight for the contention slot. However, there is one scenario where even RRTS cannot guarantee fair contention.

4. Multicast: Multicast is handled by sending data right away after the RTS packet, without waiting for CTS. It suffers from the same problems as in CSMA, but the authors leave it as an open challenge.

• Evaluation: The MACAW paper presents a simulation-based evaluation, which shows that MACAW is fairer and gives higher throughput than MACA.


Limitations overcome in MACAW: To see what MACAW improves, first recall how MACA works. Multiple Access with Collision Avoidance (MACA) is a slotted media access control protocol used in wireless LAN data transmission to avoid collisions caused by the hidden station problem and to simplify the exposed station problem.

The basic idea of MACA is that a wireless network node makes an announcement before it sends the data frame, to inform other nodes to keep silent. When a node wants to transmit, it sends a signal called Request-To-Send (RTS) with the length of the data frame to send. If the receiver allows the transmission, it replies to the sender with a signal called Clear-To-Send (CTS) carrying the length of the frame that it is about to receive.

Let us consider that a transmitting station A has a data frame to send to a receiving station B. The operation works as follows:

• Station A sends an RTS frame to the receiving station.
• On receiving the RTS, station B replies by sending a CTS frame.
• On receipt of the CTS frame, station A begins transmitting its data frame.


[Figure: Transmitting Station A and Receiving Station B exchanging RTS/CTS frames, with neighboring stations within range of each.]

Any node that receives the CTS frame knows that it is close to the receiver and therefore cannot transmit a frame.

Any node that receives the RTS frame but not the CTS frame knows that it is not close enough to the receiver to interfere with it, so it is free to transmit data.

WLAN data transmission collisions may still occur, and MACA for Wireless (MACAW) is introduced to extend the function of MACA. It requires nodes to send acknowledgements after each successful frame transmission, as well as the additional function of Carrier Sense.


 


[Figure: MACAW exchange (RTS-CTS-DS-DATA-ACK) between Transmitting Station A and Receiving Station B, with neighboring stations within range of each.]

RTS → Request-To-Send frame
CTS → Clear-To-Send frame
DS → Data-Sending frame
ACK → Acknowledgment frame


(b) How does CSMA/CD work? Write all the steps. Explain the binary exponential backoff algorithm in case of collision. How to calculate backoff time?

Ans. CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is a media access control method that was widely used in early Ethernet LANs, when the topology was a shared bus and the nodes (computers) were connected by coaxial cables. Nowadays Ethernet is full duplex and CSMA/CD is not used, as the topology is either star (connected via a switch or router) or point-to-point (a direct connection), but it is still supported.

Consider a scenario where there are 'n' stations on a link, and all are waiting to transfer data through that channel. In this case, all 'n' stations would want to access the link/channel to transfer their own data. A problem arises when more than one station transmits at the same moment: there will be collisions between the data from the different stations.

CSMA/CD is one such technique where the different stations that follow this protocol agree on some terms and collision detection measures for effective transmission. This protocol decides which station will transmit when, so that data reaches the destination without corruption.

How CSMA/CD works:


• Step 1: Check if the sender is ready to transmit data packets.

• Step 2: Check if the transmission link is idle.
The sender has to keep checking whether the transmission link/medium is idle. For this, it continuously senses transmissions from other nodes and sends dummy data on the link. If it does not receive any collision signal, the link is idle at the moment. If it senses that the carrier is free and there are no collisions, it sends the data; otherwise, it refrains from sending data.

• Step 3: Transmit the data and check for collisions.
The sender transmits its data on the link. CSMA/CD does not use an 'acknowledgement' system; it checks for successful and unsuccessful transmissions through collision signals. During transmission, if a collision signal is received by the node, transmission is stopped. The station then transmits a jam signal onto the link and waits for a random time interval before it resends the frame. After some random time, it again attempts to transfer the data and repeats the above process.

• Step 4: If no collision was detected in propagation, the sender completes its frame transmission and resets the counters.


 




Binary Exponential Backoff Algorithm in case of Collision

Step 1: The station continues transmission of the current frame for a specified time along with a jam signal, to ensure that all the other stations detect the collision.

Step 2: The station increments the retransmission counter, c, that denotes the number of collisions.

Step 3: The station selects a random number of slot times in the range 0 to 2^c - 1. For example, after the first collision (i.e. c = 1), the station will wait for either 0 or 1 slot times. After the second collision (i.e. c = 2), the station will wait anything between 0 and 3 slot times. After the third collision (i.e. c = 3), the station will wait anything between 0 and 7 slot times, and so forth.

Step 4: If the station selects a number k in the range 0 to 2^c - 1, then

Back_off_time = k x Time slot,

where a time slot is equal to the round trip time (RTT).

Step 5: At the end of the backoff time, the station attempts retransmission by continuing with the CSMA/CD algorithm.

Step 6: If the maximum number of retransmission attempts is reached, the station aborts transmission.


Back Off Time
In the CSMA/CD protocol,

• After the occurrence of a collision, the station waits for some random back off time and then retransmits.
• This waiting time, for which the station waits before retransmitting the data, is called the back off time.
• The back off algorithm is used for calculating the back off time:

Back off time = k x Time slot
where the value of one time slot = 1 RTT


Q5. Explain the operation of the distance vector routing protocol. What is the reason for the count to infinity problem in the distance vector routing protocol? How is the above problem overcome in the link state routing algorithm?


Ans. A distance-vector routing (DVR) protocol requires that a router inform its neighbors of topology changes periodically. It is historically known as the old ARPANET routing algorithm (also known as the Bellman-Ford algorithm).

Bellman-Ford basics - Each router maintains a distance vector table containing the distance between itself and ALL possible destination nodes. Distances, based on a chosen metric, are computed using information from the neighbors' distance vectors.


Count to infinity problem in the distance vector routing protocol: The main issue with distance vector routing (DVR) protocols is routing loops, since the Bellman-Ford algorithm cannot prevent loops. A routing loop in a DVR network causes the count to infinity problem. Routing loops usually occur when an interface goes down or when two routers send updates at the same time.

Counting to infinity problem:


 


A ---- B ---- C


So in this example, the Bellman-Ford algorithm will converge for each router; they will have entries for each other. B will know that it can get to C at a cost of 1, and A will know that it can get to C via B at a cost of 2.

A ---- B -x-- C

If the link between B and C is disconnected, then B will know that it can no longer get to C via that link and will remove it from its table. Before it can send any updates, it is possible that it will receive an update from A advertising that it can get to C at a cost of 2. B can get to A at a cost of 1, so it will update its table with a route to C via A at a cost of 3. A will then receive updates from B later and update its cost to 4. They will then go on feeding each other bad information toward infinity, which is called the count to infinity problem.


Link State Routing:

• It is a dynamic routing algorithm in which each router shares knowledge of its neighbors with every other router in the network.
• A router sends information about its neighbors to all the routers through flooding, so every router learns the complete topology and computes its routes independently. Because routes are computed from the full topology rather than from neighbors' (possibly stale) distance estimates, the count to infinity problem does not arise.
• Information sharing takes place only whenever there is a change.
• It makes use of Dijkstra's algorithm for making routing tables.
• Problems - heavy traffic due to flooding of packets. Flooding can result in infinite looping, which can be solved by using the Time to Live (TTL) field.


Q6. (a) Define and differentiate between flow control and congestion control mechanisms in terms of where they are applied in a packet switched network. What are the two categories of congestion control mechanisms, and what policies are adopted by each category to control congestion?


Ans.

Both flow control and congestion control are traffic controlling methods used in different situations.

The main difference between flow control and congestion control is that in flow control, the traffic flowing from a sender to a receiver is controlled, while in congestion control, the traffic entering the network is controlled.

Let's see the difference between flow control and congestion control:


 


1. Flow control: the traffic flowing from a sender to a receiver is controlled. Congestion control: the traffic entering the network is controlled.

2. Flow control: the data link layer and transport layer handle it. Congestion control: the network layer and transport layer handle it.

3. Flow control: the receiver's data is prevented from being overwhelmed. Congestion control: the network is prevented from congestion.

4. Flow control: only the sender is responsible for the traffic. Congestion control: the transport layer is responsible for the traffic.

5. Flow control: traffic is prevented by the sender sending slowly. Congestion control: traffic is prevented by the transport layer transmitting slowly.

6. Flow control: buffer overrun is restrained in the receiver. Congestion control: buffer overrun is restrained in the intermediate systems in the network.

Flow control and congestion control mechanisms applied in a packet switched network:

Congestion control is needed when buffers in packet switches overflow or congest. Flow control is needed when the buffers at the receiver are not depleted as fast as the data arrives. Flow control can be done on a link-by-link basis or an end-to-end basis. If there is a queue on the input side of a switch and link-by-link flow control is used, then the switch tells its immediate neighbor to slow down if the input queue fills up, as a "flow control" action. If viewed as a "congested buffer," then the switch tells the source of the data stream to slow down using congestion control notifications. When output buffers at a switch fill up and packets are dropped, this leads to congestion control actions.


The two categories of congestion control mechanisms: Congestion control refers to the techniques used to control or prevent congestion. Congestion control techniques can be broadly classified into two categories:

Congestion Control Techniques
    1. Open Loop Congestion Control
    2. Closed Loop Congestion Control

Open Loop Congestion Control:
Open loop congestion control policies are applied to prevent congestion before it happens. The congestion control is handled either by the source or by the destination.


 




Policies adopted by open loop congestion control:

1. Retransmission Policy:
This policy takes care of the retransmission of packets. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. This retransmission may increase the congestion in the network. To prevent congestion, retransmission timers must be designed to prevent congestion while still being able to optimize efficiency.

2. Window Policy:
The type of window at the sender side may also affect congestion. Several packets in the Go-Back-N window are resent, although some packets may have been received successfully at the receiver side. This duplication may increase the congestion in the network and make it worse. Therefore, the Selective Repeat window should be adopted, as it resends only the specific packet that may have been lost.

3. Discarding Policy:
A good discarding policy adopted by routers is one in which the routers prevent congestion by partially discarding corrupted or less sensitive packets, while still being able to maintain the quality of the message. In the case of audio file transmission, routers can discard less sensitive packets to prevent congestion and also maintain the quality of the audio file.

4. Acknowledgment Policy:
Since acknowledgments are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments: the receiver should send an acknowledgment for N packets rather than for a single packet, and the receiver should send an acknowledgment only if it has to send a packet or a timer expires.

5. Admission Policy:
In an admission policy, a mechanism should be used to prevent congestion. Switches in a flow should first check the resource requirement of a network flow before transmitting it further. If there is a chance of congestion, or if there is already congestion in the network, the router should deny establishing a virtual network connection to prevent further congestion.

All the above policies are adopted to prevent congestion before it happens in the network.


Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it happens. Several techniques are used by different protocols; some of them are:

1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and reject receiving data from the nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the opposite direction of the data flow. The backpressure technique can be applied only to virtual circuits, where each node has information about its upstream node.


 


Ignou Study Helper-Sunil Poonia Page 12

www.ignousite.com


(Diagram: backpressure signals travel hop by hop from the congested node back toward the source, opposite to the direction of the data flow, along the path Source → ... → Congestion → ... → Destination.)


In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may become congested
because its output flow slows down. Similarly, the 1st node may get congested and inform the source to slow down.
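The hop-by-hop propagation described above can be sketched in a few lines of Python (the path representation and function name are illustrative, not part of any protocol):

```python
def backpressure(path, congested):
    """Given a virtual-circuit path listed from source to destination,
    return the nodes that get throttled, in the order the backpressure
    signal reaches them (hop by hop, opposite to the data flow)."""
    i = path.index(congested)
    return path[i - 1::-1]  # every upstream node, back to the source

# the 3rd node is congested, so n2, then n1, then the source slow down
print(backpressure(["src", "n1", "n2", "n3", "dst"], "n3"))
```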


2. Choke Packet Technique :

Choke packet technique is applicable to both virtual circuits as well as datagram subnets. A choke packet is a
packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization
at each of its output lines. Whenever the resource utilization exceeds the threshold value set by the
administrator, the router directly sends a choke packet to the source, giving it feedback to reduce the traffic. The
intermediate nodes through which the packets have traveled are not warned about congestion.
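One common way a router tracks "utilization exceeding a threshold" is an exponentially weighted average of line utilization. The sketch below (the smoothing constant, threshold, and function names are assumptions for illustration, not from any standard) shows at which sample a choke packet would be fired:

```python
def update_utilization(u_prev, sample, a=0.9):
    # exponentially weighted moving average of recent line utilization
    return a * u_prev + (1 - a) * sample

def monitor_line(samples, threshold=0.5):
    """Return the sample index at which a choke packet would be sent,
    or None if the smoothed utilization never crosses the threshold."""
    u = 0.0
    for i, s in enumerate(samples):
        u = update_utilization(u, s)
        if u > threshold:
            return i  # send a choke packet to the source at this point
    return None

# a fully loaded line trips the threshold only after several samples,
# so short bursts do not trigger choke packets
print(monitor_line([1.0] * 10))
```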


(Diagram: the congested node sends a choke packet directly back to the source, while the data flow continues from Source toward Destination.)


3. Implicit Signaling :

In implicit signaling, there is no communication between the congested nodes and the source. The source guesses
that there is congestion in the network. For example, when a sender sends several packets and there is no
acknowledgment for a while, one assumption is that there is congestion.


4. Explicit Signaling :

In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to
inform it about the congestion. The difference from the choke packet technique is that here the signal is included in
the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either the forward or the backward direction.


* Forward Signaling : In forward signaling, the signal is sent in the direction of the congestion. The destination is warned
about the congestion, and the receiver in this case adopts policies to prevent further congestion.


 




* Backward Signaling : In backward signaling, the signal is sent in the opposite direction of the congestion. The source is
warned about the congestion and needs to slow down.


(b) How is congestion controlled in TCP using slow start algorithm? Clearly show the window adjustment.


Ans. TCP uses a congestion window and a congestion policy that avoid congestion. Previously, we assumed that only the receiver
can dictate the sender's window size. We ignored another entity here: the network. If the network cannot deliver the data
as fast as it is created by the sender, it must tell the sender to slow down. In other words, in addition to the receiver, the
network is a second entity that determines the size of the sender’s window.


Congestion policy in TCP:


1. Slow Start Phase: starts slowly; the increment is exponential up to a threshold.

2. Congestion Avoidance Phase: After reaching the threshold increment is by 1

3. Congestion Detection Phase: Sender goes back to Slow start phase or Congestion avoidance phase.


Slow Start Phase : exponential increment — In this phase after every RTT the congestion window size increments

exponentially.


Initially, cwnd = 1

After 1 RTT, cwnd = 2^1 = 2

After 2 RTT, cwnd = 2^2 = 4

After 3 RTT, cwnd = 2^3 = 8


Congestion Avoidance Phase : additive increment — This phase starts after the threshold value, also denoted as ssthresh, is
reached. The size of cwnd (congestion window) increases additively. After each RTT, cwnd = cwnd + 1.

Initially, cwnd = i

After 1 RTT, cwnd = i+1

After 2 RTT, cwnd = i+2

After 3 RTT, cwnd = i+3


Congestion Detection Phase : multiplicative decrement — If congestion occurs, the congestion window size is decreased. The

only way a sender can guess that congestion has occurred is the need to retransmit a segment. Retransmission is needed to

recover a missing packet which is assumed to have been dropped by a router due to congestion. Retransmission can occur in

one of two cases: when the RTO timer times out or when three duplicate ACKs are received.


* Case 1: Retransmission due to Timeout — In this case, the congestion possibility is high.
a. ssthresh is reduced to half of the current window size.
b. set cwnd = 1
c. start with the slow start phase again.

* Case 2: Retransmission due to 3 Duplicate Acknowledgements — In this case, the congestion possibility is less.
a. ssthresh is reduced to half of the current window size.
b. set cwnd = ssthresh
c. start with the congestion avoidance phase.


 




Show the window adjustment: Assume a TCP connection exhibiting slow start behavior. At the 5th transmission round,
with a threshold (ssthresh) value of 32, it goes into the congestion avoidance phase and continues till the 10th transmission
round. At the 10th transmission round, 3 duplicate ACKs are received by the sender and it enters additive increase mode. A
timeout occurs at the 16th transmission round. Plot the transmission round (time) vs. congestion window size of TCP segments.
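Under one common textbook model, the window sizes for this scenario can be tabulated with a short sketch (assumptions: cwnd is counted in segments, the reaction to an event takes effect in the following round, slow start is capped at ssthresh, and the exact round at which avoidance begins depends on these conventions):

```python
def tcp_cwnd_trace(num_rounds, ssthresh, events):
    """events maps a transmission round to 'dupack' or 'timeout'."""
    cwnd, trace = 1, []
    for rnd in range(1, num_rounds + 1):
        trace.append(cwnd)
        event = events.get(rnd)
        if event == "timeout":              # high congestion possibility
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1                        # restart slow start
        elif event == "dupack":             # 3 duplicate ACKs: milder
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh                 # continue with additive increase
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: exponential growth
        else:
            cwnd += 1                       # congestion avoidance: additive
    return trace

trace = tcp_cwnd_trace(17, 32, {10: "dupack", 16: "timeout"})
# rounds 1-6:   1, 2, 4, 8, 16, 32  (slow start up to ssthresh = 32)
# rounds 7-10:  33, 34, 35, 36      (additive increase)
# rounds 11-16: 18, 19, ..., 23     (after 3 dup ACKs: cwnd = ssthresh = 18)
# round 17:     1                   (after timeout: back to slow start)
print(trace)
```

Plotting this trace (round number on the x-axis, cwnd on the y-axis) gives the familiar sawtooth shape.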


  


 


 


Q7. What is the essential property of the Feistel Cipher network? Explain.


Ans. The Feistel Cipher model is a structure or design used to develop many block ciphers, such as DES. A Feistel cipher may have
invertible, non-invertible and self-invertible components in its design. The same algorithm is used for both encryption and
decryption. A separate key is used for each round; however, the same round keys are used for encryption as well as
decryption.


Encryption Process


The encryption process uses the Feistel structure, consisting of multiple rounds of processing of the plaintext, each round
consisting of a “substitution” step followed by a permutation step.


The Feistel structure is shown in the following illustration:


 


(Diagram: Feistel structure. The plaintext block is divided into two halves, L and R. In each round i, the round key K_i is fed to the round function F(R, K_i), whose output is XORed into L, and the halves are then swapped. After the final round the two halves are combined to form the ciphertext block.)

The input block to each round is divided into two halves that can be denoted as L and R for the left half and the right

half.


In each round, the right half of the block, R, goes through unchanged, but the left half, L, goes through an operation
that depends on R and the encryption key. First, we apply an encrypting function F that takes two inputs: the key K
and R. The function produces the output F(R,K). Then, we XOR the output of the function with L.


In real implementation of the Feistel Cipher, such as DES, instead of using the whole encryption key during each

round, a round-dependent key (a subkey) is derived from the encryption key. This means that each round uses a

different key, although all these subkeys are related to the original key.


The permutation step at the end of each round swaps the modified L and the unmodified R. Therefore, the L for the next
round is the R of the current round, and the R for the next round is the modified L of the current round.


The above substitution and permutation steps form a ‘round’. The number of rounds is specified by the algorithm
design.


 




* Once the last round is completed, the two sub-blocks, ‘R’ and ‘L’, are concatenated in this order to form the
ciphertext block.


The difficult part of designing a Feistel Cipher is the selection of the round function F. For the scheme to be unbreakable, this
function needs to have several important properties that are beyond the scope of our discussion.


Decryption Process


The process of decryption in Feistel cipher is almost similar. Instead of starting with a block of plaintext, the ciphertext block

is fed into the start of the Feistel structure and then the process thereafter is exactly the same as described in the given

illustration.


The process is said to be almost similar and not exactly the same. In the case of decryption, the only difference is that the
subkeys used in encryption are used in the reverse order.


The final swapping of ‘L’ and ‘R’ in the last step of the Feistel Cipher is essential. If these were not swapped, the resulting
ciphertext could not be decrypted using the same algorithm.
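A toy Feistel network in Python makes the essential property concrete: the round function F need not be invertible, and running the identical structure with the subkeys reversed decrypts. (F, the key values, and the 32-bit block size below are made up for illustration; this is not DES.)

```python
def F(right, key):
    # toy round function (illustrative only; real ciphers use S-boxes etc.)
    return ((right * 31) ^ key) & 0xFFFF

def feistel(block, subkeys):
    # split the 32-bit block into two 16-bit halves, L and R
    left, right = block >> 16, block & 0xFFFF
    for k in subkeys:
        # R passes through unchanged; L is XORed with F(R, K); then swap
        left, right = right, left ^ F(right, k)
    # final swap: concatenate R and L to form the output block
    return (right << 16) | left

def encrypt(block, subkeys):
    return feistel(block, subkeys)

def decrypt(block, subkeys):
    # identical algorithm; only the subkey order is reversed
    return feistel(block, list(reversed(subkeys)))

subkeys = [0x1A2B, 0x3C4D, 0x5E6F]
ciphertext = encrypt(0x12345678, subkeys)
print(hex(decrypt(ciphertext, subkeys)))  # recovers the plaintext
```

Because each round only XORs one half with a function of the other half, F itself never needs to be inverted; reversing the subkey order undoes the rounds, which is exactly why the same algorithm serves for encryption and decryption.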


Number of Rounds


The number of rounds used in a Feistel Cipher depends on the desired security of the system. More rounds provide a more
secure system, but at the same time they make encryption and decryption slower and less efficient. The number of rounds
thus depends on the efficiency-security tradeoff.


Q8. What is the utility of a digital certificate? Where is it used? How are these signatures created? What are its

components?

Ans. A digital certificate, also known as a public key certificate, is used to cryptographically link ownership of a public key

with the entity that owns it. Digital certificates are for sharing public keys to be used for encryption and authentication.

Digital certificates include the public key being certified, identifying information about the entity that owns the public key,

metadata relating to the digital certificate and a digital signature of the public key created by the issuer of the certificate.

The distribution, authentication and revocation of digital certificates are the primary purposes of the public key

infrastructure (PKI), the system by which public keys are distributed and authenticated. Public key cryptography depends on

key pairs: one a private key to be held by the owner and used for signing and decrypting, and one a public key that can be

used for encryption of data sent to the public key owner or authentication of the certificate holder's signed data. The digital

certificate enables entities to share their public key in a way that can be authenticated.

Where is it used: Digital certificates are used in public key cryptography functions; they are most commonly used for

initializing secure SSL connections between web browsers and web servers. Digital certificates are also used for sharing keys

to be used for public key encryption and authentication of digital signatures.

Digital certificates are used by all major web browsers and web servers to provide assurance that published content has not

been modified by any unauthorized actors, and to share keys for encrypting and decrypting web content. Digital certificates

are also used in other contexts, both online and offline, for providing cryptographic assurance and privacy of data.


How the signatures are created:


The steps followed in creating digital signature are :


 




1. A message digest is computed by applying a hash function to the message, and then the digest is encrypted using the
private key of the sender to form the digital signature. (digital signature = encryption(private key of sender, message
digest) and message digest = message digest algorithm(message)).

2. The digital signature is then transmitted with the message. (message + digital signature is transmitted)

3. The receiver decrypts the digital signature using the public key of the sender. (This assures authenticity, as only the
sender has his private key, so only the sender could have encrypted it, and it can thus be decrypted with the sender’s public key.)


4. The receiver now has the message digest.

5. The receiver can compute the message digest from the message (the actual message is sent with the digital signature).

6. The message digest computed by the receiver and the message digest obtained by decrypting the digital signature need to
be the same to ensure integrity.


The message digest is computed using a one-way hash function, i.e. a hash function for which computing the hash value of a
message is easy but computing the message from its hash value is very difficult.
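The signing and verification steps above can be sketched with Python's hashlib and a toy textbook-RSA key pair (the tiny primes are for illustration only; real certificates use 2048-bit or larger keys):

```python
import hashlib

# toy textbook-RSA key pair (illustrative only)
p, q = 61, 53
n = p * q          # modulus: 3233
e = 17             # public exponent
d = 2753           # private exponent: inverse of e mod (p-1)(q-1)

def digest(message):
    # one-way hash of the message, reduced mod n to fit the toy modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    # digital signature = digest encrypted with the sender's private key
    return pow(digest(message), d, n)

def verify(message, signature):
    # decrypt the signature with the public key, compare to a fresh digest
    return pow(signature, e, n) == digest(message)

sig = sign(b"transfer 100 to Bob")
print(verify(b"transfer 100 to Bob", sig))  # matching digests: integrity holds
```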


(Diagram: the sender hashes the original data with a one-way hash and encrypts the hash with the private key to form the digital signature; the receiver decrypts the signature with the public key, hashes the received data, and compares the two. Identical hashes validate data integrity.)

Its components: It has the following components


* Version: It is used to identify the version of X.509.


* Certificate serial number: it is a unique integer number that is generated by CA.


* Signature algorithm Identifier: it is used to identify the algorithm used by the CA at the time of signature.

* Issuer Name: it shows the name of the CA who issues a certificate


* Validity: It is used to show the validity of the certificate


* Subject Name: It shows the name of the user to whom the certificate belongs.


* Subject public key information: It contains the public key of the user and the algorithm used for the key.


Version 2 : It has two additional fields:

* Issuer unique identifier: It helps to find the CA uniquely if two or more CAs have used the same issuer name.
* Subject unique identifier: It helps to find the user uniquely if two or more users have used the same name.


Version 3 : Version 3 contains many extensions of digital certificates.


 



Q9. Discuss the features of IPSec.


Ans. IP security (IPSec) is an Internet Engineering Task Force (IETF) standard suite of protocols between two
communication points across an IP network that provides data authentication, integrity, and confidentiality. It also defines
how packets are encrypted, decrypted, and authenticated. The protocols needed for secure key exchange and key management
are defined in it.


Uses of IP Security


IPsec can be used to do the following things:


* To encrypt application layer data.


* To provide security for routers sending routing data across the public internet.


* To provide authentication without encryption, like to authenticate that the data originates from a known sender.


* To protect network data by setting up circuits using IPsec tunneling in which all data is being sent between the two

endpoints is encrypted, as with a Virtual Private Network(VPN) connection.


Components of IP Security

It has the following components:

1. Encapsulating Security Payload (ESP) —
It provides data integrity, encryption, authentication and anti-replay. It also provides authentication for the payload.
2. Authentication Header (AH) —
It also provides data integrity, authentication and anti-replay, but it does not provide encryption. The anti-replay
protection protects against unauthorized transmission of packets. It does not protect the data’s confidentiality.

3. Internet Key Exchange (IKE) —
It is a network security protocol designed to dynamically exchange encryption keys and establish a Security
Association (SA) between two devices. The Security Association (SA) establishes shared security attributes between two
network entities to support secure communication. The Internet Security Association and Key Management Protocol
(ISAKMP) provides a framework for authentication and key exchange. ISAKMP defines how Security Associations (SAs)
are set up and how direct connections between two hosts using IPsec are established.


Internet Key Exchange (IKE) provides message content protection and also an open framework for implementing standard
algorithms such as SHA and MD5. The algorithms IPsec uses produce a unique identifier for each packet. This identifier
then allows a device to determine whether a packet is correct or not. Packets which are not authorized are discarded
and not given to the receiver.


 


 


 


 


 


 


 


 


 


 


 


 


 


 


(Diagram: ESP encapsulation. Original packet: [IP HDR | TCP | DATA]. With ESP: [IP HDR | ESP HDR | TCP | Data | ESP Trailer | ESP Authentication]; encryption covers the TCP segment through the ESP trailer, while authentication covers the ESP header through the ESP trailer.)


 


Working of IP Security :


1. The host checks whether the packet should be transmitted using IPsec or not. Such packet traffic triggers the security
policy for itself. This is done when the system sending the packet applies the appropriate encryption. The host also checks
that incoming packets are encrypted properly.


2. Then IKE Phase 1 starts, in which the two hosts (using IPsec) authenticate themselves to each other to start a
secure channel. It has two modes: the Main mode, which provides greater security, and the Aggressive mode, which
enables the hosts to establish an IPsec circuit more quickly.


3. The channel created in the last step is then used to securely negotiate the way the IP circuit will encrypt data across
the IP circuit.


4. Now, IKE Phase 2 is conducted over the secure channel, in which the two hosts negotiate the type of
cryptographic algorithms to use on the session and agree on the secret keying material to be used with those
algorithms.


5. Then the data is exchanged across the newly created IPsec encrypted tunnel. These packets are encrypted and

decrypted by the hosts using IPsec SAs.


6. When the communication between the hosts is completed, or the session times out, the IPsec tunnel is
terminated by both hosts discarding the keys.


Q10. How is Silly Window Syndrome created by a receiver? What are the proposed solutions? Discuss.

Ans. Silly Window Syndrome is a problem that arises due to a poor implementation of TCP. It degrades TCP performance
and makes data transmission extremely inefficient. The problem is called so because:

1. It causes the sender window size to shrink to a silly value.
2. The window size shrinks to such an extent that the data being transmitted is smaller than the TCP header.


What are the causes?

The two major causes of this syndrome are as follows:


1. Sender window transmitting one byte of data repeatedly.

2. Receiver window accepting one byte of data repeatedly.


Cause-1: Sender window transmitting one byte of data repeatedly —


Suppose only one byte of data is generated by an application. A poor implementation of TCP transmits this small
segment of data: every time the application generates a byte of data, the window transmits it. This makes the transmission
process slow and inefficient. The problem is solved by Nagle’s algorithm.


Nagle’s algorithm suggests:


1. The sender should send only the first byte on receiving one byte of data from the application.
2. The sender should buffer all the remaining bytes until the outstanding byte gets acknowledged.
3. In other words, the sender should wait for 1 RTT (Round Trip Time).


After receiving the acknowledgement, sender should send the buffered data in one TCP segment. Then, sender should

buffer the data again until the previously sent data gets acknowledged.
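The buffering rule above can be sketched as a tiny sender model (the class and method names are illustrative; this is not a real TCP stack):

```python
class NagleSender:
    """Minimal sketch of Nagle's algorithm."""
    def __init__(self):
        self.buffer = bytearray()   # bytes waiting for the outstanding ACK
        self.unacked = False        # is a segment outstanding?
        self.sent_segments = []     # segments actually put on the wire

    def app_write(self, data):
        if not self.unacked:
            # nothing outstanding: send the first byte(s) immediately
            self.sent_segments.append(bytes(data))
            self.unacked = True
        else:
            # a segment is outstanding: buffer until it is acknowledged
            self.buffer.extend(data)

    def on_ack(self):
        self.unacked = False
        if self.buffer:
            # send everything accumulated during the RTT in one segment
            self.sent_segments.append(bytes(self.buffer))
            self.buffer.clear()
            self.unacked = True

s = NagleSender()
s.app_write(b"h")   # sent at once (nothing outstanding)
s.app_write(b"e")   # buffered
s.app_write(b"l")   # buffered
s.on_ack()          # "el" goes out as a single segment
print(s.sent_segments)
```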


Cause-2: Receiver window accepting one byte of data repeatedly —
Consider the case when the receiver is unable to process all the incoming data. In such a case, the receiver will

 




advertise a small window size. The process continues and the window size becomes smaller and smaller. A stage arrives when
it repeatedly advertises a window size of 1 byte. This makes the receiving process slow and inefficient. The solution to this
problem is Clark’s solution.


Clark's solution suggests:


1. The receiver should not send a window update for 1 byte.
2. The receiver should wait until it has a decent amount of space available.
3. The receiver should then advertise that window size to the sender.
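A sketch of this receiver-side rule (the "decent amount" threshold of min(MSS, half the buffer), and the default MSS and buffer sizes, are assumptions for illustration):

```python
def advertise_window(free_space, mss=536, buffer_size=4096):
    """Advertise a zero window until the free space reaches a decent
    amount, taken here as min(MSS, half the receive buffer)."""
    if free_space >= min(mss, buffer_size // 2):
        return free_space   # decent amount of space: advertise it
    return 0                # otherwise keep the window closed

print(advertise_window(1))      # 1 free byte: do not advertise
print(advertise_window(2048))   # plenty of space: advertise it all
```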


1. Nagle’s algorithm is turned off for those applications that require data to be sent immediately, since Nagle’s algorithm
can introduce delay as it sends only one data segment per round trip.
2. Both Nagle’s algorithm and Clark’s solution can work together; they are complementary.


Example:


A fast typist can do 100 words a minute, and each word has an average of 6 characters. Demonstrate Nagle’s algorithm by
showing the sequence of TCP segment exchanges between a client with input from our fast typist and a server. Indicate how
many characters are contained in each segment sent from the client. Assume that the client and server are in the same LAN
and the RTT is 20 ms.


Nagle’s algorithm suggests:


Sender should wait for 1 RTT before sending the data. The amount of data received from the application layer in 1 RTT

should be sent to the receiver.


Amount of data accumulated in 1 RTT

= (600 characters / 1 minute) × 20 ms

= (600 characters / 60 s) × 20 ms

= (10 characters / 10^3 ms) × 20 ms

= 0.2 characters


From here, we observe:


Even if the sender waits for 1 RTT, not even a single character is produced. So, the sender will have to wait until it receives at
least 1 character. Then, the sender sends it in one segment. Thus, one character will be sent per segment. Assuming 20-byte
TCP and 20-byte IP headers, 41 bytes will be sent on the wire for each segment.
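The arithmetic can be checked directly (the 20-byte IP header alongside the 20-byte TCP header is the usual assumption behind the 41-byte figure):

```python
words_per_min = 100
chars_per_word = 6
rtt_sec = 0.020                                       # 20 ms round-trip time

chars_per_sec = words_per_min * chars_per_word / 60   # 10 characters/second
chars_per_rtt = chars_per_sec * rtt_sec               # 0.2 characters per RTT

# fewer than one character arrives per RTT, so each segment carries 1 char;
# with a 20-byte TCP header and a 20-byte IP header, each segment is
# 41 bytes on the wire
bytes_per_segment = 20 + 20 + 1
print(chars_per_rtt, bytes_per_segment)
```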

