How many main functions can be used in a C language program?

Only one main function can be used in a C language program. We can use many functions in a C program, but there can be only one main function, which serves as the program's entry point.

When we divide a large program into several smaller parts using functions, the big program becomes easier to manage. Each sub-program can be maintained individually, which also helps us find and resolve errors easily. A sketch of this idea follows.
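For illustration, here is a minimal sketch of a program split across several functions, with a single main() as the entry point (the function names are only examples):

#include <stdio.h>

/* Helper functions: a program may define many of these... */
int add(int a, int b) { return a + b; }
int square(int n) { return n * n; }

/* ...but only one main(), the single entry point of a C program. */
int main(void)
{
    printf("add(2, 3) = %d\n", add(2, 3));
    printf("square(4) = %d\n", square(4));
    return 0;
}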






Data Communication and Computer Network (MCA(IV)/042/Assignment/2020-21)

Q1. (a) Define Modulation. What are its advantages?


Ans. Modulation is the process of converting data into radio waves by adding information to an electronic or optical carrier signal. A carrier signal is one with a steady waveform: constant height (amplitude) and frequency. Information can be added to the carrier by varying its amplitude, frequency, phase, polarization (for optical signals) and even quantum-level phenomena like spin.

Modulation is usually applied to electromagnetic signals: radio waves, lasers/optics and computer networks. Modulation can even be applied to a direct current, which can be treated as a degenerate carrier wave with a fixed amplitude and a frequency of 0 Hz, mainly by turning it on and off, as in Morse code telegraphy or a digital current loop interface. The special case of no carrier (a response message indicating an attached device is no longer connected to a remote system) is called baseband modulation.


Advantages of Modulation

If modulation were not used, the transmitting antenna would have to be very large, and the range of communication would be limited, because the wave cannot travel far without getting distorted.

Following are some of the advantages of implementing modulation in communication systems:

• Avoids mixing of signals - This is a practical consideration. Suppose you transmit the baseband signal as it is to a receiver, say your friend's phone. Just like you, there will be thousands of people in the city using their mobile phones. There is no way to tell such signals apart, and they will interfere with each other, leading to a lot of noise in the system and a very bad output. By using a carrier wave of high frequency and allotting a band of frequencies to each message, the signals do not mix and the received signals stay cleanly separated.

• Reduction in the height of the antenna - For the transmission of radio signals, the antenna height must be a multiple of λ/4, where λ is the wavelength. Modulation raises the signal frequency, which shortens λ and hence the antenna (see the worked example after this list).

• Increases the range of communication - By using modulation to transmit signals through space over long distances, we have removed the need for wires in communication systems. The technique of modulation helped humans become wireless.

• Multiplexing is possible - Multiplexing is a process in which two or more signals can be transmitted over the same communication channel simultaneously. This is possible only with modulation.

• Improves quality of reception - With frequency modulation (FM) and digital communication techniques like PCM, the effect of noise is reduced to a great extent. This improves the quality of reception.
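As a rough worked illustration of the antenna-height point (the numbers are assumed for demonstration, not taken from the assignment): a 10 kHz baseband signal has a wavelength of λ = c/f = (3 x 10^8 m/s) / (10 x 10^3 Hz) = 30 km, so a λ/4 antenna would need to be 7.5 km tall, which is impossible to build. Modulating the same message onto a 1 MHz carrier gives λ = 300 m and a λ/4 antenna of a practical 75 m.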


(b) What is meant by QAM? How is it different from PSK? Draw constellation diagrams of 8-PSK and 16-QAM.


 




Ans. Quadrature Amplitude Modulation (QAM) is a signal in which two carriers shifted in phase by 90 degrees (i.e. sine and cosine) are modulated and combined. As a result of their 90° phase difference they are in quadrature, and this gives rise to the name. Often one signal is called the in-phase or "I" signal, and the other is the quadrature or "Q" signal.

The resultant overall signal, consisting of the combination of both I and Q carriers, contains both amplitude and phase variations. In view of the fact that both amplitude and phase variations are present, it may also be considered as a mixture of amplitude and phase modulation.

A motivation for the use of quadrature amplitude modulation comes from the fact that a straight amplitude modulated signal, i.e. double sideband even with a suppressed carrier, occupies twice the bandwidth of the modulating signal. This is very wasteful of the available frequency spectrum. QAM restores the balance by placing two independent double sideband suppressed carrier signals in the same spectrum as one ordinary double sideband suppressed carrier signal.

Different from PSK: Phase modulation (analog PM) and phase-shift keying (digital PSK) can be regarded as special cases of QAM, where the amplitude of the transmitted signal is constant but its phase varies. This can also be extended to frequency modulation (FM) and frequency-shift keying (FSK), as these can be regarded as special cases of phase modulation.

QAM is used extensively as a modulation scheme for digital telecommunication systems, such as in 802.11 Wi-Fi standards. Arbitrarily high spectral efficiencies can be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the communications channel. QAM is being used in optical fiber systems as bit rates increase; QAM16 and QAM64 can be optically emulated with a 3-path interferometer.
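As an illustrative sketch (not from the source text), the bit-to-symbol mapping of a Gray-coded 16-QAM constellation can be written in a few lines of C; the particular level table below is one common choice, assumed here for demonstration:

#include <stdio.h>

/* Map 2 bits to one of 4 Gray-coded amplitude levels on an axis. */
static int gray_level(unsigned bits)
{
    static const int level[4] = { -3, -1, 3, 1 }; /* 00, 01, 10, 11 */
    return level[bits & 0x3];
}

int main(void)
{
    /* For each 4-bit symbol: high 2 bits select I, low 2 bits select Q. */
    for (unsigned sym = 0; sym < 16; sym++) {
        int i = gray_level(sym >> 2);
        int q = gray_level(sym & 0x3);
        printf("symbol %2u -> I=%+d, Q=%+d\n", sym, i, q);
    }
    return 0;
}

With this table, neighbouring constellation points differ in only one bit, which is the usual reason for choosing a Gray code.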


 


 


 


 


 


 


 


Draw constellation diagrams of 8-PSK and 16-QAM:

[Figure: constellation diagrams - 8-PSK with eight phase states (tribit values 000 to 111 at 45° intervals on a circle) and 16-QAM with a 4 x 4 grid of amplitude/phase points]


Q2 (a) How is Shannon's theorem different from Nyquist's theorem? What is the channel capacity for a bandwidth of 3 kHz and a signal to noise ratio of 30 dB? Show the calculation.

Ans. Shannon's sampling theorem states that a digital waveform must be updated at least twice as fast as the bandwidth of the signal to be accurately generated. The same image that was used for the Nyquist example can be used to demonstrate Shannon's sampling theorem. The following figure shows a desired 5 MHz sine wave generated by a 6 MS/s DAC. The dotted line represents the desired waveform, the arrows represent the digitized samples that are available to recreate the continuous-time 5 MHz sine wave, and the solid line indicates the signal that would be seen, for example, with an oscilloscope at the output of the DAC.


 




 


 


In this case, the high-frequency sine wave is the desired signal, but was severely undersampled by only being generated by a

6 MS/s DAC; the actual resulting waveform is a 1 MHz signal.


In systems where you want to generate accurate signals using sampled data, you must set the sampling rate high enough to

prevent aliasing.


The Nyquist theorem states that an analog signal must be sampled at least twice as fast as the bandwidth of the signal to

accurately reconstruct the waveform; otherwise, the high-frequency content creates an alias at a frequency inside the

spectrum of interest (passband). An alias is a false lower frequency component that appears in sampled data acquired at too

low a sampling rate. The following figure shows a 5 MHz sine wave digitized by a 6 MS/s analog-to-digital converter (ADC). In

this figure, the dotted line represents the sine wave being digitized, while the solid line represents the aliased signal

recorded by the ADC at that sample rate.


       

    


[Figure: a 5 MHz sine wave (dotted) digitized at 6 MS/s; the recorded samples reconstruct as a 1 MHz alias (solid)]


The 5 MHz frequency aliases back in the passband, falsely appearing as a 1 MHz sine wave.


The channel capacity for a bandwidth of 3 kHz and a signal to noise ratio of 30 dB: a ratio of 30 dB corresponds to S/N = 10^(30/10) = 1000, and Shannon's formula gives C = B * log2(1 + S/N), so

C = 3000 * log2(1001)

which is a little less than 30 kbps.


Satellite TV Channel


For a satellite TV channel with a signal-to-noise ratio of 20 dB (S/N = 100) and a video bandwidth of 10 MHz, we get a maximum data rate of:

C = 10000000 * log2(101)

which is about 66 Mbps.
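A minimal C sketch of these two calculations, assuming the standard Shannon formula C = B * log2(1 + S/N) with the signal-to-noise ratio converted from dB:

#include <math.h>
#include <stdio.h>

/* Shannon capacity: C = B * log2(1 + S/N), with S/N given in dB. */
static double capacity(double bandwidth_hz, double snr_db)
{
    double snr = pow(10.0, snr_db / 10.0); /* dB -> linear ratio */
    return bandwidth_hz * log2(1.0 + snr);
}

int main(void)
{
    printf("Telephone line: %.1f kbps\n", capacity(3000.0, 30.0) / 1e3);
    printf("Satellite TV:   %.1f Mbps\n", capacity(10e6, 20.0) / 1e6);
    return 0;
}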


(b) What is the need of bit stuffing? A bit string 0111111000011111110001110 needs to be transmitted at the data link
layer. What is the string actually transmitted after bit stuffing? How is bit stuffing implemented in HDLC?


Ans. Bit stuffing is the process of inserting non-information bits into data to break up bit patterns, in order to support the synchronous transmission of information. It is widely used in network and communication protocols, in which bit stuffing is a required


 




part of the transmission process. Bit stuffing is commonly used to bring bit streams up to a common transmission rate or to

fill frames. Bit stuffing is also used for run-length limited coding.


The bit stuffing concept goes like this:

• Each frame begins and ends with a special bit pattern called the flag byte, 01111110.
• Whenever the sender's data link layer encounters five consecutive 1s in the data stream, it automatically stuffs a 0 bit into the outgoing stream.
• Correspondingly, when the receiver sees five consecutive incoming 1s followed by a 0 bit, it automatically destuffs the 0 bit before sending the data to the network layer.

Now, moving towards the question, we have:

Data to be transmitted: 0111111000011111110001110

Scanning the string, a 0 is stuffed after each run of five consecutive 1s (there are two such runs):

Data after bit-stuffing: 011111010000111110110001110

So 011111010000111110110001110 is the string after bit-stuffing, but the frame to be transmitted carries the flag pattern at each end as well. Therefore the frame is:

01111110 011111010000111110110001110 01111110

Bit stuffing implemented in HDLC: Consider the data field X of an HDLC frame of arbitrary length N, where the bit sequence X = {x1, x2, x3, ..., xN} is uniformly random and where a 0-bit is inserted after n consecutive 1-bits. If we segment the sequence X into subsequences of n consecutive bits, there will be N - n + 1 such subsequences. The probability that any such subsequence will be all 1-bits is 2^-n. However, the act of bit stuffing affects the number of stuffable subsequences, because stuffable subsequences are highly correlated. If the subsequence xk, xk+1, xk+2, ..., xk+n-1 is all 1-bits, the probability that the subsequence xk+1, xk+2, ..., xk+n-1, xk+n is also all 1-bits is considerably greater than 2^-n. In fact, since the sequence is uniformly random, xk+n will be a 1-bit with probability 0.5. The probability that the next subsequence is also all 1-bits is 0.25, and so on.

Consequently, whenever bit stuffing occurs, an additional number of subsequences of n 1-bits may be lost. The expected number of subsequences lost will be:

L = 2^-1 + 2^-2 + ... + 2^-(n-1) = 1 - 2^-(n-1)

So bit stuffing reduces the number of stuffable subsequences by a factor of 1 + L. That is, the rate at which bits will be stuffed is:

2^-n / (1 + L)

Note that this is independent of the length of the sequence N.

We can make the useful observation that L has an upper bound of 1, approached as n becomes large. That is, if we denote the bit stuffing rate for large n by R∞, then


 


R∞ ≈ 2^-n / (1 + 1) = 2^-(n+1)


We now illustrate this with an example. In HDLC, n is 5. If the subsequence xk+1, xk+2, xk+3, xk+4, xk+5 is all 1-bits, the probability that the subsequence xk+2, xk+3, xk+4, xk+5, xk+6 is also all 1-bits is 0.5. However, after bit stuffing, this subsequence will no longer be all 1-bits. By calculating the expectation of the number of all-1-bit subsequences lost (L) we see that:

L = 0.5 + 0.25 + 0.125 + 0.0625 = 0.9375 ≈ 0.937


Without stuffing we would expect subsequences consisting of all 1-bits to occur at the rate of 2^-5 = 0.0313. However, the action of bit stuffing reduces the number of stuffable subsequences: every time we stuff a subsequence we lose (on average) 0.9375 additional stuffable subsequences. Consequently the rate at which we bit stuff a random subsequence will be:

2^-5 / 1.9375 = 0.0161


We now test this and other cases with simulation.


Q3. Define checksum and write the algorithm for computing the checksum. Given a sequence frame of 10 bits, 1001100101, and a divisor (polynomial) of 1011, find the CRC.

Ans. A checksum is a computed value which allows you to check the validity of something. Typically, checksums are used in data transmission contexts to detect whether the data has been transmitted successfully.

Checksums take on various forms, depending upon the nature of the transmission and the needed reliability. For example, the simplest checksum is to sum up all the bytes of a transmission, computing the sum in an 8-bit counter. This value is appended as the last byte of the transmission. The idea is that upon receipt of n bytes, you sum up the first n - 1 bytes and see if the answer is the same as the last byte. Since this is a bit awkward, a variant on this theme is to, on transmission, sum up all the bytes and then (treating the byte as a signed, 8-bit value) negate the checksum byte before transmitting it. This means that the sum of all n bytes should be 0. These techniques are not terribly reliable; for example, if the packet is known to be 64 bits in length and you receive 64 '\0' bytes, the sum is 0, so the result appears to be correct. Of course, if there is a hardware failure that simply fails to transmit the data bytes (particularly easy on synchronous transmission, where no "start bit" is involved), then the fact that you receive a packet of 64 0 bytes with a checksum result of 0 is misleading; you think you've received a valid packet and you've received nothing at all. A solution to this is to negate the checksum value computed, subtract 1 from it, and expect that the result of the receiver's checksum of the n bytes is 0xFF (-1, as a signed 8-bit value). This means that the 0-lossage problem goes away.
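A sketch of the simple additive scheme just described, assuming 8-bit wrap-around arithmetic; the sender appends the negated sum so that the receiver's sum over all n bytes comes out to 0:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Sum the bytes into an 8-bit counter (overflow wraps, as described). */
static uint8_t sum8(const uint8_t *data, size_t len)
{
    uint8_t s = 0;
    for (size_t i = 0; i < len; i++)
        s += data[i];
    return s;
}

int main(void)
{
    uint8_t packet[5] = { 0x12, 0x34, 0x56, 0x78, 0 };

    /* Sender: append the negated sum as the last byte. */
    packet[4] = (uint8_t)(-sum8(packet, 4));

    /* Receiver: the sum over all n bytes should now be 0. */
    printf("receiver sum = %u\n", sum8(packet, 5)); /* prints 0 */
    return 0;
}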


Algorithm for computing the checksum:


 




class checksum {
public:
    checksum() { clear(); }
    // Reset the running sum and the mixing constants.
    void clear() { sum = 0; r = 55665; c1 = 52845; c2 = 22719; }
    void add(DWORD w);
    void add(BOOL w) { add((DWORD)w); }
    void add(UINT w) { add((DWORD)w); }
    void add(WORD w);
    void add(const CString & s);
    void add(LPBYTE b, UINT length);
    void add(BYTE b);
    DWORD get() { return sum; }
protected:
    WORD r;
    WORD c1;
    WORD c2;
    DWORD sum;
};


The CRC, or cyclic redundancy check, is a checksum algorithm that detects inconsistency of data, that is, bit errors during data transmission.

As per the question -


The sequence after appending n = 3 extra zeros: 1001100101000
Divisor (of length n + 1): 1011

The dividend as a polynomial is x^12 + x^9 + x^8 + x^5 + x^3, and the divisor is x^3 + x + 1. Performing modulo-2 (XOR) long division of 1001100101000 by 1011 leaves the 3-bit remainder 101, i.e. the remainder polynomial x^2 + 1.

Hence, the CRC will be 101, and the transmitted frame is 1001100101 101.
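A bitwise C sketch of the modulo-2 division above; '0'/'1' strings are used for clarity, while a real implementation would operate on packed words:

#include <stdio.h>
#include <string.h>

/* Remainder of modulo-2 (XOR) division; data and divisor are '0'/'1' strings. */
static void crc_remainder(const char *data, const char *divisor, char *crc)
{
    size_t n = strlen(divisor) - 1;   /* number of CRC bits */
    size_t len = strlen(data);
    char buf[64];

    memcpy(buf, data, len);           /* message ... */
    memset(buf + len, '0', n);        /* ... followed by n zero bits */
    buf[len + n] = '\0';

    for (size_t i = 0; i < len; i++)  /* XOR the divisor in at each leading 1 */
        if (buf[i] == '1')
            for (size_t j = 0; j <= n; j++)
                buf[i + j] = (buf[i + j] == divisor[j]) ? '0' : '1';

    memcpy(crc, buf + len, n);        /* the final n bits are the remainder */
    crc[n] = '\0';
}

int main(void)
{
    char crc[8];
    crc_remainder("1001100101", "1011", crc);
    printf("CRC = %s\n", crc);        /* prints 101 */
    return 0;
}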


Q4. (a) What are the limitations of MACA? How are these limitations overcome in MACAW? Explain.

Ans. Limitations of MACA:


• Backoff algorithm: MACAW replaces BEB with MILD (multiplicative increase and linear decrease) to ensure that the backoff interval grows more slowly (1.5x instead of 2x) and shrinks very slowly (linearly to the minimum value). To enable better congestion detection, MACAW shares backoff timers among stations by putting this information in headers.

• Multiple stream model: MACAW uses separate queues for each stream in each node for increased fairness. In addition, each queue runs an independent backoff algorithm. However, all stations attempting to communicate with the same receiver should use the same backoff value.


• Basic exchange: MACAW replaces RTS-CTS-DATA with RTS-CTS-DS-DATA-ACK, with the following extensions:


 




1. ACK: An extra ACK at the end ensures that errors can be recovered at the link layer, which is much faster than transport layer recovery. If an ACK is lost, the next RTS can generate another ACK for the previous transmission.


2. DS: This signal ensures a 3-way handshake between sender and receiver (similar to TCP) so that everyone within hearing distance of the two stations knows that a data transmission is about to happen. Without the DS packet, stations vying for the shared medium cannot compete properly, and one is always starved because it lacks knowledge of the contention period. In short, DS enables synchronization.


3. RRTS: RRTS is basically a proxy RTS, used when the actual RTS sender is too far away to fight for the contention slot. However, there is one scenario where even RRTS cannot guarantee fair contention.


4. Multicast: Multicast is handled by sending data right away after the RTS packet, without waiting for a CTS. It suffers from the same problems as in CSMA, but the authors leave it as an open challenge.


• Evaluation: The MACAW paper presents a simulation-based evaluation, which shows that MACAW is fairer and gives higher throughput than MACA.


Limitations overcome in MACAW: To see how, first recall the basic operation of MACA. Multiple Access with Collision Avoidance (MACA) is a slotted media access control protocol used in wireless LAN data transmission that avoids collisions caused by the hidden station problem and mitigates the exposed station problem.

The basic idea of MACA is that a wireless network node makes an announcement before it sends a data frame, to inform other nodes to keep silent. When a node wants to transmit, it sends a signal called Request-To-Send (RTS) carrying the length of the data frame to send. If the receiver allows the transmission, it replies to the sender with a signal called Clear-To-Send (CTS), also carrying the length of the frame that is about to be received.

Let us consider that a transmitting station A has a data frame to send to a receiving station B. The operation works as follows:

• Station A sends an RTS frame to the receiving station.
• On receiving the RTS, station B replies by sending a CTS frame.
• On receipt of the CTS frame, station A begins transmitting its data frame.


[Figure: MACA RTS/CTS exchange - a neighboring station within range of A, transmitting station A, receiving station B, and a neighboring station within range of B]


Any node that receives the CTS frame knows that it is close to the receiver and therefore cannot transmit a frame.

Any node that receives the RTS frame but not the CTS frame knows that it is not close enough to the receiver to interfere with it, so it is free to transmit data.


WLAN data transmission collisions may still occur, and MACA for Wireless (MACAW) was introduced to extend the function of MACA. It requires nodes to send acknowledgements after each successful frame transmission, and adds the function of carrier sense.


 



[Figure: MACAW exchange - RTS from transmitting station A, CTS from receiving station B, then DS, DATA and ACK, observed by the neighboring stations within range of A and B]

RTS - Request-To-Send frame
CTS - Clear-To-Send frame
DS - Data Sending frame
ACK - Acknowledgment frame


(b) How does CSMA/CD work? Write all the steps. Explain the binary exponential backoff algorithm in case of collision. How is backoff time calculated?

Ans. CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is a media access control method that was widely used in early Ethernet LANs, when the topology was a shared bus and the nodes (computers) were connected by coaxial cables. Nowadays Ethernet is full duplex and CSMA/CD is no longer used, as the topology is either star (connected via a switch or router) or point-to-point (direct connection), but it is still supported.

Consider a scenario where there are n stations on a link and all are waiting to transfer data through that channel. In this case all n stations would want to access the link/channel to transfer their own data. A problem arises when more than one station transmits at the same moment: the data from the different stations collide.

CSMA/CD is one such technique where the stations that follow the protocol agree on terms and collision detection measures for effective transmission. The protocol decides which station will transmit when, so that data reaches the destination without corruption.

How CSMA/CD works:


• Step 1: Check if the sender is ready to transmit data packets.

• Step 2: Check if the transmission link is idle.
The sender has to keep checking whether the transmission link/medium is idle. For this it continuously senses transmissions from other nodes. The sender sends dummy data on the link; if it does not receive any collision signal, the link is idle at the moment. If it senses that the carrier is free and there are no collisions, it sends the data. Otherwise it refrains from sending data.

• Step 3: Transmit the data and check for collisions.
The sender transmits its data on the link. CSMA/CD does not use an acknowledgement system; it checks for successful and unsuccessful transmissions through collision signals. During transmission, if a collision signal is received by the node, transmission is stopped. The station then transmits a jam signal onto the link, waits for a random time interval, and then attempts to transfer the data again, repeating the above process.

• Step 4: If no collision was detected in propagation, the sender completes its frame transmission and resets the counters.


 




Binary Exponential Backoff Algorithm in case of Collision

Step 1: The station continues transmission of the current frame for a specified time along with a jam signal, to ensure that all the other stations detect the collision.

Step 2: The station increments the retransmission counter c, which denotes the number of collisions.

Step 3: The station selects a random number of slot times in the range 0 to 2^c - 1. For example, after the first collision (i.e. c = 1), the station will wait for either 0 or 1 slot times. After the second collision (i.e. c = 2), the station will wait anything between 0 and 3 slot times. After the third collision (i.e. c = 3), the station will wait anything between 0 and 7 slot times, and so forth.

Step 4: If the station selects a number k in the range 0 to 2^c - 1, then

Back_off_time = k x Time slot,

where a time slot is equal to the round trip time (RTT).

Step 5: At the end of the backoff time, the station attempts retransmission by continuing with the CSMA/CD algorithm.

Step 6: If the maximum number of retransmission attempts is reached, then the station aborts transmission.


Back Off Time

In the CSMA/CD protocol:

• After the occurrence of a collision, the station waits for some random back off time and then retransmits.
• This waiting time, for which the station waits before retransmitting the data, is called the back off time.
• The back off algorithm is used for calculating the back off time, as sketched below.

Back off time = k x Time slot
where the value of one time slot = 1 RTT
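A small C sketch of the back-off computation; the slot time value and the cap of c at 10 are assumptions borrowed from classic 10 Mbps Ethernet, not stated in the text above:

#include <stdio.h>
#include <stdlib.h>

/* Pick k uniformly in [0, 2^c - 1] and scale by the slot time (1 RTT). */
static double backoff_time(int collisions, double slot_time)
{
    int c = collisions > 10 ? 10 : collisions; /* classic Ethernet caps c at 10 */
    int k = rand() % (1 << c);                 /* random k in [0, 2^c - 1] */
    return k * slot_time;
}

int main(void)
{
    const double slot = 51.2e-6; /* example slot time: 51.2 us for 10 Mbps Ethernet */
    for (int c = 1; c <= 3; c++)
        printf("after collision %d: backoff = %.1f us\n",
               c, backoff_time(c, slot) * 1e6);
    return 0;
}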


Q5. Explain the operation of the distance vector routing protocol. What is the reason for the count to infinity problem in the distance vector routing protocol? How is the above problem overcome in the link state routing algorithm?


Ans. A distance-vector routing (DVR) protocol requires that a router inform its neighbors of topology changes periodically.


Historically known as the old ARPANET routing algorithm (or known as Bellman-Ford algorithm).


Bellman-Ford basics - Each router maintains a distance vector table containing the distance between itself and ALL possible destination nodes. Distances, based on a chosen metric, are computed using information from the neighbors' distance vectors.


Count to infinity problem in the distance vector routing protocol: The main issue with Distance Vector Routing (DVR) protocols is routing loops, since the Bellman-Ford algorithm cannot prevent loops. A routing loop in a DVR network causes the count to infinity problem. Routing loops usually occur when an interface goes down or when two routers send updates at the same time.

Counting to infinity problem:


 




[Figure: three routers in a line, A - B - C, with a link cost of 1 on each hop]


So in this example, the Bellman-Ford algorithm will converge for each router; they will have entries for each other. B will know that it can get to C at a cost of 1, and A will know that it can get to C via B at a cost of 2.


[Figure: the link between B and C goes down]


If the link between B and C is disconnected, then B will know that it can no longer get to C via that link and will remove it from its table. Before it can send any updates, it is possible that it will receive an update from A, which will be advertising that it can get to C at a cost of 2. B can get to A at a cost of 1, so it will update a route to C via A at a cost of 3. A will then receive updates from B later and update its cost to 4. They will then go on feeding each other bad information toward infinity, which is called the count to infinity problem. A sketch of the distance vector computation appears below.
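A sketch of the distance vector (Bellman-Ford) computation for the three-router line topology above; the names and costs are illustrative:

#include <stdio.h>

#define N 3          /* routers: 0 = A, 1 = B, 2 = C */
#define INF 9999

int main(void)
{
    /* Direct link costs of the line topology A - B - C. */
    int cost[N][N] = { { 0, 1, INF }, { 1, 0, 1 }, { INF, 1, 0 } };
    int dist[N][N]; /* dist[i][j] = router i's current distance to j */

    /* Initialise each vector with the direct costs. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            dist[i][j] = cost[i][j];

    /* Each exchange round applies D_i(j) = min over neighbours k of c(i,k) + D_k(j). */
    for (int round = 0; round < N - 1; round++)
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++)
                if (cost[i][k] < INF)
                    for (int j = 0; j < N; j++)
                        if (cost[i][k] + dist[k][j] < dist[i][j])
                            dist[i][j] = cost[i][k] + dist[k][j];

    printf("A's distance to C = %d (via B)\n", dist[0][2]); /* prints 2 */
    return 0;
}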


Link State Routing:

• It is a dynamic routing algorithm in which each router shares knowledge of its neighbors with every other router in the network.
• A router sends information about its neighbors to all the routers through flooding.
• Information sharing takes place only whenever there is a change.
• It makes use of Dijkstra's algorithm for making routing tables.
• Problems - heavy traffic due to flooding of packets; flooding can result in infinite looping, which can be solved by using the Time to Live (TTL) field.


Q6. (a) Define and differentiate between flow control and congestion control mechanisms in terms of where they are applied in a packet switched network. What are the two categories of congestion control mechanisms, and what policies are adopted by each category to control congestion?


Ans.


Both flow control and congestion control are traffic controlling methods, used in different situations.

The main difference between them is that in flow control, the traffic flowing from a sender to a receiver is controlled, while in congestion control, the traffic entering the network is controlled.

Let's see the difference between flow control and congestion control:


 




 


 


 


 


 


 


1. In flow control, the traffic flowing from a sender to a receiver is controlled. In congestion control, the traffic entering the network is controlled.

2. Flow control is handled by the data link layer and the transport layer. Congestion control is handled by the network layer and the transport layer.

3. In flow control, the receiver's data is prevented from being overwhelmed. In congestion control, the network is prevented from congestion.

4. In flow control, only the sender is responsible for the traffic. In congestion control, the transport layer is responsible for the traffic.

5. In flow control, trouble is prevented by the sender sending slowly. In congestion control, trouble is prevented by the transport layer transmitting slowly.

6. In flow control, buffer overrun is restrained in the receiver. In congestion control, buffer overrun is restrained in the intermediate systems in the network.

Flow control and congestion control mechanisms applied in a packet switched network:

Congestion control is needed when buffers in packet switches overflow or congest. Flow control is needed when the buffers at the receiver are not depleted as fast as the data arrives. Flow control can be done on a link-by-link basis or an end-to-end basis. If there is a queue on the input side of a switch, and link-by-link flow control is used, then the switch tells its immediate neighbor to slow down if the input queue fills up, as a "flow control" action. If viewed as a "congested buffer," then the switch tells the source of the data stream to slow down using congestion control notifications. When output buffers at a switch fill up and packets are dropped, this leads to congestion control actions.


The two categories of congestion control mechanisms: Congestion control refers to the techniques used to control or prevent congestion. Congestion control techniques can be broadly classified into two categories:


 


[Figure: congestion control techniques branch into open loop congestion control and closed loop congestion control]

Open Loop Congestion Control:

Open loop congestion control policies are applied to prevent congestion before it happens. The congestion control is

handled either by the source or the destination.


 




Policies adopted by open loop congestion control:

1. Retransmission Policy:
This policy takes care of the retransmission of packets. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. This retransmission may increase the congestion in the network. To prevent congestion, retransmission timers must be designed to prevent congestion while still optimizing efficiency.

2. Window Policy:
The type of window at the sender side may also affect congestion. Several packets in a Go-Back-N window are resent, although some packets may have been received successfully at the receiver side. This duplication may increase the congestion in the network and make it worse. Therefore, a Selective Repeat window should be adopted, as it resends only the specific packet that may have been lost.

3. Discarding Policy:
A good discarding policy adopted by the routers is one by which the routers may prevent congestion and, at the same time, partially discard corrupted or less sensitive packets while maintaining the quality of the message. In the case of audio file transmission, routers can discard less sensitive packets to prevent congestion while maintaining the quality of the audio file.

4. Acknowledgment Policy:
Since acknowledgements are also part of the load in the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgment: the receiver should send an acknowledgement for N packets rather than for a single packet, and the receiver should send an acknowledgment only if it has to send a packet or a timer expires.

5. Admission Policy:
In an admission policy, a mechanism should be used to prevent congestion. Switches in a flow should first check the resource requirement of a network flow before transmitting it further. If there is a chance of congestion, or there already is congestion in the network, the router should deny establishing a virtual network connection to prevent further congestion.

All the above policies are adopted to prevent congestion before it happens in the network.


Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it happens. Several techniques are used by different protocols; some of them are:

1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and to reject data from the nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the opposite direction of the data flow. The backpressure technique can be applied only to virtual circuits, where each node has information about its upstream node.


 




[Figure: backpressure signals propagating from the congested node back toward the source, in the direction opposite to the data flow from source to destination]

In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may get congested due to the slowing down of the output data flow. Similarly, the 1st node may get congested and inform the source to slow down.


2. Choke Packet Technique:
The choke packet technique is applicable to both virtual circuit networks and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the resource utilization exceeds the threshold value set by the administrator, the router directly sends a choke packet to the source, giving it feedback to reduce the traffic. The intermediate nodes through which the packets have traveled are not warned about the congestion.


[Figure: a choke packet travelling from the congested node directly back to the source, while data flows from source to destination]


3. Implicit Signaling:
In implicit signaling, there is no communication between the congested node or nodes and the source. The source guesses that there is congestion in the network. For example, when a sender sends several packets and there is no acknowledgment for a while, one assumption is that there is congestion.

4. Explicit Signaling:
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it about the congestion. The difference between the choke packet technique and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique. Explicit signaling can occur in either the forward or the backward direction.

• Forward Signaling: In forward signaling, a signal is sent in the direction of the congestion. The destination is warned about the congestion, and the receiver in this case adopts policies to prevent further congestion.


 




• Backward Signaling: In backward signaling, a signal is sent in the direction opposite to the congestion. The source is warned about the congestion and needs to slow down.


(b) How is congestion controlled in TCP using the slow start algorithm? Clearly show the window adjustment.

Ans. TCP uses a congestion window and a congestion policy to avoid congestion. Previously, we assumed that only the receiver can dictate the sender's window size. We ignored another entity here: the network. If the network cannot deliver the data as fast as it is created by the sender, it must tell the sender to slow down. In other words, in addition to the receiver, the network is a second entity that determines the size of the sender's window.


Congestion policy in TCP:

1. Slow Start Phase: starts slowly; the increment is exponential up to a threshold.
2. Congestion Avoidance Phase: after reaching the threshold, the increment is by 1.
3. Congestion Detection Phase: the sender goes back to the slow start phase or the congestion avoidance phase.


Slow Start Phase: exponential increment - In this phase, after every RTT the congestion window size increases exponentially.

Initially cwnd = 1
After 1 RTT, cwnd = 2^1 = 2
After 2 RTT, cwnd = 2^2 = 4
After 3 RTT, cwnd = 2^3 = 8

Congestion Avoidance Phase: additive increment - This phase starts after the threshold value, also denoted ssthresh, is reached. The size of cwnd (the congestion window) increases additively: after each RTT, cwnd = cwnd + 1.

Initially cwnd = i
After 1 RTT, cwnd = i + 1
After 2 RTT, cwnd = i + 2
After 3 RTT, cwnd = i + 3


Congestion Detection Phase: multiplicative decrement - If congestion occurs, the congestion window size is decreased. The only way a sender can guess that congestion has occurred is the need to retransmit a segment. Retransmission is needed to recover a missing packet which is assumed to have been dropped by a router due to congestion. Retransmission can occur in one of two cases: when the RTO timer times out or when three duplicate ACKs are received.

• Case 1: Retransmission due to Timeout - In this case the congestion possibility is high.
a. ssthresh is reduced to half of the current window size.
b. Set cwnd = 1.
c. Start with the slow start phase again.

• Case 2: Retransmission due to 3 Duplicate Acknowledgements - In this case the congestion possibility is lower.
a. ssthresh is reduced to half of the current window size.
b. Set cwnd = ssthresh.
c. Start with the congestion avoidance phase.


 




Show the window adjustment: Assume a TCP connection exhibiting slow start behavior. At the 5th transmission round, with a threshold (ssthresh) value of 32, it goes into the congestion avoidance phase and continues until the 10th transmission round. At the 10th transmission round, 3 duplicate ACKs are received and the sender enters additive increase mode. A timeout occurs at the 16th transmission round. Plot transmission round (time) vs congestion window size of the TCP segments.

[Figure: plot of congestion window size vs transmission round - exponential growth up to the threshold, additive increase until round 10, multiplicative decrease on the 3 duplicate ACKs, additive increase again, then a reset to slow start after the timeout at round 16]
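A small C sketch reproducing this window adjustment (the events at rounds 10 and 16 follow the question; exact values depend on when the threshold is considered reached):

#include <stdio.h>

int main(void)
{
    int cwnd = 1, ssthresh = 32;

    for (int round = 1; round <= 20; round++) {
        printf("round %2d: cwnd = %2d\n", round, cwnd);

        if (round == 10) {            /* 3 duplicate ACKs: multiplicative decrease */
            ssthresh = cwnd / 2;
            cwnd = ssthresh;          /* then continue with additive increase */
        } else if (round == 16) {     /* timeout: back to slow start */
            ssthresh = cwnd / 2;
            cwnd = 1;
        } else if (cwnd < ssthresh) {
            cwnd *= 2;                /* slow start: exponential growth */
        } else {
            cwnd += 1;                /* congestion avoidance: additive growth */
        }
    }
    return 0;
}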


  


 


 


Q7. What is the essential property of the Feistel cipher network? Explain.


Ans. The Feistel cipher model is a structure or design used to develop many block ciphers, such as DES. A Feistel cipher may have invertible, non-invertible and self-invertible components in its design. The same algorithm is used for encryption as well as decryption. A separate key is used for each round; however, the same round keys are used for encryption as well as decryption.


Encryption Process

The encryption process uses the Feistel structure, consisting of multiple rounds of processing of the plaintext, each round consisting of a "substitution" step followed by a permutation step.

The Feistel structure is shown in the following illustration -


 


[Figure: Feistel structure - the plaintext block is divided into two halves L and R; in each round i, the function F(R, Ki) is XORed with L and the halves are swapped; after the final round the halves are concatenated to form the ciphertext block]

• The input block to each round is divided into two halves, denoted L and R for the left half and the right half.

• In each round, the right half of the block, R, goes through unchanged, but the left half, L, goes through an operation that depends on R and the encryption key. First, we apply an encrypting function f that takes two inputs: the key K and R. The function produces the output f(R, K). Then, we XOR the output of this function with L.

• In a real implementation of the Feistel cipher, such as DES, instead of using the whole encryption key during each round, a round-dependent key (a subkey) is derived from the encryption key. This means that each round uses a different key, although all these subkeys are related to the original key.

• The permutation step at the end of each round swaps the modified L and the unmodified R. Therefore, the L for the next round is the R of the current round, and the R for the next round is the output L of the current round.

• The above substitution and permutation steps form a 'round'. The number of rounds is specified by the algorithm design.


 




• Once the last round is completed, the two sub-blocks, R and L, are concatenated in this order to form the ciphertext block.


The difficult part of designing a Feistel cipher is the selection of the round function f. In order to be an unbreakable scheme, this function needs to have several important properties that are beyond the scope of our discussion. A toy sketch of the structure follows.
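A toy C sketch of the Feistel structure with 16-bit halves, a deliberately weak round function and fixed example subkeys; it only demonstrates that decryption is the same network run with the subkeys in reverse order:

#include <stdint.h>
#include <stdio.h>

#define ROUNDS 4

/* A deliberately simple round function f(R, K); real ciphers use far stronger ones. */
static uint16_t f(uint16_t r, uint16_t k)
{
    return (uint16_t)((r * 31 + k) ^ (r >> 3));
}

/* One pass through the Feistel network; `key` holds the round subkeys. */
static uint32_t feistel(uint32_t block, const uint16_t *key)
{
    uint16_t l = (uint16_t)(block >> 16), r = (uint16_t)(block & 0xFFFF);
    for (int i = 0; i < ROUNDS; i++) {
        uint16_t tmp = l ^ f(r, key[i]); /* L' = L XOR f(R, Ki) */
        l = r;                           /* swap the halves */
        r = tmp;
    }
    /* Final swap: concatenate (R, L) to form the output block. */
    return ((uint32_t)r << 16) | l;
}

int main(void)
{
    uint16_t k[ROUNDS]  = { 0x1234, 0xABCD, 0x0F0F, 0x5555 };
    uint16_t rk[ROUNDS] = { 0x5555, 0x0F0F, 0xABCD, 0x1234 }; /* reversed */

    uint32_t ct = feistel(0xDEADBEEF, k);
    uint32_t pt = feistel(ct, rk);      /* same network, reversed subkeys */
    printf("ciphertext = %08X, decrypted = %08X\n", ct, pt);
    return 0;
}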


Decryption Process

The process of decryption in a Feistel cipher is almost identical. Instead of starting with a block of plaintext, the ciphertext block is fed into the start of the Feistel structure, and the process thereafter is exactly the same as described in the given illustration.

The process is almost the same, and not exactly the same, because in the case of decryption the only difference is that the subkeys used in encryption are used in the reverse order.

The final swapping of L and R in the last step of the Feistel cipher is essential. If these are not swapped, then the resulting ciphertext cannot be decrypted using the same algorithm.

Number of Rounds

The number of rounds used in a Feistel cipher depends on the desired security of the system. More rounds provide a more secure system, but at the same time, more rounds make encryption and decryption slower and less efficient. The number of rounds thus depends upon the efficiency-security tradeoff.


Q8. What is the utility of a digital certificate? Where is it used? How are these signatures created? What are its

components?

Ans. A digital certificate, also known as a public key certificate, is used to cryptographically link ownership of a public key

with the entity that owns it. Digital certificates are for sharing public keys to be used for encryption and authentication.

Digital certificates include the public key being certified, identifying information about the entity that owns the public key,

metadata relating to the digital certificate and a digital signature of the public key created by the issuer of the certificate.

The distribution, authentication and revocation of digital certificates are the primary purposes of the public key

infrastructure (PKI), the system by which public keys are distributed and authenticated. Public key cryptography depends on

key pairs: one a private key to be held by the owner and used for signing and decrypting, and one a public key that can be

used for encryption of data sent to the public key owner or authentication of the certificate holder's signed data. The digital

certificate enables entities to share their public key in a way that can be authenticated.

Where is it used: Digital certificates are used in public key cryptography functions; they are most commonly used for

initializing secure SSL connections between web browsers and web servers. Digital certificates are also used for sharing keys

to be used for public key encryption and authentication of digital signatures.

Digital certificates are used by all major web browsers and web servers to provide assurance that published content has not

been modified by any unauthorized actors, and to share keys for encrypting and decrypting web content. Digital certificates

are also used in other contexts, both online and offline, for providing cryptographic assurance and privacy of data.


How these signatures are created:


The steps followed in creating digital signature are :


 




1. A message digest is computed by applying a hash function to the message, and then the digest is encrypted using the private key of the sender to form the digital signature. (digital signature = encryption(private key of sender, message digest), where message digest = message digest algorithm(message).)

2. The digital signature is then transmitted with the message (message + digital signature is transmitted).

3. The receiver decrypts the digital signature using the public key of the sender. (This assures authenticity: only the sender has his private key, so only the sender can encrypt with it, and the result can be decrypted with the sender's public key.)

4. The receiver now has the message digest.

5. The receiver can compute the message digest from the message (the actual message is sent with the digital signature).

6. The message digest computed by the receiver and the message digest obtained by decrypting the digital signature need to be the same to ensure integrity.
The message digest is computed using a one-way hash function, i.e. a hash function for which computing the hash value of a message is easy, but computing the message from its hash value is very difficult. A toy sketch of the sign/verify flow follows.
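A toy C sketch of the sign/verify flow; the hash here is a simple illustrative function and the key pair is the small textbook RSA example (n = 3233, e = 17, d = 2753), assumed purely for demonstration and in no way a secure parameter set:

#include <stdio.h>

/* Toy textbook RSA parameters: n = 61*53 = 3233, e = 17, d = 2753. */
enum { N_MOD = 3233, E_PUB = 17, D_PRIV = 2753 };

/* Square-and-multiply modular exponentiation. */
static unsigned modpow(unsigned base, unsigned exp, unsigned mod)
{
    unsigned long long result = 1, b = base % mod;
    while (exp) {
        if (exp & 1) result = result * b % mod;
        b = b * b % mod;
        exp >>= 1;
    }
    return (unsigned)result;
}

/* Toy message digest (a real scheme would use SHA-2 or similar). */
static unsigned digest(const char *msg)
{
    unsigned h = 5381;
    while (*msg) h = h * 33 + (unsigned char)*msg++;
    return h % N_MOD; /* the digest must be smaller than the modulus */
}

int main(void)
{
    const char *msg = "hello";
    /* Sign: encrypt the digest with the private key. */
    unsigned sig = modpow(digest(msg), D_PRIV, N_MOD);
    /* Verify: decrypt with the public key and compare with a fresh digest. */
    unsigned ok = (modpow(sig, E_PUB, N_MOD) == digest(msg));
    printf("signature = %u, verified = %s\n", sig, ok ? "yes" : "no");
    return 0;
}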


[Figure: digital signature flow - the sender hashes the original data with a one-way hash and encrypts the digest with the private key to form the digital signature; the receiver decrypts the signature with the public key and compares it with a freshly computed hash; identical hashes validate data integrity]

Its components: A digital certificate has the following components:

• Version: identifies the version of X.509.
• Certificate serial number: a unique integer generated by the CA.
• Signature algorithm identifier: identifies the algorithm used by the CA to sign the certificate.
• Issuer name: the name of the CA who issues the certificate.
• Validity: shows the validity period of the certificate.
• Subject name: the name of the user to whom the certificate belongs.
• Subject public key information: contains the public key of the user and the algorithm used for the key.

Version 2
It has two additional fields:

• Issuer unique identifier: helps find the CA uniquely if two or more CAs have used the same issuer name.
• Subject unique identifier: helps find the user uniquely if two or more users have used the same name.

Version 3: Version 3 contains many extensions of digital certificates.

 



Q9. Discuss the features of IPSec.


Ans. IP security (IPSec) is an Internet Engineering Task Force (IETF) standard suite of protocols between two communication points across an IP network that provides data authentication, integrity and confidentiality. It also defines how packets are encrypted, decrypted and authenticated. The protocols needed for secure key exchange and key management are defined in it.


Uses of IP Security

IPsec can be used to do the following things:

• To encrypt application layer data.
• To provide security for routers sending routing data across the public internet.
• To provide authentication without encryption, e.g. to authenticate that data originates from a known sender.
• To protect network data by setting up circuits using IPsec tunneling, in which all the data being sent between the two endpoints is encrypted, as with a Virtual Private Network (VPN) connection.


Components of IP Security
It has the following components:
1. Encapsulating Security Payload (ESP) -
It provides data integrity, encryption, authentication and anti-replay. It also provides authentication for the payload.
2. Authentication Header (AH) -
It also provides data integrity, authentication and anti-replay, but it does not provide encryption. The anti-replay protection protects against unauthorized retransmission of packets. It does not protect the data's confidentiality.
3. Internet Key Exchange (IKE) -
It is a network security protocol designed to dynamically exchange encryption keys and negotiate a Security Association (SA) between two devices. The Security Association (SA) establishes shared security attributes between two network entities to support secure communication. The Internet Security Association and Key Management Protocol (ISAKMP) provides a framework for authentication and key exchange; ISAKMP specifies how Security Associations (SAs) are set up and how direct connections between two hosts using IPsec are established.

Internet Key Exchange (IKE) provides message content protection and also an open framework for implementing standard algorithms such as SHA and MD5. These algorithms produce a unique identifier for each packet, which allows a device to determine whether a packet is correct or not. Packets which are not authorized are discarded and not given to the receiver.


 


 


 


 


 


 


 


 


 


 


 


 


 


 


[Figure: ESP packet format - the original packet (IP HDR | TCP | DATA) becomes IP HDR | ESP HDR | TCP | Data | ESP Trailer | ESP Authentication; encryption covers the payload through the ESP trailer, and authentication covers the ESP header through the ESP trailer]

 



Working of IP Security:

1. The host checks whether the packet should be transmitted using IPsec or not. Such packet traffic triggers the security policy for itself; this is done when the system sending the packet applies the appropriate encryption. Incoming packets are also checked by the host to ensure that they are encrypted properly.

2. Then IKE Phase 1 starts, in which the two hosts (using IPsec) authenticate themselves to each other to start a secure channel. It has two modes: Main mode, which provides greater security, and Aggressive mode, which enables the hosts to establish an IPsec circuit more quickly.

3. The channel created in the last step is then used to securely negotiate the way the IP circuit will encrypt data across the IP circuit.

4. Now IKE Phase 2 is conducted over the secure channel, in which the two hosts negotiate the type of cryptographic algorithms to use on the session and agree on the secret keying material to be used with those algorithms.

5. Then the data is exchanged across the newly created IPsec encrypted tunnel. These packets are encrypted and decrypted by the hosts using the IPsec SAs.

6. When the communication between the hosts is completed, or the session times out, the IPsec tunnel is terminated by both hosts discarding the keys.


Q10. How is Silly Window Syndrome created by a receiver? What are the proposed solutions? Discuss.
Ans. Silly Window Syndrome is a problem that arises due to poor implementation of TCP. It degrades TCP performance and makes data transmission extremely inefficient. The problem is called so because:

1. It causes the sender window size to shrink to a silly value.
2. The window size shrinks to such an extent that the data being transmitted is smaller than the TCP header.

What are the causes?
The two major causes of this syndrome are as follows:

1. The sender window transmitting one byte of data repeatedly.
2. The receiver window accepting one byte of data repeatedly.


Cause 1: Sender window transmitting one byte of data repeatedly -

Suppose only one byte of data is generated by an application. A poor implementation of TCP transmits this small segment of data: every time the application generates a byte, the window transmits it. This makes the transmission process slow and inefficient. The problem is solved by Nagle's algorithm.


Nagle's algorithm suggests:

1. The sender should send only the first byte on receiving one byte of data from the application.
2. The sender should buffer all the remaining bytes until the outstanding byte gets acknowledged.
3. In other words, the sender should wait for 1 RTT (Round Trip Time).

After receiving the acknowledgement, the sender should send the buffered data in one TCP segment, and then buffer the data again until the previously sent data gets acknowledged. A schematic sketch follows.
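A schematic C sketch of the Nagle decision rule; the buffering and send primitives are illustrative stand-ins, not a real socket API:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static char buffer[1024];      /* bytes waiting to be sent */
static size_t buffered = 0;
static bool unacked = false;   /* is a segment still awaiting its ACK? */

/* Illustrative send primitive; a real stack would hand this to TCP. */
static void send_segment(const char *data, size_t len)
{
    printf("segment sent: %.*s (%zu bytes)\n", (int)len, data, len);
    unacked = true;
}

/* Application produced one byte: send it only if nothing is outstanding. */
static void app_write(char byte)
{
    if (!unacked)
        send_segment(&byte, 1);       /* the first byte goes out immediately */
    else
        buffer[buffered++] = byte;    /* otherwise buffer until the ACK */
}

/* ACK arrived: flush everything accumulated during the round trip. */
static void on_ack(void)
{
    unacked = false;
    if (buffered > 0) {
        send_segment(buffer, buffered);
        buffered = 0;
    }
}

int main(void)
{
    for (const char *p = "hello"; *p; p++)
        app_write(*p);   /* 'h' is sent alone; "ello" is buffered */
    on_ack();            /* the ACK for 'h' flushes "ello" in one segment */
    return 0;
}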


Cause 2: Receiver window accepting one byte of data repeatedly -
Consider the case when the receiver is unable to process all the incoming data. In such a case, the receiver will

 




advertise a small window size. The process continues and the window size becomes smaller and smaller; a stage arrives when it repeatedly advertises a window size of 1 byte. This makes the receiving process slow and inefficient. The solution to this problem is Clark's solution.


Clark's solution suggests:

1. The receiver should not send a window update for 1 byte.
2. The receiver should wait until it has a decent amount of space available.
3. The receiver should then advertise that window size to the sender.

Two further points:

1. Nagle's algorithm is turned off for applications that require data to be sent immediately, since Nagle's algorithm can introduce delay: it sends only one data segment per round trip.
2. Nagle's and Clark's algorithms can work together; they are complementary.


Example:

A fast typist can type 100 words a minute, and each word has an average of 6 characters. Demonstrate Nagle's algorithm by showing the sequence of TCP segment exchanges between a client with input from our fast typist and a server. Indicate how many characters are contained in each segment sent from the client. Assume that the client and server are on the same LAN and the RTT is 20 ms.


Nagle's algorithm suggests that the sender should wait for 1 RTT before sending the data; the amount of data received from the application layer in 1 RTT should be sent to the receiver.

Amount of data accumulated in 1 RTT
= (600 characters / 1 minute) x 20 ms
= (600 characters / 60 s) x 20 ms
= (10 characters / 10^3 ms) x 20 ms
= 0.2 characters

From here, we observe:

Even if the sender waits for 1 RTT, not even a single character is produced. So the sender will have to wait until it receives at least 1 character, and it then sends that character in one segment. Thus, one character will be sent per segment. Assuming a 20-byte TCP header and a 20-byte IP header, a 41-byte packet will be sent for each 1-byte character.




MCS 022 Solution (2019-2020)

Q1. a) What are the two criteria for classification of the advanced operating systems? Discuss any two operating systems in both the categories.
Answer : -
Advanced Operating Systems
  1. Architecture Driven System
    1. Network Operating System
    2. Distributed Operating System
    3. Multiprocessor Operating System
  2. Application Driven System
    1. Database Operating System
    2. Real-Time Operating System
    3. Multimedia Operating System

Network Operating System
A network operating system is a specialized operating system for a network device such as a router, switch or firewall.
Historically, operating systems with networking capabilities were described as network operating systems because they allowed personal computers (PCs) to participate in computer networks and shared file and printer access within a local area network (LAN).
Network operating systems can be embedded in a router or hardware firewall that operates the functions in the network layer (layer 3).
Some important network OSs include:
  • Cisco IOS - Cisco Internetwork Operating System (IOS) is a family of network operating systems used on many Cisco Systems routers and current Cisco network switches. Earlier Cisco switches ran CatOS. IOS is a package of routing, switching, internetworking and telecommunications functions integrated into a multitasking operating system.
  • Cisco NX-OS - NX-OS is a network operating system for the Nexus-series Ethernet switches and MDS-series Fibre Channel storage area network switches made by Cisco Systems.
  • AppleShare - AppleShare was a product from Apple Computer which implemented various network services. Its main purpose was to act as a file server, using the AFP protocol.
Distributed Operating System
A distributed operating system is an operating system that runs on a network of computers.
In a distributed operating system, each user feels as if they are running on a single large system with one operating system. The users do not need to know where the files are located in the network.
Advantages
  • Failure of one system will not affect communication in the rest of the network, because all systems are independent of each other.
  • Since resources are shared, computation is very fast.
  • The data exchange speed is increased by using electronic mail.
  • The load on the host computer is reduced.
  • These systems are easily scalable, as many systems can easily be added to the network.
  • Delay in data processing is reduced.
Disadvantages
  • Failure of the main network will stop the entire communication.
  • The languages used to build distributed systems are not yet well defined.
  • They are very expensive.
  • The underlying software is highly complex.

Real-Time Operating System
Real time system means that the system is subjected to real time, i.e., response should be guaranteed within a specified timing constraint or system should meet the specified deadline. For example: missile systems, air traffic control systems, robots etc.
There are two types of Real-Time Operating System -
  • Hard Real-Time Systems - Hard Real-Time Systems are require for the applications where time constraints are very strict and even the shortest possible delay is not acceptable. These systems are built for saving life like automatic parachutes or air bags which are required to be readily available in case of any accident.
    In hard real-time systems, secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual memory is almost never found.
  • Soft Real-Time Systems - Soft real-time systems are less restrictive: a critical real-time task gets priority over other tasks and retains that priority until it completes, but an occasional missed deadline is tolerated. For example: multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.



Q1. b) Discuss the key characteristics of modern operating systems.
Answer : -
Multi-threading - Multi-threading is the ability of a program or an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having multiple copies of the program running in the computer. The process is divided into threads that can run simultaneously.
Symmetric Multi-processing - In Symmetric Multi-processing system a computer has more than one processor. These processors can share the same memory, data path, I/O facilities and also the same job for execution.
Advantages of Symmetric Multiprocessing -
  • The throughput of the system is increased in symmetric multiprocessing. As there are multiple processors, more processes are executed.
  • Symmetric multiprocessing systems are much more reliable than single processor systems. Even if a processor fails, the system still endures. Only its efficiency is decreased a little.
Distributed Operating System - A Distributed Operating System is an operating system that runs on a network of computers. The operating system, memory and files are shared from the server by the users on the network. Each user has the impression of working on a single large system with one operating system, and does not need to know where in the network the files are located.
Micro-Kernel Architecture - The kernel is the core part of an operating system which manages system resources. It also acts like a bridge between applications and the hardware of the computer. It is one of the first programs loaded on start-up (after the boot loader).
A Micro-Kernel is the minimum software that is required to correctly implement an operating system. This includes memory management, process scheduling mechanisms and basic inter-process communication.
Object-Oriented Design - It is used for adding modular extensions to a small kernel. It also enables programmers to customize an operating system without disrupting system integrity.



Q2. Discuss the objectives and primary functions of the following connecting devices :
  • Hubs
  • Routers
Answer : -
Hubs
If multiple incoming connections need to be connected with multiple outgoing connections, then a hub is required. In data communications, a hub is a place of convergence where data arrives from one or more directions and is forwarded out in one or more other directions. Hubs are multi-port repeaters, and as such they obey the same rules as repeaters. They operate at the OSI Model Physical Layer. Hubs are used to provide a physical Star Topology.
Routers
In an environment consisting of several network segments with different protocols and architectures, a bridge may not be adequate for ensuring fast communication among all of the segments. A complex network needs a device which not only knows the address of each segment, but also can determine the best path for sending data and filter broadcast traffic to the local segment. Such a device is called a Router. Routers are both hardware and software devices. Routers operate at the Network Layer of the OSI Model.



Q3. Explain virtual to physical address mapping concepts with the help of an abstract model.
Answer : - Your computer has two types of memory, Random Access Memory (RAM) and Virtual Memory. All programs use RAM, but when there isn't enough RAM for the program you're trying to run, Windows temporarily moves information that would normally be stored in RAM to a file on your hard disk called a Paging File. The amount of information temporarily stored in a paging file is also referred to as virtual memory. Using virtual memory, in other words, moving information to and from the paging file, frees up enough RAM for programs to run correctly.
A virtual address does not represent the actual physical location of an object in memory; instead, the system maintains a page table for each process, which is an internal data structure used to translate virtual addresses into their corresponding physical addresses. Each time a thread references an address, the system translates the virtual address to a physical address.
In this model, both virtual and physical memory are divided up into handy sized chunks called pages. These pages are all the same size. Each of these pages is given a unique number; the Page Frame Number (PFN). For every instruction in a program, for example to load a register with the contents of a location in memory, the CPU performs a mapping from a virtual address to a physical one. Also, if the instruction itself references memory then a translation is performed for that reference.
The address translation between virtual and physical memory is done by the CPU using page tables, which contain all the information that the CPU needs. Typically there is a page table for every process in the system. As a simple example of such a mapping, suppose Process X's virtual PFN 0 is mapped into memory at physical PFN 3, while Process Y's virtual PFN 2 is mapped to physical PFN 5. Each entry in the theoretical page table contains the following information :
  • The virtual PFN,
  • The physical PFN that it maps to,
  • Access control information for that page.
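
As a small illustration (not part of the standard answer), on a Linux system the page size and a process's virtual address ranges can be inspected from the shell; the exact output varies by machine :
$ getconf PAGE_SIZE          # size of one page in bytes (commonly 4096)
4096
$ head -3 /proc/self/maps    # first few virtual address ranges of the current process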



Q4. Describe important Linux directories and files.
Answer : -
  1. / – Root
    • Every single file and directory starts from the root directory.
    • Only root user has write privilege under this directory.
    • Please note that /root is the root user’s home directory, which is not the same as /.
  2. /bin – User Binaries
    • Contains binary executables.
    • Common Linux commands you need to use in single-user mode are located under this directory.
    • Commands used by all the users of the system are located here. For example : ls, ping, grep, cp, etc.
  3. /boot – Boot Loader Files
    • Contains boot loader related files.
    • Kernel initrd, vmlinuz and GRUB files are located under /boot. For example : initrd.img-2.6.32-24-generic, vmlinuz-2.6.32-24-generic.
  4. /dev – Device Files
    • Contains device files.
    • These include terminal devices, usb, or any device attached to the system. For example: /dev/tty1, /dev/usbmon0, etc.
  5. /etc – Configuration Files
    • Contains configuration files required by all programs.
    • This also contains startup and shutdown shell scripts used to start/stop individual programs. For example: /etc/resolv.conf, /etc/logrotate.conf, etc.
  6. /home – Home Directories
    • Home directories for all users to store their personal files. For example: /home/debabrata, /home/amit, etc.
  7. /lib – System Libraries
    • Contains library files that support the binaries located under /bin and /sbin.
    • Library filenames are either ld* or lib*.so.*, for example: ld-2.11.1.so, libncurses.so.5.7, etc.
  8. /media – Removable Media Devices
    • Temporary mount directory for removable devices. For examples, /media/cdrom for CD-ROM, /media/floppy for floppy drives, /media/cdrecorder for CD writer, etc.
  9. /mnt – Mount Directory
    • Temporary mount directory where sysadmins can mount filesystems.
  10. /opt – Optional add-on Applications
    • opt stands for optional.
    • Contains add-on applications from individual vendors.
    • Add-on applications should be installed under /opt/ or an /opt/ sub-directory.
  11. /proc
    • Contains information about system processes.
    • This is a pseudo filesystem that contains information about running processes. For example : the /proc/{pid} directory contains information about the process with that particular pid.
    • This is a virtual filesystem with text information about system resources. For example : /proc/uptime
  12. /sbin – System Binaries
    • Just like /bin, /sbin also contains binary executables.
    • Linux commands located under this directory are typically used by the system administrator for system maintenance purposes. For example : reboot, fdisk, ifconfig, etc.
  13. /srv – Service Data
    • srv stands for service.
    • Contains data for server-specific services. For example, /srv/cvs contains CVS related data.
  14. /tmp – Temporary Files
    • Directory that contains temporary files created by system and users.
    • Files under this directory are deleted when system is rebooted.
  15. /usr – User Programs
    • Contains binaries, libraries, documentation, and source-code for second level programs.
    • /usr/bin contains binary files for user programs. If you can’t find a user binary under /bin, look under /usr/bin. For example : at, awk, less, etc.
    • /usr/sbin contains binary files for system administrators. If you can’t find a system binary under /sbin, look under /usr/sbin. For example: sshd, useradd, userdel, etc.
    • /usr/lib contains libraries for /usr/bin and /usr/sbin
    • /usr/local contains user programs that you install from source. For example, when you install apache from source, it goes under /usr/local/apache2
    • /usr/src holds the Linux kernel sources, header-files and documentation.
  16. /var – Variable Files
    • Contains files whose content is expected to grow.
    • This includes system log files (/var/log), packages and database files (/var/lib), emails (/var/mail), print queues (/var/spool), lock files (/var/lock), temp files needed across reboots (/var/tmp), etc.
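
These directories can be explored directly from the shell; for example :
$ ls /             # list the top-level directories described above
$ ls -l /var/log   # view the system log files kept under /var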



Q5. a) What will be the output of followings?
Answer : - Consider that the "myfile.text" contains the following lines -
Motherboard
Processor
RAM
Monitor
Hard Disk
DVD Writer
Graphic Card
RAM
Motherboard
Hard Disk
mohan play football

$ cat myfile.text | head -7 | tail -5
Answer : - Print the last 5 lines from the first 7 lines of the "myfile.text" file
root@UBUNTU-PC:/MyDirectory# cat myfile.text | head -7 | tail -5
RAM
Monitor
Hard Disk
DVD Writer
Graphic Card
root@UBUNTU-PC:/MyDirectory#

$ cat filename | more ls -l > temp
Answer : - This command is not a meaningful pipeline. Because more is given operands, it treats -l as a more option and ls as a filename to display, so the output piped from `cat filename` is ignored. Since standard output is redirected to the file temp, more does not paginate and simply copies its input through. The net effect, if a file named ls exists in the current directory, is to copy that file into temp; otherwise more reports that the file ls cannot be found.

$ sort myfile.text | unique
Answer : - Wrong command : there is no standard filter named unique.
The right command is - sort myfile.text | uniq
First sort the file contents alphabetically, then remove the duplicate lines.
root@UBUNTU-PC:/MyDirectory# sort myfile.text | uniq
DVD Writer
Graphic Card
Hard Disk
mohan play football
Monitor
Motherboard
Processor
RAM
root@UBUNTU-PC:/MyDirectory#

$ cat myfile.text | grep "mohan" | wc -l
Answer : - Count the number of lines in the "myfile.text" file which contain the particular pattern "mohan".
root@UBUNTU-PC:/MyDirectory# cat myfile.text | grep "mohan" | wc -l
1
root@UBUNTU-PC:/MyDirectory#

$ ls -l | grep "jan"
Answer : - Displays the detailed list of files and directories whose long-listing entries contain the pattern "jan" (for example, in a filename). Note that grep is case-sensitive and ls -l prints the modification month as "Jan", so listing files modified in January requires ls -l | grep -i "jan".



Q5. b) What options can be used with grep command?
Answer : - Options which can be used with the grep command :
-c : Prints only a count of the lines that match the pattern.
-h : Displays the matched lines, but does not display the filenames.
-i : Ignores case when matching.
-l : Displays a list of matching filenames only.
-n : Displays the matched lines and their line numbers.
-v : Prints all the lines that do not match the pattern.
-e exp : Specifies an expression; can be used multiple times.
-f file : Takes patterns from a file, one per line.
-E : Treats the pattern as an extended regular expression (ERE).
-w : Matches whole words only.
-o : Prints only the matched parts of a matching line, with each such part on a separate output line.
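
A few usage examples of these options against the "myfile.text" file shown in Q5. a) :
root@UBUNTU-PC:/MyDirectory# grep -c "RAM" myfile.text
2
root@UBUNTU-PC:/MyDirectory# grep -in "ram" myfile.text
3:RAM
8:RAM
root@UBUNTU-PC:/MyDirectory# grep -w "Hard" myfile.text
Hard Disk
Hard Disk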


Q6. Briefly describe the features of followings and how are they configured in Linux?
  • Apache web server
  • DNS
  • NFS server
Answer : -
(i) Apache Web Server
Apache is the most widely used Web Server application in UNIX-like operating systems but can be used on almost all platforms such as Windows, OS X, etc.
The reasons behind its popularity are :
  • It is free to download and install.
  • It is open source - the source code is visible to anyone and everyone, which enables anyone to adjust the code, optimize it, and fix errors and security holes. People can add new features and write new modules.
  • It can be used for small websites of one or two pages, or huge websites of hundreds and thousands of pages, serving millions of regular visitors each month. It can serve both static and dynamic content.
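
A minimal configuration sketch, assuming a Debian/Ubuntu system (package names, paths and the site name mysite will differ elsewhere) :
$ sudo apt-get install apache2                           # install the server
$ sudo nano /etc/apache2/sites-available/mysite.conf     # define a virtual host for the site
$ sudo a2ensite mysite                                   # enable the site
$ sudo systemctl restart apache2                         # restart Apache to apply the configuration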

(ii) DNS
Domain Name System is an Internet service that translates domain names into IP addresses. Since domain names are alphabetic, they are easier to remember. The Internet however, is really based on IP addresses. Every time you use a domain name, therefore, a DNS service must translate the name into the corresponding IP address. For example, the domain name www.------.com might translate to 198.60.136.3
A domain name is divided into three parts (for example, in www.example.com, "www" is the host or subdomain, "example" is the second-level domain, and "com" is the top-level domain) -
  • Top Level Domain - A top-level domain recognizes a certain element regarding the associated website, such as its objective (business, government, education), its owner, or the geographical area from which it originated.
    Generic Top Level Domains
    TLD - Description
    .com - Commercial
    .edu - Education
    .gov - U.S. national and state government agencies
    .int - International organizations
    .mil - U.S. military
    .net - Network
    .org - Organization
    Country-Code Top Level Domains
    TLD - Description
    .in - India
    .ru - Russia
    .de - Germany
    .au - Australia
    .uk - United Kingdom
    .cn - China
    .br - Brazil
    .fr - France
    etc.
    Sponsored Top Level Domains
    TLD - Description
    .aero - Members of the air-transport industry
    .coop - Cooperative associations
    .jobs - Human resource managers
    .museum - Museums
    .post - Postal services
    .tel - For businesses and individuals to publish contact data
    .travel - Travel agents, airlines, tourism bureaus, etc.
    .mobi - Providers and consumers of mobile products and services
    .cat - Catalan linguistic and cultural community
    .xxx - Pornographic sites
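
On Linux, DNS is commonly configured with the BIND name server; below is a minimal zone-file sketch for a hypothetical domain example.com (all names and addresses are illustrative only) :
$TTL 86400
@    IN  SOA  ns1.example.com. admin.example.com. (
         2021010101 ; serial
         3600       ; refresh
         1800       ; retry
         604800     ; expire
         86400 )    ; negative-caching TTL
@    IN  NS   ns1.example.com.
ns1  IN  A    192.0.2.1
www  IN  A    192.0.2.10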


(iii) NFS Server
A network file system (NFS) is a type of file system mechanism that enables the storage and retrieval of data from multiple disks and directories across a shared network. It enables local users to access remote data and files in the same way they are accessed locally. NFS was initially developed by Sun Microsystems.
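
A minimal sketch of configuring an NFS server on Linux, assuming a Debian/Ubuntu system (the export path /srv/share and the client subnet are hypothetical) :
$ sudo apt-get install nfs-kernel-server                               # install the NFS server
$ echo "/srv/share 192.0.2.0/24(rw,sync)" | sudo tee -a /etc/exports   # declare the export
$ sudo exportfs -ra                                                    # re-export everything in /etc/exports
On a client, the exported directory is then mounted like a local filesystem :
$ sudo mount server:/srv/share /mnt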



Q7. What are the purposes of dynamic addressing and directory services in Windows 2000? How are they configured?
Answer : -
Windows 2000 runs on TCP/IP, and to utilize AD you must forsake the older Windows Internet Naming Service (WINS) technology in favor of DNS. WINS can still be useful on Windows 2000 networks, especially if you have non-Windows 2000 or XP clients; however, Active Directory will not use WINS for name resolution. The biggest downside to DNS has been that, although distributed, it was still designed as a system that requires manual updates. Whenever a new host is added to a domain, an administrator needs to manually update the zone database on the primary DNS server to reference the new host. If there are secondary name servers on the network, the changes are replicated in a zone transfer.
However, a dynamic updating feature proposed in RFC 2136 provides the means for updating a zone's primary server automatically. Windows 2000 supports this new dynamic DNS, or simply DDNS. The caveat is that it works only with Windows 2000 clients, and older Windows NT, Windows 9X, and non-Windows clients still require manual updates, or at minimum, a Windows 2000 Dynamic Host Configuration Protocol (DHCP) proxy to act on their behalf during the dynamic registration process. When DDNS is enabled and Windows 2000 boots up and contacts a DHCP proxy for IP addressing information, it automatically sends an update to the name server it has been configured to use, adding its Address (A) resource record. This greatly simplifies the administration of DNS on a Windows 2000 network. In addition, DDNS simplifies the administration of AD by allowing domain controllers (DCs) to automatically register their Service (SRV) resource records into DNS without administrator intervention. It is important to note that, by default, DHCP will not automatically update DNS; you have to configure DHCP explicitly to update DNS by editing its properties and checking the box labeled Enable Updates for Clients That Do Not Support Dynamic Updates.
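
On a Windows client, this dynamic registration can also be triggered and inspected manually from the command prompt; a minimal illustration :
C:\> ipconfig /registerdns     (refreshes DHCP leases and re-registers the client's DNS records)
C:\> ipconfig /displaydns      (displays the contents of the local DNS resolver cache)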


Q8. What is the purpose of VPN and name VPN technologies supported by Windows 2000?
Answer : - A virtual private network, or VPN, is an encrypted connection over the Internet from a device to a network. The encrypted connection helps ensure that sensitive data is safely transmitted. It prevents unauthorized people from eavesdropping on the traffic and allows the user to conduct work remotely. VPN technology is widely used in corporate environments.
A VPN extends a corporate network through encrypted connections made over the Internet. Because the traffic is encrypted between the device and the network, traffic remains private as it travels. An employee can work outside the office and still securely connect to the corporate network. Even smartphones and tablets can connect through a VPN.
Types of VPNs
  • Remote access - A remote access VPN securely connects a device outside the corporate office. These devices are known as endpoints and may be laptops, tablets, or smartphones. Advances in VPN technology have allowed security checks to be conducted on endpoints to make sure they meet a certain posture before connecting. Think of remote access as computer to network.
  • Site-to-Site - A Site-to-Site VPN connects the corporate office to branch offices over the Internet. Site-to-Site VPNs are used when distance makes it impractical to have direct network connections between these offices. Dedicated equipment is used to establish and maintain a connection. Think of Site-to-Site access as network to network.
Virtual Private Network Protocols
  • PPTP - Point to Point Tunneling Protocol (PPTP) is one of the oldest VPN protocols, developed by Microsoft. It is the fastest of all VPN protocols, which makes it suitable for applications where speed is important, such as streaming and gaming. However, PPTP is not as secure because of its weak encryption. PPTP uses TCP port 1723 for communication.
  • L2TP/IPsec - L2TP over IPsec is more secure than PPTP and offers more features. L2TP/IPsec is a way of implementing two protocols together in order to gain the best features of each.
    The data transmitted via the L2TP/IPSec protocol is usually authenticated twice. Each data packet transmitted via the tunnel includes L2TP headers. As a result, the data is de-multiplexed by the server. The double authentication of the data slows down performance, but it does provide the highest security.



Q9. What are Intrusion Detection Systems(IDS)? What IDS can do?
Answer : - An Intrusion Detection System (IDS) is a type of security software designed to automatically alert administrators when someone or something is trying to compromise an information system through malicious activities or security policy violations.
IDS is basically classified into two types :
  • Network Intrusion Detection System (NIDS)
    Network intrusion detection systems (NIDS) are set up at a planned point within the network to examine traffic from all devices on the network. A NIDS observes passing traffic on the entire subnet and matches the traffic that is passed on the subnets to a collection of known attacks. Once an attack is identified or abnormal behavior is observed, an alert can be sent to the administrator. An example of a NIDS deployment is installing it on the subnet where firewalls are located, in order to see if someone is trying to crack the firewall.
  • Host Intrusion Detection System (HIDS)
    Host intrusion detection systems (HIDS) run on independent hosts or devices on the network. A HIDS monitors the incoming and outgoing packets from the device only and will alert the administrator if suspicious or malicious activity is detected. It takes a snapshot of existing system files and compares it with the previous snapshot. If monitored system files were edited or deleted, an alert is sent to the administrator to investigate.
Detection Method of IDS -
  • Signature-based Method - Signature-based IDS detects the attacks on the basis of the specific patterns such as number of bytes or number of 1’s or number of 0’s in the network traffic. It also detects on the basis of the already known malicious instruction sequence that is used by the malware. The detected patterns in the IDS are known as signatures.
    Signature-based IDS can easily detect attacks whose pattern (signature) already exists in the system, but it is quite difficult to detect new malware attacks, as their pattern (signature) is not yet known (see the example rule after this list).
  • Anomaly-based Method - Anomaly-based IDS was introduced to detect unknown malware attacks, as new malware is developed rapidly. An anomaly-based IDS uses machine learning to create a model of trustworthy activity; incoming traffic is compared with that model and declared suspicious if it does not fit. Machine-learning-based methods generalize better than signature-based IDS, as the models can be trained for specific applications and hardware configurations.
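
As a concrete illustration of the signature-based method, a NIDS such as the open-source tool Snort matches traffic against rules like the following (the rule and SID are purely illustrative) :
alert tcp any any -> $HOME_NET 22 (msg:"Possible SSH connection attempt"; sid:1000001; rev:1;)
$ sudo snort -A console -q -c /etc/snort/snort.conf -i eth0    # run Snort as a NIDS on interface eth0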



Q10. Elaborate the primary aspects of firewall? What are the limitations of firewall?
Answer : - A firewall is a network security system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented as both hardware and software, or a combination of both. Network firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.
Firewalls can be either hardware or software but the ideal configuration will consist of both. In addition to limiting access to your computer and network, a firewall is also useful for allowing remote access to a private network through secure authentication certificates and logins.

Firewall Filtering Techniques
Firewalls are used to protect both home and corporate networks. A typical firewall program or hardware device filters all information coming through the Internet to your network or computer system. There are several types of firewall techniques that will prevent potentially harmful information from getting through :
  • Packet Filter - Looks at each packet entering or leaving the network and accepts or rejects it based on user-defined rules. Packet filtering is fairly effective and transparent to users, but it is difficult to configure. In addition, it is susceptible to IP spoofing.
  • Application Gateway - Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose a performance degradation.
  • Circuit-level Gateway - Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.
  • Proxy Server - Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.
In practice, many firewalls use two or more of these techniques in concert. A firewall is considered a first line of defense in protecting private information. For greater security, data can be encrypted.
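
A minimal sketch of the packet-filter technique described above, using Linux iptables (the port and policy choices are illustrative) :
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # allow incoming SSH
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies to outbound traffic
$ sudo iptables -P INPUT DROP                                             # default policy : drop everything else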

Limitations of Firewall
  • Firewalls cannot protect against what has been authorized.
  • It cannot stop attacks if the traffic does not pass through them.
  • They are only as effective as the rules they are configured to enforce.
  • The firewall cannot protect against the transfer of virus-infected programs or files. Because of the variety of operating systems and applications supported inside the perimeter, it would be impractical and perhaps impossible for the firewall to scan all incoming files, e-mail, and messages for viruses.
  • The firewall does not protect against internal threats, such as a disgruntled employee or an employee who unwittingly cooperates with an external attacker.



Q11. How does RAID support fault tolerance systems? How is it implemented in Windows 2000?
Answer : - Fault tolerance is the ability to survive one or more disk failures.
RAID is a technology that is used to increase the performance and/or reliability of data storage. RAID stands for Redundant Array of Independent Disks. A RAID system consists of two or more drives working in parallel. There are different RAID levels, each optimized for a specific situation. In Windows 2000, software RAID is implemented through dynamic disks in the Disk Management snap-in: striped volumes (RAID 0), mirrored volumes (RAID 1) and RAID-5 volumes, the latter two being available on the Server editions.
  • RAID 0 – striping
  • RAID 1 – mirroring
  • RAID 5 – striping with parity
  • RAID 6 – striping with double parity
  • RAID 10 – combining mirroring and striping

RAID 0 – striping
In a RAID 0 system data are split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance. This performance can be enhanced further by using multiple controllers, ideally one controller per disk.
Advantages
  • RAID 0 offers great performance, both in read and write operations. There is no overhead caused by parity controls.
  • All storage capacity is used, there is no overhead.
Disadvantages
  • RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be used for mission-critical systems.

RAID 1 – mirroring
Data are stored twice by writing them to both the data drive (or set of data drives) and a mirror drive (or set of mirror drives). If a drive fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 drives for a RAID 1 array.
Advantages
  • RAID 1 offers excellent read speed and a write-speed that is comparable to that of a single drive.
  • In case a drive fails, data do not have to be rebuilt; they just have to be copied to the replacement drive.
Disadvantages
  • The main disadvantage is that the effective storage capacity is only half of the total drive capacity because all data get written twice.
  • Software RAID 1 solutions do not always allow a hot swap of a failed drive. That means the failed drive can only be replaced after powering down the computer it is attached to. For servers that are used simultaneously by many people, this may not be acceptable. Such systems typically use hardware controllers that do support hot swapping.

RAID 5 – striping with parity
RAID 5 is the most common secure RAID level. It requires at least 3 drives but can work with up to 16 drives. Data blocks are striped across the drives and on one drive a parity checksum of all the block data is written. The parity data are not written to a fixed drive, they are spread across all drives. Using the parity data, the computer can recalculate the data of one of the other data blocks, should those data no longer be available. That means a RAID 5 array can withstand a single drive failure without losing data or access to data.
Advantages
  • Read data transactions are very fast while write data transactions are somewhat slower (due to the parity that has to be calculated).
  • If a drive fails, you still have access to all data, even while the failed drive is being replaced and the storage controller rebuilds the data on the new drive.
Disadvantages
  • Drive failures have an effect on throughput, although this is still acceptable.
  • This is complex technology. If one of the disks in an array using 4TB disks fails and is replaced, restoring the data (the rebuild time) may take a day or longer, depending on the load on the array and the speed of the controller. If another disk goes bad during that time, data are lost forever.
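
For illustration only (the question itself targets Windows 2000) : on Linux, a software RAID 5 array of this kind can be created with mdadm; the device names below are hypothetical :
$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
$ cat /proc/mdstat     # watch the array build and check its status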

RAID 6 – striping with double parity
RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at least 4 drives and can withstand 2 drives dying simultaneously. The chances that two drives break down at exactly the same moment are of course very small. However, if a drive in a RAID 5 system dies and is replaced by a new drive, it takes hours or even more than a day to rebuild the swapped drive. If another drive dies during that time, you still lose all of your data. With RAID 6, the RAID array will even survive that second failure.
Advantages
  • Like with RAID 5, read data transactions are very fast.
  • If two drives fail, you still have access to all data, even while the failed drives are being replaced. So RAID 6 is more secure than RAID 5.
Disadvantages
  • Write data transactions are slower than RAID 5 due to the additional parity data that have to be calculated.
  • Drive failures have an effect on throughput, although this is still acceptable.
  • This is complex technology. Rebuilding an array in which one drive failed can take a long time.

RAID 10 – combining mirroring and striping
It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. This is a nested or hybrid RAID configuration. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers.
Advantages
  • If something goes wrong with one of the disks in a RAID 10 configuration, the rebuild time is very fast since all that is needed is copying all the data from the surviving mirror to a new drive.
Disadvantages
  • Half of the storage capacity goes to mirroring, so compared to large RAID 5 or RAID 6 arrays, this is an expensive way to have redundancy.



Q12. List the different types of malicious code and compare the virus protection tools.
Answer : - Malicious software is any software that the user did not authorize to be loaded or software that collects data about a user without their permission.
Different Types of Malicious Software
  • Spyware - Spyware is any technology that aids in gathering information about a person or organization without their knowledge. They can monitor and log the activity performed on a target system, like log key strokes, or gather credit card and other information.
  • Virus - A computer virus is a piece of software that can 'infect' a computer, install itself and copy itself to other computers, without the user's knowledge or permission. It usually attaches itself to other computer programs, data files, or the boot sector of a hard drive.
  • Worm - Unlike a virus, a worm is a standalone piece of malicious software that replicates itself in order to spread to other computers. It often uses a computer network to spread, relying on security flaws on the target system to gain access.
  • Trojan (Trojan Horse) - A type of malware that disguises itself as legitimate software but contains hidden malicious code that creates back doors into a system, typically causing loss or theft of data to an external source.
  • Adware - Adware is software which automatically causes pop-up and banner adverts to be displayed in order to generate revenue for its author or publisher. A lot of freeware uses adware, but not always in a malicious way; if it is malicious, it is then classed as spyware or malware.
  • Rootkit - A rootkit assists a hacker in remotely accessing or controlling a computing device or network without being exposed. Rootkits are hard to detect because they can become active even before the system’s OS boots up.



Q13. What is Kerberos? Describe Kerberos management in Windows operating system.
Answer : - Kerberos is a ticketing-based authentication system, based on the use of symmetric keys. Kerberos uses tickets to provide authentication to resources instead of passwords. This eliminates the threat of password stealing via network sniffing. One of the biggest benefits of Kerberos is its ability to provide single sign-on (SSO). Once you log into your Kerberos environment, you will be automatically logged into other applications in the environment.
To help provide a secure environment, Kerberos makes use of Mutual Authentication. In Mutual Authentication, both the server and the client must be authenticated. The client knows that the server can be trusted, and the server knows that the client can be trusted. This authentication helps prevent man-in-the-middle attacks and spoofing. Kerberos is also time sensitive. The tickets in a Kerberos environment must be renewed periodically or they will expire.
When a user needs to access a service protected by Kerberos, the Kerberos protocol process can be divided into two phases - User Identity Authentication and Service Access.
User Identity Authentication - User authentication is a process of checking validity of identity information provided by users in the Kerberos authentication service. Identity information can be user names and passwords or information that can provide real identities in other forms. If user information passes the validity check, the Kerberos authentication service returns a valid Ticket-Granting Ticket (TGT) token, proving that the user has passed identity authentication. The user uses the TGT in the subsequent Service Access process.
Service Access - When the user needs to access a service, the user requests the Ticket-Granting Service (TGS) from the Kerberos server based on the TGT obtained in the first phase, providing the name of the service to be accessed. TGS checks the TGT and information about the service to be accessed. After the information passes the check, TGS returns a Service-Granting Ticket (SGT) token to the user. The user requests the component service based on the SGT and related user authentication information. The component service decrypts the SGT information in a symmetrical way and finishes user authentication. If the user passes the authentication process, the user can successfully access related resources of the service.
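
The two phases can be observed on a Linux client with the MIT Kerberos tools (the principal alice and the realm EXAMPLE.COM are hypothetical) :
$ kinit alice@EXAMPLE.COM        # phase 1 : obtain a TGT after password authentication
$ kvno host/server.example.com   # phase 2 : request a service ticket using the TGT
$ klist                          # list the cached TGT and service tickets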

Kerberos in Windows Systems
Kerberos is very prevalent in the Windows environment. In fact, Windows 2000 and later use Kerberos as the default method of authentication. When you install your Active Directory domain, the domain controller is also the Key Distribution Center. In order to use Kerberos in a Windows environment, your client system must be a part of the Windows domain. Kerberos is used when accessing file servers, Web servers, and other network resources. When you attempt to access a Web server, Windows will try to sign you in using Kerberos. If Kerberos authentication does not work, then the system will fall back to NTLM authentication.



Q14. Define IPSec? What are its features? Discuss its implementation in Windows 2000.
Answer : - Exchanging sensitive information across a network, especially a public network, requires a security method that will protect the data in transit. That’s where Internet Protocol Security (IPSec) comes in. IPSec is a set of protocols that allows you to sign and encrypt data to be sent across an IP network, and authenticate and decrypt the protected packets on the receiving end. Windows 2000 Professional and Server include IPSec.

What is IPSec ?
IPSec is a set of protocols and cryptography-based services that work together to protect data from unauthorized access or tampering when it is sent across an IP network. IPSec provides three basic services :
  • Authentication - Confirmation of the origin of the IP packet; verification that the purported sender actually sent it.
  • Integrity - “Signing” of the packet to ensure that the data has not been changed in any way between the time it left the sender and the time it was received at the authorized destination.
  • Confidentiality - Encryption of the data to render it unreadable without the correct key.

Implementation of IPSec in Windows 2000
Windows 2000’s implementation of IPSec provides a high level of security, using a combination of algorithms and keys to encrypt the data so that it will be unreadable if intercepted along its route. IPSec uses two protocols to accomplish these tasks :
  • Authentication Header (AH) - This protocol provides authentication services for IPsec. It allows the recipient of a message to verify the identity of the sender. It also allows the recipient to verify that intermediate devices haven’t changed any of the data in the datagram. It also provides protection against so-called replay attacks, whereby a message is captured by an unauthorized user and resent.
  • Encapsulating Security Payload (ESP) - AH provides integrity authentication services to IPsec-capable devices, so that they can verify that messages are received intact from other devices. For many applications, however, this is only one piece of the puzzle. We want not only to protect against intermediate devices changing the datagrams, but also to protect against them examining the contents. For this level of private communication, AH is not enough; we need to use the ESP protocol.
    The main job of ESP is to provide the privacy we seek for IP datagrams by encrypting them. An encryption algorithm combines the data in the datagram with a key to transform it into an encrypted form. This is then repackaged using a special format and transmitted to the destination, which decrypts it using the same algorithm.