Make-over test, CIS 451, Oct 27, 2003, 11:30 - 13:00, Kupf 203. Bring pencils, pens (black and blue only), and a calculator. Use the paper provided by Dr Ott. Closed book: put away books, notes, and laptops. Make sure to give intermediate results if you want partial credit for partially incorrect responses.

1A. What does VoIP stand for?

Voice over IP, or Voice over Internet Protocol. That last name is not logical, so ``Voice over Internet'' also is OK.

1B. What does OSI stand for?

Open Systems Interconnection.

2A. A deep-space probe sends a signal back to earth. The frequencies it can use are from 1 GHz to 1.001 GHz. The signal-to-noise ratio is 5 dB. Compute the theoretically highest possible information rate this connection can achieve.

The capacity is H*log2(1 + S/N). Here 10*log10(S/N) = 5, so log10(S/N) = .5 = 1/2, and S/N = 10^(1/2) = 3.16227766. Then log2(1 + S/N) = log2(4.16227766) = ln(4.16227766)/ln(2) = 2.057373209. The bandwidth is H = 1,001,000,000 - 1,000,000,000 = 1,000,000 Hz. So the maximal possible data rate is 2,057,373 bits/sec.

2B. What result did you use in 2A (name of person)?

Shannon.

2C. Same as 2A. Only, now the connection is really noisy: the signal-to-noise ratio is -5 dB. What is the theoretically highest possible data rate?

This time 10*log10(S/N) = -5, so log10(S/N) = -.5, and S/N = 10^(-.5) = .316227766 (hey! exactly a factor 10 difference!). Then log2(1 + S/N) = log2(1.316227766) = .396409409161. Times 10^6 gives 396,409 bits/sec.

3. You must build a sound transmission (and storage) system for elephants. You are not interested in representing their trumpeting sounds, only in capturing and representing their low-frequency rumblings (1 - 500 Hz). Your transmission system has a capacity of 160 Kbits/sec, and you have decided to use all of that capacity. Assuming you do not use any fancy coding:

3A. What is your sampling rate?

The highest frequency you want to regenerate is 500 Hz, so you sample 1000 times per second.
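The capacity arithmetic in question 2 can be double-checked with a short script (Python, purely for illustration; the function name is invented):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    # Shannon: C = H * log2(1 + S/N), with S/N recovered from decibels
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

H = 1_001_000_000 - 1_000_000_000      # 1,000,000 Hz of bandwidth
print(round(shannon_capacity(H, 5)))   # 2A: 2057373 bits/sec
print(round(shannon_capacity(H, -5)))  # 2C: 396409 bits/sec
```

Note how the same function handles both cases: the -5 dB link still has positive capacity, just a lower one.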
3B. What result did you use in 3A (name of person)? Give a BRIEF explanation of how you reached your conclusion.

This is Nyquist's result. If you sample fewer than 1000 times a second you will be unable to regenerate frequencies just below 500 Hz. If you sample more than 1000 times a second you will spend bandwidth (and money, etc.) regenerating information you are not really interested in.

3C. How many bits per sample?

At 1000 samples/sec and 160,000 bits/sec you get 160 bits per sample, or 20 Bytes per sample.

3D. How many levels per sample does that represent?

This is 2 to the power 160 levels per sample (2^160). (Silly, but that is what the assumptions lead to.)

4. You are using the IP (Internet Protocol) checksum for error detection. Only, you do it based on 8-bit words, not 16 bits as in the real Internet. You must send the message

0011 1100 1010 0101 1001

4A. Show what the sender does to create the checksum. Compute the checksum.

Add zeros to get an integer number times 8 bits: 0011 1100 1010 0101 1001 0000. Add the three 8-bit ``words'' using one's complement arithmetic: this gives 0111 0010. Take the one's complement: 1000 1101. That is the checksum.

4B. Assume no error occurs. Show what the destination does to verify that no error occurred.

The destination adds the four words 1000 1101, 0011 1100, 1010 0101, 1001 0000 (one's complement) and checks that the result is 1111 1111.

5. Two computers, S (Source) and D (Destination), are communicating over a dedicated link. The communication is full duplex. The bandwidth of the link is 32 Mbit/sec. The length of the link is 100 km. The speed at which the signal propagates on the link is (2/3)*c. Practically speaking, S sends only data frames 1000 Bytes long, and D sends only acknowledgement frames 50 Bytes long.

5A. Compute and give the serialization delay of a data frame.

A 1000 Byte frame is 8000 bits. The serialization delay is 8000/32,000,000 sec = .00025 sec = .25 msec. (A quarter of a msec.)
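The sender and receiver steps in question 4 can be sketched in a few lines (Python for illustration; the helper name is invented):

```python
def ones_complement_add8(words):
    # Add 8-bit words, folding the carry back in (one's complement addition)
    total = 0
    for w in words:
        total += w
        total = (total & 0xFF) + (total >> 8)
    return total

# 4A: the message, padded with zeros to three 8-bit words
data = [0b00111100, 0b10100101, 0b10010000]
s = ones_complement_add8(data)                 # 0111 0010
checksum = s ^ 0xFF                            # one's complement of the sum
print(f"{checksum:08b}")                       # prints 10001101

# 4B: the receiver adds all four words and expects all ones
print(f"{ones_complement_add8(data + [checksum]):08b}")   # prints 11111111
```

The end-around carry (the `(total & 0xFF) + (total >> 8)` fold) is what makes this one's complement addition rather than ordinary modulo-256 addition.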
5B. Compute and give the propagation delay of the link.

The speed of propagation is (2/3)*300,000 km/sec = 200,000 km/sec. For a 100 km link the propagation delay is 100/200,000 sec = .0005 sec = .5 msec.

5C. What is the total delay of a data frame?

The total delay of a data frame is .25 msec + .5 msec = .75 msec.

Suppose S and D use a sliding window protocol.

5D. What window size (expressed in frames) do you recommend S use? Why? (In this case, only the ``why'' matters.)

We start by making an assumption: the computers S and D have ``infinitely fast'' processors. We also assume that S always has stuff to send. We want to make the window W large enough that under normal operation (no lost or damaged frames) computer S always has a new acknowledgement in time, so it does not let bandwidth go unused. That means (see the figure, in a .pdf file): if at some point in time t S starts sending a data frame, then .25 msec later the tail of that frame disappears from S's output port (and S can start sending another data frame, if the window is large enough). At t + .5 msec the ``nose'' of the first frame starts arriving at D, and at t + .75 msec computer D has the whole of that frame. At that point, D can send an ACK for that frame. Since D is so fast, at time t + .75 msec it starts sending the ACK. The serialization delay of the ACK is 400/32,000,000 sec = .0125 msec (one twentieth of that of a data frame). So S gets the ACK .75 + .0125 + .5 = 1.2625 msec after it started sending the original data frame. The window must be large enough that S can send data frames (can start sending a frame) at times t, t + .25, t + .5, t + .75, t + 1.0, and t + 1.25 msec. Then, at time t + 1.2625 it gets an ACK for the original data frame. That ACK gives it the right to send another data frame (which will happen at time t + 1.5 msec). Conclusion: W must be equal to 6 frames. Then S sends data frames at times t, t + .25, t + .5, t + .75, t + 1.0, t + 1.25, t + 1.5, etc.
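The delay figures used in 5A-5D can be reproduced numerically (Python for illustration; the variable names are made up):

```python
link_rate  = 32_000_000           # bits/sec
prop_speed = (2 / 3) * 300_000    # km/sec, i.e. 200,000 km/sec
length_km  = 100

ser_data = 1000 * 8 / link_rate   # 5A: serialization of a data frame, .25 msec
prop     = length_km / prop_speed # 5B: propagation delay, .5 msec
total    = ser_data + prop        # 5C: total delay of a data frame, .75 msec
ser_ack  = 50 * 8 / link_rate     # serialization of an ACK, .0125 msec

# Time from starting a data frame until its ACK is completely back at S
ack_rtt = ser_data + prop + ser_ack + prop   # 1.2625 msec
print(total * 1000, ack_rtt * 1000)
```

The final line reports both delays in milliseconds, matching the .75 and 1.2625 figures derived above.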
The bandwidth of the link from S to D is then fully utilized. The ACK for the data frame sent at time t arrives at the source at time t + 1.2625, in time for the data frame to be sent at time t + 1.5 msec. From time t + 1.25 msec until time t + 1.2625 msec, S has 6 unacknowledged data frames. Then, from time t + 1.2625 until time t + 1.5 it has only 5 unacknowledged data frames. Then it sends one, so the number of unacknowledged frames goes up to 6 again. (Actually: at time t + 1.2625, S generates a new data frame and puts it in the output buffer. At time t + 1.5 msec output of that data frame starts.) At time t + 1.5125 the data frame sent at t + .25 gets acknowledged, etc.

This schedule has a little bit of slack: 1.5 - 1.2625 = .2375 msec worth. If the computers D and S (together) need more time than that to generate an ACK after getting a data frame, and to translate an arriving ACK into permission to send another data frame, a larger window is advisable. But not much larger: if the window is too large, recovery from loss can become complicated. The window should be ``just large enough'' to maximize throughput under normal operation, not larger than that.

If the window were only 5 frames, frame 6 could not start being sent at t + 1.25. Instead it would have to wait until the ACK for frame 1 is completely inside S. That is, frame 6 would be given to the output buffer in S at t + 1.2625. This gives a small amount of wasted bandwidth (less throughput). An even smaller window wastes more bandwidth. As long as W <= 5 data frames (and the processors are infinitely fast), the bandwidth utilization on the link from S to D is (W*.25)/1.2625.

Please note: assuming the processors are indeed ``infinitely fast'', if we increase the window beyond 6 frames, the extra frame(s) in the window will be sitting in the output buffer of S, but will not increase throughput. As long as W >= 6 (and the processors are ``fast enough'') the bandwidth utilization of the link from S to D is 100%.
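The window-size reasoning above reduces to a one-line calculation, plus the utilization formula for undersized windows (Python for illustration; this assumes the infinitely-fast-processor model used above):

```python
import math

ser_data = 0.25     # msec, serialization delay of one data frame
ack_rtt  = 1.2625   # msec, from frame start until its ACK is fully received

# Smallest window that keeps the S->D link busy the whole time
W = math.ceil(ack_rtt / ser_data)
print(W)   # prints 6

# Utilization of the S->D link as a function of the window size:
# (w * .25) / 1.2625 for small windows, capped at 100%
for w in range(1, 9):
    util = min(1.0, w * ser_data / ack_rtt)
    print(w, f"{util:.4f}")
```

The loop shows the utilization climbing by .25/1.2625 per extra frame up through W = 5 (about 99% there), then pinning at 1.0000 from W = 6 onward, which is exactly the W <= 5 and W >= 6 behavior described above.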
W=6 is the smallest window size that achieves the maximal throughput.