#import "../template/lib.typ": *
#set page(paper: "a4")
#show: notes.with(
  title: [EE2T1],
  subtitle: [Telecommunication and Sensing],
  author: "Folkert Kevelam"
)

= Lecture 1 - Introduction

== Information

#definition[
  Information content: information is related to probability; a less probable message contains more information.
  $ I_j = log_2(1/P_j) = - log_2(P_j) space "[bit]" $
  Information is additive:
  $ I_(i j) &= log_2(1/(P_i P_j)) = -log_2(P_i)-log_2(P_j) \
  &= I_i + I_j $
  iff the messages are independent.
]

#definition[
  Source entropy, the average amount of information per message generated by a source:
  $ H = sum_(j=1)^M P_j I_j = sum_(j=1)^M P_j log_2(1/P_j) space "[bit/symbol]" $
  In a binary system, the source entropy is maximal when $P_1 = P_0 = 0.5$.
  The speed of a source:
  $ R = H/T space "[bit/s]" $
]

#theorem[
  Shannon-Hartley theorem:
  $ C = B dot log_2(1 + S/N) space "[bit/s]" $
  - $C = "capacity [bit/s]"$
  - $B = "bandwidth [Hz]"$
  - $S/N = "ratio of signal power to noise power"$
]

== Principles of range measurement

#definition[
  The transmitter "fires" a signal and the receiver measures the time delay $tau$ between the moments of transmission and reception of the echo.
  $ 2R = c dot tau arrow R = (c dot tau) / 2 $
  with $c$ being the speed of light.
]

#definition[
  The ability of a radar to resolve two targets with a range difference $delta R$ is called *range resolution*.
  $ delta R = (c dot tau_p) / 2 approx c/(2 B) space "[m]" $
]

== Modulation

#definition[
  Modulation: *manipulation of a signal waveform* to carry information, in order to transmit the signal at a specified frequency in the spectrum.
  $ s(t) = R(t)cos(2 pi f_c t + phi(t)) $
  with
  $ R(t) = L{m(t)} space "linear modulation" \
  phi(t) = L{m(t)} space "angle modulation" $
]

== Practical signal waveforms

+ DC value, mean value:
  $ w_(D C) = angle.l w(t) angle.r = lim_(T arrow infinity) 1/T integral_(-T/2)^(T/2) w(t) d t $
+ Instantaneous power:
  $ p(t) = v(t) dot i(t) $
+ Average power:
  $ P = angle.l p(t) angle.r = angle.l v(t) dot i(t) angle.r $
+ RMS value (root-mean-square):
  $ w_(r m s) = sqrt(angle.l w^2(t) angle.r) $
  For a resistive load:
  $ P = v_(r m s) i_(r m s) = (v_(r m s)^2)/R = i_(r m s)^2 R $
+ Normalized power = power delivered to a $1 space Omega$ load:
  $ P = angle.l w^2(t) angle.r = lim_(T arrow infinity) 1/T integral_(-T/2)^(T/2) w^2(t) d t space "[W] = [J/s]" $
  $w(t)$ is a *power waveform* iff $0 < P < infinity$.
+ Normalized energy = energy dissipated in a $1 space Omega$ load:
  $ E = lim_(T arrow infinity) integral_(-T/2)^(T/2) w^2(t) d t space "[J]" $
  $w(t)$ is an *energy waveform* iff $0 < E < infinity$.

A signal waveform $w(t)$ cannot be both an *energy waveform* and a *power waveform*. Practical waveforms are always energy waveforms.

= Lecture 2

== Distortion-free transmission

Requirements for *distortion-free transmission*:
+ $y(t) = A dot x(t - T_d), space |A| > 0, space T_d >= 0$
+ $Y(f) = A dot X(f)e^(-2 pi j f T_d)$

So $H(f) = (Y(f))/(X(f)) = A e^(-2 pi j f T_d) arrow phi(f) = -2 pi f T_d$

Over the frequency band which contains the signal:
- the amplitude response should be flat: $|H(f)| = |A| > 0$
- the phase should decrease linearly with frequency, i.e. a constant delay for all frequency components: $T_d = -1/(2 pi f) "ang"{H(f)} = - 1/(2 pi f) phi(f)$

=== Time-variant systems: mobile multipath channel

$h(t) = A_0 delta(t- T_0) - A_1 delta(t-T_1)$\
$H(f) = A_0 e^(-2 pi j f T_0) - A_1 e^(-2 pi j f T_1)$

When moving, $T_0$ and $T_1$ change and so does $H(f)$.

== Bandwidth definitions

- *Absolute bandwidth* - the bandwidth between the frequencies with a gain of $-infinity$.
- *$-3$ dB bandwidth* - the bandwidth between the $-3$ dB gain points
- *Equivalent noise bandwidth* -
  $ B_(e q) = integral_0^infinity (|H(f)|^2)/(|H(0)|^2) d f $
- *Null- or null-to-null bandwidth* - the frequency bandwidth between the first $-infinity$ gain points (nulls).
- *Bounded spectrum bandwidth* -
  $ (P(f))/(P(0)) = 10^(-x/10) $
- *Power bandwidth* -
  $ integral_0^(B_(99 %)) P(f) d f = 0.99 integral_0^(infinity) P(f) d f $

== Band-limited signals and noise

+ Band-limited signals allow for multiplexing in the frequency domain.
+ Band-limited signals can be completely represented by a set of discrete-time sample values.

A waveform $w(t)$ is said to be absolutely band-limited if: $W(f) = cal(F){w(t)} = 0 space "for" space |f| >= B_0$ \
and absolutely time-limited if: $w(t) = 0 space "for" space |t| > T_0$ \
*Absolutely time-limited signals cannot also be absolutely band-limited and vice versa.*

Uncertainty relation for the *time-bandwidth product*:
$ B_0 dot T_0 >= 1/2 space "or" \
B_0 = alpha / T_0 space "with" alpha >= 1/2 $

=== Sampling theorem

Every physical signal $w(t)$ can be expressed as:
$ w(t) = sum_(n=-infinity)^infinity a_n phi_n(t) $
with sample function:
$ phi_n(t) = (sin(pi f_s (t - n T_s)))/(pi f_s (t - n T_s)) = (sin(pi f_s (t - n/f_s)))/(pi f_s (t - n/f_s)) = sinc(f_s (t - n T_s)) $
and coefficients:
$ &a_n = f_s integral_(-infinity)^infinity w(t)phi_n(t) d t space "and" \
&f_s integral_(-infinity)^infinity phi_m(t) phi_n(t) d t = cases(1 space "for" m=n, 0 space "for" m eq.not n) $

If the signal $w(t)$ is band-limited to $B$ [Hz] and the sample frequency
$ f_s >= 2 B $
then the set ${a_k}$ is a complete representation of the signal $w(t)$, with
$ a_k = w(k / f_s) = w(k T_s) $

The lowest possible sample frequency, $f_s = 2B$, is called the *Nyquist rate*.

The minimum number of samples required to represent a time-continuous signal $w(t)$ with a bandwidth of $B$ [Hz] over a period $T_0$ is equal to:
$ N = 2B dot T_0 $
where $N$ is the number of dimensions needed
to describe the waveform $w(t)$ with a bandwidth $B$ over the period $T_0$. In practice we need more samples due to the uncertainty relation.

=== Ideal sampling

In ideal sampling we use the $delta$-function:
$ w_s(t) = w(t) sum_(k=-infinity)^infinity delta(t - k T_s) = w(t) sum_(n=-infinity)^infinity 1/T_s e^(2 pi j n f_s t) \
W_s(f) = f_s sum_(n=-infinity)^infinity W(f- n f_s) $

= Lecture 3 - Received signal power in a wireless link

== Travelling EM wave - parameters

*Wavelength*: $lambda = c / f$ with $c approx 3 dot 10^8$ m/s

*Radiation intensity*: $U = P_t / (4 pi R^2)$ where $P_t$ is the total transmit power and $R$ is the distance to the observer.

=== Antenna gain definition

$ G(theta, phi) = (4 pi U(theta, phi)) / P_("in") $

+ Radiated power intensity at a distance $R$ for an isotropic radiator:
  $ U = P_t / (4 pi R^2) $
+ Radiated power intensity at a distance $R$ for a radiator with gain $G_t$:
  $ U = P_t / (4 pi R^2) G_t $
+ Effective Isotropic Radiated Power:
  $ "EIRP" = P_t dot G_t $

=== Antenna's effective area

*Effective area* $A_e$:
$ A_e = P_L / U_(i n) $
$P_L$ - power delivered to the load (W) \
$U_(i n)$ - power intensity of the incident wave (W/m²)

=== Received power

$ P_r = P_t / (4 pi R^2) G_t dot A_e $

== Reciprocity

The *effective area* is related to the antenna *gain*:
$ A_e = G (lambda^2)/(4 pi) arrow G = (4 pi A_e) / (lambda^2) $
where \
$G$ - antenna gain \
$lambda$ - wavelength \
$A_e = eta A$ - effective area ($eta$ is the aperture efficiency)

- Isotropic - effective area: $lambda^2 / (4 pi)$
- Infinitesimal dipole or loop - effective area: $(1.5 lambda^2) / (4 pi)$
- Half-wave dipole - effective area: $(1.64 lambda^2)/(4 pi)$
- Horn - effective area: $eta = 0.81 arrow 0.81 A$
- Parabola - effective area: $eta = 0.56 arrow 0.56 A$

== Wireless link budget

$ P_r / P_t = (lambda/(4 pi R))^2 G_t dot G_r $

Free-space loss:
$ L_(F S) = ((4 pi R)/lambda)^2 $

== Transmission line propagation losses

Transmission line loss:
$ L_(T L) = (P(z=0))/(P(z=l)) = e^(2 gamma l) $

== Radar equation

- An omnidirectional transmitted power $P_t$ induces a power intensity at
  range $R$ of:
  $ U = P_t / (4 pi R^2) $
- The radar transmit antenna has a gain $G_t$, therefore the power intensity is:
  $ U = (P_t G_t) / (4 pi R^2) $
- At range $R$, a target with radar cross section (RCS) $sigma$ reflects a small portion of the power backward to the radar. Re-radiated power:
  $ P = (P_t G_t sigma) / (4 pi R^2) $
- This re-radiated power spreads over $4 pi R^2$ again on the way back and is captured by the receive aperture $A_e = (G_r lambda^2)/(4 pi)$, so the received power is:
  $ P_r = (P_t G_t G_r lambda^2 sigma)/((4 pi)^3 R^4) $

= Lecture 4

== Thermal Noise

$ V_n(f) approx sqrt(4 R k_B T) \
I_n(f) approx sqrt((4 k_B T)/R) \
V_(r m s) = sqrt(4 R k_B T B_n) $

The noise power delivered to a matched load ($R_L = R$) is based on half of the resistor noise voltage:
$ V_(L, r m s) = sqrt(k_B T B_n R) $

The *available* noise power:
$ P_a = k_B T B_n $

The *noise power spectral density* equals:
$ P_a(f) = (V_L^2 (f))/R = k_B T $

== Equivalent noise temperature

$ T_n = P_a / (k_B B_n) $

== Noise Characterization of a linear device

$ P_(a o) = G_a P_(n_(i n)) + P_(e x) \
P_(a o) = G_a (P_(n_(i n)) + P_e) \
P_e = P_(e x) / G_a $

== Noise Figure

$ F = ("output noise power of the actual device")/("output noise power of an ideal (noiseless) device") $
at $T_0 = 290$ K.
$ F = (k_B (T_0 + T_e)G_a B_n)/(k_B T_0 G_a B_n) = (T_0 + T_e) / T_0 = 1 + T_e / T_0 >= 1 \
F_(d B) = 10 dot log_(10)(1 + T_e / T_0) >= 0 "dB" $

== Noise Figure of a transmission line

$ L_(T L) = (P(z=0))/(P(z=l)) = e^(2 gamma l) \
L_(T L, d B) = 20 gamma l log_10(e) = alpha l $
$ T_e = (L-1)T_0 $

== Cascaded devices

$ T_e = T_(e 1) + (T_(e 2))/G_(a 1) + (T_(e 3))/(G_(a 1) G_(a 2)) + (T_(e 4))/(G_(a 1) G_(a 2) G_(a 3)) \
F = F_1 + (F_2 - 1)/G_(a 1) + (F_3 - 1)/(G_(a 1)G_(a 2)) $

= Lecture 5

== Environmental noise received by antenna

Total noise at the receive antenna due to environmental noise:
$ T_(A E) &= (integral_(Omega=0)^(Omega=4 pi) T_b(Omega) G(Omega) d Omega)/(integral_(Omega=0)^(Omega=4 pi) G(Omega) d Omega) \
&= (integral_(phi=0)^(phi=2 pi) integral_(theta=0)^(theta=pi) T_b (theta, phi)G(theta,phi)sin(theta)d theta d
phi)/(integral_(phi=0)^(phi=2 pi) integral_(theta=0)^(theta=pi) G(theta, phi) sin(theta) d theta d phi) $
where:
- $G$ - antenna gain
- $T_b$ - brightness temperature of different environmental sources

== Total Antenna noise

The total *antenna noise temperature* at the terminal is
$ T_a = e_A T_(A E) + (1-e_A) T_p $
where:
- $T_(A E)$ is the total captured brightness temperature of the different sources
- $T_p$ is the physical temperature (290 K)
- $e_A$ is the thermal efficiency of the antenna

The total *noise figure* of the antenna is:
$ F_a = 1 + T_a / T_0 \
F_(a, d B) = 10 log_10 (1 + T_a / T_0) $

== SNR in cable and in free space

SNR in cable:
$ "SNR"_("cable") = P_("out")/ N_("out") = P_("in") / (k_B B_n (L_(T L) - 1)T_0) $
SNR in free space:
$ "SNR"_("FS") = P_("FS","out") / N_("FS","out") = (P_("in") lambda^2 G_(T X) G_(R X)) / (k_B B_n (4 pi d)^2 T_a) $

== Link Budget

$ (S/N)_("Det") = (S/N)_("RX") &= (P_(E I R P) dot G_(F S) dot G_(A R))/(k_B dot T_(s y s) dot B_n) = (P_(E I R P) dot G_(F S) dot G_(A R))/(k_B dot (F-1)T_0 dot B_n) \
&= (P_(T X) dot G_(A T) dot G_(A R)) / (L_(F S) dot k_B (T_(A R) + T_e) B_n) $

== Communication range equation

$ R_("max") = ((P_t G_t G_r lambda^2)/((4 pi)^2 P_(r "min")))^(1/2) $
where
- $P_t$ is the transmit power
- $G_t$ is the transmit antenna gain
- $G_r$ is the receive antenna gain
- $lambda$ is the wavelength

$ "SNR" = S_0 / N = P_r / ((F-1) k_B T_0 B_n) \
P_(r "min") = (F-1) k_B T_0 B_n dot "SNR"_("min") $
The maximum operating range can be determined by combining the two:
$ R_("max") = ((P_t G_t G_r lambda^2)/((4 pi)^2 "SNR"_("min")(F-1)k_B T_0 B_n))^(1/2) $

== Radar range equation

$ R_("max") = ((P_t G_t G_r lambda^2 sigma)/((4 pi)^3 P_(r "min")))^(1/4) \
R_("max") = ((P_t G_t G_r lambda^2 sigma)/((4 pi)^3 "SNR"_("min")(F-1)k_B T_0 B_n))^(1/4) $

= Pulse amplitude and pulse code modulation

== Sampling theorem

Every physical signal can be expressed as:
$ w(t) = sum_(n=-infinity)^infinity a_n phi_n (t) $
with sample function
$ 
phi_n(t) = sinc(f_s (t - n T_s)) $
and coefficients
$ a_n = f_s integral_(-infinity)^infinity w(t) phi_n (t) d t \
f_s integral_(-infinity)^infinity phi_m (t) phi_n (t) d t = cases(1 space "for" m=n, 0 space "for" m eq.not n) $

=== Ideal sampling

$ w_s(t) = w(t) sum_(k=-infinity)^infinity delta (t - k T_s) \
= sum_(k=-infinity)^infinity w(k T_s)delta (t - k T_s) = w(t) sum_(n=-infinity)^infinity f_s e^(2 pi j n f_s t) $
with $T_s = 1/f_s, space f_s >= 2B$
$ W_s(f) = cal(F){w_s(t)} = f_s sum_(n=-infinity)^infinity W(f - n f_s) $

=== Natural sampling

$f_(s, "min") = 2 B space "and" space f_s >= 2B$
$ w_s (t) = w(t) dot s(t) \
s(t) = sum_(k=-infinity)^infinity Pi((t-k T_s)/tau) = sum_(n=-infinity)^infinity c_n e^(j 2 pi n f_s t)\
W_s(f) = sum_(n=-infinity)^infinity c_n W(f- n f_s) \
c_n = d (sin(n pi d))/(n pi d) = f_s tau sinc(n f_s tau) \
d = tau / T_s = f_s tau \
S(f) = cal(F){s(t)} = sum_(n=-infinity)^infinity c_n delta(f - n f_s) $

=== Signal recovery

+ Lowpass filter with $B < f_("cut-off") < f_s - B$
+ Down-converting the spectrum $W(f-n f_s)$ to baseband ($f=0$) by multiplying with $cos(2 pi n f_s t)$ using a mixer, followed by an LPF.
$ w_s(t) = w(t) dot sum_(n=-infinity)^infinity c_n e^(j n omega_s t) = w(t) dot sum_(n=-infinity)^infinity c_n (cos(n omega_s t) + j sin(n omega_s t)) \
= w(t)(c_0 + 2 sum_(n=1)^infinity c_n cos(n omega_s t)) $
Multiplying by $cos(k omega_s t)$:
$ w_s(t)cos(k omega_s t) = w(t)(c_0 cos(k omega_s t) + 2 sum_(n=1)^infinity c_n cos(n omega_s t)cos(k omega_s t)) \
= w(t)(c_0 cos(k omega_s t) + sum_(n=1)^infinity c_n (cos((n-k)omega_s t) + cos((n+k)omega_s t))) \
= c_k w(t) space "with the other components at higher frequencies removed by the LPF" $

=== Instantaneous sampling (flat-top PAM)

*Sample and hold circuit*:
$ w_s(t) = sum_(k=-infinity)^infinity w(k T_s) h(t - k T_s) \
h(t) = Pi (t/tau) = cases(1"," space "for" |t| < tau/2, 0"," space "for" |t| > tau/2) \
tau <= T_s \
W_s(f) = H(f) dot f_s sum_(n=-infinity)^infinity W(f - n f_s) $

+ For *natural sampling*, the individual spectral components *do have a flat frequency response*. The coefficients are only a function of $n$, $f_s$ and $tau$.
+ For *flat-top PAM*, the individual spectral components undergo frequency-dependent filtering, which complicates signal recovery. The filtering results in linear distortion; the ideal equalizer is $H^(-1)(f)$.
+ The bandwidth required for PAM signal transmission is much larger than needed for the baseband signal with bandwidth $B$ because of the narrow pulses for $tau/T_s << 1$. Thus, a *larger receiver bandwidth* is needed, which will pass more noise.

== Time division multiplexing

PAM pulses can be multiplexed in time, but the receiver has to select the correct pulses, meaning accurate synchronization is required.

== Pulse Code Modulation

PCM consists of three basic operations:
+ Signal sampling -> discrete-time analog pulses
+ Quantization of the amplitude -> discrete-time and discrete-amplitude pulses
+ Coding -> digital words are assigned to the discrete-time discrete-amplitude levels.
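The three PCM operations above can be sketched in a few lines of Python. This is a minimal illustration, not the lecture's reference implementation; the test signal, the word length `n_bits`, and the mid-rise quantizer mapping are assumptions made for the example.

```python
import numpy as np

def pcm_encode(x, V=1.0, n_bits=3):
    """Quantize samples x in [-V, V] to M = 2**n_bits uniform levels
    and code each level as an n_bits-long binary word (illustrative sketch)."""
    M = 2 ** n_bits
    delta = 2 * V / M                          # step size: delta = V_pp / M
    idx = np.clip(np.floor((x + V) / delta).astype(int), 0, M - 1)
    q = -V + (idx + 0.5) * delta               # reconstructed level Q(x_k)
    words = [format(i, f"0{n_bits}b") for i in idx]
    return q, words

# 1. Sampling: choose f_s >= 2B (assumed example values: B = 5 Hz, f_s = 20 Hz)
fs, B = 20.0, 5.0
t = np.arange(0, 1, 1 / fs)                    # sample instants k * T_s
x = 0.8 * np.sin(2 * np.pi * B * t)            # discrete-time analog pulses

# 2. Quantization and 3. Coding
q, words = pcm_encode(x, V=1.0, n_bits=3)
delta = 2 * 1.0 / 2 ** 3
print(np.max(np.abs(q - x)) <= delta / 2 + 1e-12)   # error bounded by delta/2
```

The final check reflects the quantization error bound $|epsilon| <= delta\/2$ discussed in the next section.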
=== Quantization errors

During quantization the following error is introduced: $|epsilon| <= delta / 2$, where $delta = V_(p p) / M$ is the step size, i.e. the distance between successive quantization levels. Quantization errors result in quantization noise.

With polar signaling, $a_(k j) in {-1, +1}$, the reconstructed value $Q(x_k)$ for sample $x_k$ is given by:
$ Q(x_k) = V sum_(j=1)^n a_(k j) (1/2)^j = delta/2 sum_(j=1)^n a_(k j) 2^(n-j) $

=== Noise in a PCM system

In a PCM communication system, the reconstructed signal $y_k = x_k + n_k$ suffers from three sources of noise:
+ Quantization noise: $e_q = Q(x_k) - x_k$
+ Bit error noise: $e_b = y_k - Q(x_k)$, reconstruction errors due to *detection errors*
+ Overload noise: the input signal of the ADC is outside the conversion range.

==== Quantization noise

Quantization noise is uniformly distributed between $(-delta/2, delta/2)$. The PDF $f_(e_q) (e_q)$ is uniform with height $1/delta$. The quantization noise power:
$ bar(e^2_q) = integral_(-infinity)^infinity e_q^2 f_(e_q) (e_q) d e_q = integral_(-delta/2)^(delta/2) e_q^2 1/delta d e_q = delta^2 / 12 = V^2 / (3M^2) $
with $delta = (2 dot V)/ 2^n = (2 dot V) / M$.

Signal-to-noise ratio at maximum signal level *due to quantization errors*:
$ (S/N)_("max") = (P_("signal-max"))/(P_("noise")) = V^2 / bar(e^2_q) = V^2 / (V^2 / (3 M^2)) = 3 M^2 $
with $M$ being the number of quantization levels, $M = 2^n$.

Signal-to-noise ratio for uniformly distributed amplitudes *due to quantization errors*:
$ (S/N)_("uniform") = (P_("uniform"))/(P_("noise")) = (V^2/3)/(bar(e_q^2)) = (V^2 / 3)/(V^2 / (3 M^2)) = M^2 $

==== Bit error noise

The probability of a *single error* in a PCM word, with bit error probability $P_e$ and word length $n$:
$ P_(e w) = binom(n, 1) P_e (1 - P_e)^(n-1) = n P_e (1- P_e)^(n-1) approx n dot P_e $
An error in the $j$th bit results in an *error voltage* of:
$ e_j = 2^(n-j) dot delta = 2^(-(j-1)) V = (2 V)/2^j $
The value of $bar(e^2_j)$ averaged over all $n$ bit positions gives the noise power *given* an error at a random bit location:
$ bar(e_j^2) = 1/n sum_(j=1)^n (delta dot 2^(n-j))^2 = 1/n sum_(k=0)^(n-1) (delta dot 2^k)^2 = delta^2 / n sum_(k=0)^(n-1) 4^k \
= delta^2 / n dot (4^n - 1)/(4-1) = 4/3 dot V^2 / n dot (M^2 - 1) / M^2 $
The average *bit error noise power* for bit error probability $P_e$ follows as:
$ bar(e_b^2) = bar([y_k - Q(x_k)]^2) = P_(e w) bar(e_j^2) = n P_e bar(e_j^2) = 4/3 V^2 P_e (M^2 - 1) / M^2 $

==== Total noise power and SNR

Combining the results for the quantization noise and the bit error noise, we get:
$ "SNR"_("pk out") = (3M^2) / (1+4 P_e (M^2 - 1)) $
If only quantization noise is considered:
$ "SNR"_("pk out") = 3 M^2 arrow "SNR"_("pk out", d B) = 10 dot log_10(3 M^2) space "dB" $
For uniformly distributed amplitudes:
$ (S/N)_("uniform") = M^2 / (1 + 4(M^2 - 1)P_e) $
which reduces to $M^2$ for negligible $P_e$.

The maximum possible SNR due to bit error noise (for $4 P_e M^2 >> 1$) is:
$ "SNR" approx 3/4 dot 1/P_e $
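As a quick numerical sanity check of these results, the Python sketch below (with assumed example parameters) estimates the quantization-noise power for uniformly distributed amplitudes and evaluates the peak SNR together with its large-$M$ limit:

```python
import numpy as np

rng = np.random.default_rng(0)
V, n = 1.0, 8                          # assumed example: 8-bit quantizer
M = 2 ** n
delta = 2 * V / M                      # step size

# Quantization noise: uniform on (-delta/2, delta/2), so power = delta^2 / 12
x = rng.uniform(-V, V, 200_000)        # uniformly distributed amplitudes
q = np.clip(np.round(x / delta) * delta, -V, V)   # mid-tread uniform quantizer
measured = np.mean((q - x) ** 2)
print(measured, delta ** 2 / 12)       # the two values agree closely

# Peak SNR including bit-error noise, and its limit for 4 * Pe * M^2 >> 1
def snr_pk(M, Pe):
    return 3 * M ** 2 / (1 + 4 * Pe * (M ** 2 - 1))

Pe = 1e-3
print(snr_pk(2 ** 12, Pe), 3 / (4 * Pe))   # approaches 3 / (4 * Pe)
```

For $P_e = 10^(-3)$ the limiting value is $3\/(4 P_e) = 750$, which the formula approaches once $M$ is large.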