ECE 280/Concept List/F21

From PrattWiki
Jump to navigation Jump to search

This page will be a summary of the topics covered during lectures. It is not meant to be a replacement for taking good notes!

Lecture 1 - 8/23

  • Class logistics and various resources on [sakai.duke.edu sakai]
  • Signals: "information about how one parameter changes depending on another parameter" - zyBooks
  • Systems: "processes that create output signals in response to input signals" paraphrased from zyBooks
  • Signal classifications
    • Continuous versus discrete
    • Analog versus digital and/or quantized
    • Periodic
      • Generally $$x(t)=x(t+kT)$$ for all integers k (i.e. $$x(t)=x(t+kT), k\in \mathbb{Z}$$). The period $$T$$ (sometimes called the fundamental period $$T_0$$) is the smallest positive value for which this relation is true
      • A periodic signal can be defined as an infinite sum of shifted versions of one period of the signal: $$x(t)=\sum_{n=-\infty}^{\infty}g(t-nT)$$ where $$g(t)$$ is only possibly nonzero within one particular period of the signal and 0 outside of that period (see the sketch at the end of this list).
    • Energy, power, or neither (more on this on Friday)
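A minimal numerical sketch (Python/NumPy, not part of the course materials) of the "sum of shifted copies" idea above; the one-period shape $$g(t)$$ and the period $$T$$ are made up for illustration.

    import numpy as np

    T = 2.0                                   # assumed period (illustrative)
    t = np.linspace(-4, 4, 2001)

    def g(u):
        # one period of the signal: nonzero only on [0, T)
        return np.where((u >= 0) & (u < T), np.sin(np.pi * u / T)**2, 0.0)

    # x(t) = sum over n of g(t - nT); a finite range of n covers the plotted window
    x = sum(g(t - n * T) for n in range(-3, 4))

    # periodicity check: x(t) and x(t + T) should agree on the overlapping samples
    shift = round(T / (t[1] - t[0]))          # number of samples in one period
    print(np.allclose(x[:-shift], x[shift:])) # expect True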

Lecture 2 - 8/27

  • More on periodic signals
    • The sum / difference of two periodic signals will be periodic if their periods are commensurable (i.e. if the ratio of the periods is a rational number) or if any aperiodic components are removed through addition or subtraction.
    • The period of a sum of periodic signals will be at most the least common multiple of the component signal periods; the actual period could be less than this depending on how the components combine or cancel
    • The product of two signals with periodic components will have elements at frequencies equal to the sums and differences of the frequencies in the first signal and the second signal. If the periods represented by those components are commensurable, the signal will be periodic, and again the upper bound on the period will be the least common multiple of the component periods.
  • Even and Odd
    • Purely even signals: $$x(t)=x(-t)$$ (even powered polynomials, cos)
    • Purely odd: $$x(t)=-x(-t)$$ (odd-powered polynomials, sin)
    • Even component: $$\mathcal{Ev}\{x(t)\}=x_e(t)=\frac{x(t)+x(-t)}{2}$$
    • Odd component: $$\mathcal{Od}\{x(t)\}=x_o(t)=\frac{x(t)-x(-t)}{2}$$
    • $$x_e(t)+x_o(t)=x(t)$$
    • The even and odd components of $$x(t)=e^{at}$$ end up being $$\cosh(at)$$ and $$\sinh(at)$$
    • The even and odd components of $$x(t)=e^{j\omega t}$$ end up being $$\cos(\omega t)$$ and $$j\sin(\omega t)$$
  • Singularity functions - see Singularity_Functions and specifically Singularity_Functions#Accumulated_Differences
    • Unit step: $$u(t)=\begin{cases}1, t>0\\0, t<0\end{cases}$$
    • Unit ramp: $$r(t)=\int_{-\infty}^{t}u(\tau)\,d\tau=\begin{cases}t, t>0\\0, t<0\end{cases}$$
  • Signal transformations
    • $$z(t)=K\,x(\pm a(t-t_0))$$ with
    • $$K$$: vertical scaling factor
    • $$\pm a$$: time scaling (with reversal if negative); $$|a|>1$$ speeds things up and $$|a|<1$$ slows down
    • $$t_0$$: time shift
    • Get into the form above first; for example, rewrite $$3\,x\left(\frac{t}{2}+4\right)$$ as $$3\,x\left(\frac{1}{2}(t+8)\right)$$ first
    • Do flip and scalings first, then shift the flipped and scaled versions.
  • Energy and Power
    • Energy signals have a finite amount of energy: $$E_{\infty}=\int_{-\infty}^{\infty}|x(\tau)|^2\,d\tau<\infty$$
      • Examples: Bounded finite duration signals; exponential decay
    • Power signals have an infinite amount of energy but a finite average power over all time: $$P_{\infty}=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}|x(\tau)|^2\,d\tau=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{T}|x(\tau)|^2\,d\tau<\infty$$ and $$E_{\infty}=\infty$$
      • Examples: Bounded infinite duration signals, including periodic signals
      • For periodic signals, only need one period (that is, remove the limit and use whatever period definition you want): $$P_{\infty}=\frac{1}{T}\int_{T}|x(\tau)|^2\,d\tau$$
    • If both the energy and the overall average power are infinite, the signal is neither an energy signal nor a power signal.
      • Examples: Certain unbounded signals such as $$x(t)=e^t$$
    • Vertical scaling of $$K$$ changes the energy or power by a factor of $$K^2$$
    • Time scaling by $$a$$ changes the energy by a factor of $$\frac{1}{|a|}$$; the all-time average power of a power signal is not changed by time scaling (see the sketch at the end of this list)
    • Neither time shift nor reversal impact energy or power, so you can shift and flip signal components to more mathematically convenient locations.
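A quick numerical check (Python/NumPy, not from lecture) of the scaling bullets above, using the energy signal $$x(t)=e^{-t}u(t)$$ (energy $$1/2$$) with illustrative values $$K=3$$ and $$a=2$$.

    import numpy as np

    t = np.arange(0, 40, 1e-4)
    dt = t[1] - t[0]

    x = np.exp(-t)                       # x(t) = e^{-t} u(t); energy = 1/2
    K, a = 3.0, 2.0
    z = K * np.exp(-a * t)               # z(t) = K x(a t)

    E_x = np.sum(np.abs(x)**2) * dt      # ~0.5
    E_z = np.sum(np.abs(z)**2) * dt      # ~K^2/|a| * E_x = 2.25
    print(E_x, E_z, K**2 / abs(a) * E_x) # last two should agree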

Lecture 3 - 8/30

  • Unit step and the $$u(t)\approx \frac{1}{2\epsilon}\left(r(t+\epsilon)-r(t-\epsilon)\right)$$ approximation
  • Unit ramp
  • Rectangular pulse function
  • Definition of the impulse function: Area of 1 at time 0; 0 elsewhere
    • Limit as $$\epsilon$$ goes to 0 of the derivative of the unit step approximation
    • Sifting property - figure out the time at which the $$\delta$$ fires, then check whether that time actually occurs given any restrictions from the integral limits
  • Integrals with unit steps - figure out when integrand might be non-zero and work from there
  • See Singularity_Functions and especially Singularity_Functions#General_Simplification_of_Integrals and Singularity_Functions#Convolution_Integral_Simplification_with_Step_Function_Product_as_Part_of_Integrand
  • Exponentials and sketches
    • Solution to $$\tau\frac{dv(t)}{dt}+v(t)=v_f$$ given $$v(t_0)=v_i$$ is $$v(t)=v_f+\left(v_i-v_f\right)\exp\left(-\frac{t-t_0}{\tau}\right)$$
    • Went over how to make an accurate sketch using approximate values and slopes
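A small sketch (Python/NumPy; the numbers are made up) evaluating the first-order solution above, which is what the accurate sketching relies on: the response starts at $$v_i$$, covers about 63% of the remaining gap to $$v_f$$ every time constant, and is essentially at $$v_f$$ after about five time constants.

    import numpy as np

    def v(t, vi, vf, tau, t0=0.0):
        # solution of tau dv/dt + v = vf with v(t0) = vi, valid for t >= t0
        return vf + (vi - vf) * np.exp(-(t - t0) / tau)

    vi, vf, tau = 0.0, 5.0, 2.0              # illustrative values
    for n in [0, 1, 2, 3, 5]:
        print(n, v(n * tau, vi, vf, tau))    # 0.00, 3.16, 4.32, 4.75, 4.97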

Lecture 4 - 9/3

  • Be sure to check out Campuswire! (access code: 5183)
  • Systems will generally be represented by blocks with $$H$$ or $$G$$ in them; those represent the transfer functions of the system
  • Some common system connections:
    • Cascade: one system after another; equivalent system transfer function is the product of the individual transfer functions
    • Parallel: outputs of two systems added together; equivalent system transfer function is the sum of the individual transfer functions
    • Feedback: the output of the forward system is fed back through a second system and combined with the input; for negative feedback the equivalent system transfer function is $$\frac{G}{1+GH}$$ where $$G$$ is the forward path and $$H$$ is the feedback path (see the sketch at the end of this list)
  • System properties
    • Linearity (linear versus nonlinear)
      • Common nonlinearities include additive constants, non-unity powers of signals
    • Time invariance (time invariant versus time-varying)
      • Common time-varying elements include $$t$$ outside of arguments of signals, time reversals, or time scales other than 1
    • Stability (stable versus unstable)
      • Common instabilities involve inverses, integrals, some trig functions, and derivatives if you are including discontinuities
    • Memoryless (memoryless versus having memory)
      • Memoryless systems can *only* depend on the input "right now"; some debate about derivatives
    • Causality (causal versus non-causal)
      • Real systems with time $$t$$ as the independent variable are causal; systems with location as the independent variable may be non-causal
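A minimal sketch (Python, not from lecture) of the interconnection rules above; the two transfer functions $$G$$ and $$H$$ are made-up first-order examples, evaluated at a single frequency $$s=j\omega$$.

    def G(s): return 10 / (s + 2)        # forward-path system (illustrative)
    def H(s): return 1 / (s + 5)         # second system / feedback path (illustrative)

    s = 1j * 3.0                         # evaluate at omega = 3 rad/s

    cascade  = G(s) * H(s)               # one system after another: product
    parallel = G(s) + H(s)               # outputs added together: sum
    feedback = G(s) / (1 + G(s) * H(s))  # negative feedback: G/(1+GH)
    print(cascade, parallel, feedback)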

Lecture 5 - 9/6

  • Revisited properties of linearity, time-invariance, stability, memorylessness, and causality
  • Introduced invertibility - whether for some system $$x(t)\,\longrightarrow\,y(t)$$ there exists some inverse system that would allow $$y(t)\,\longrightarrow\,x(t)$$ for all $$x(t)$$.
  • Quick review of frequency analysis using impedance and division to get a transfer function
    • Reminder of translating between time and frequency domain with $$\frac{d}{dt}\leftrightarrows j\omega$$
    • Discussion about "illegal" circuit conditions (instant voltage change across a capacitor or instant current change through an inductor) and "weird" circuit conditions (a voltage source in parallel with an inductor or a current source in series with a capacitor)
    • ECE 110 uses $$e^{j\omega t}$$ as the model signal for frequency analysis; we will eventually use $$e^{st}$$ where $$s=\sigma+j\omega$$
  • Introduction to LTI system analysis:
    • Define the step and impulse functions as given above
    • Define the impulse response $$h(t)$$ as the response to an impulse $$\delta(t)$$; that is, $$\delta(t)\,\longrightarrow\,h(t)$$
    • This will be mathematically very useful but physically impossible to measure exactly, though we may be able to measure it approximately using a high-amplitude, short-duration rectangular or other pulse with an area of 1.
    • Define the step response $$y_{\mbox{step}}(t)$$ as the response to a unit step $$u(t)$$; that is, $$u(t)\,\longrightarrow\,y_{\mbox{step}}(t)$$
    • This will be more likely to be physically obtainable but mathematically not quite as useful. Fortunately...
    • The step and impulse responses are related in the same ways as the step and impulse:
      $$\begin{align*} \delta(t)&=\frac{d}{dt}u(t) & u(t)&=\int_{-\infty}^t\delta(\tau)\,d\tau\\ h(t)&=\frac{d}{dt}y_{\mbox{step}}(t) & y_{\mbox{step}}(t)&=\int_{-\infty}^th(\tau)\,d\tau \end{align*}$$
    • Given those definitions, and assuming a linear-time invariant system:
      $$\begin{align*} \mbox{Definition}&~ & \delta(t)\,&\longrightarrow\,h(t)\\ \mbox{Time Invariant}&~ & \delta(t-\tau)\,&\longrightarrow\,h(t-\tau)\\ \mbox{Linearity (Homogeneity)}&~ & x(\tau)\,\delta(t-\tau)\,&\longrightarrow\,x(\tau)\,h(t-\tau)\\ \mbox{Linearity (Superposition)}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau\,&\longrightarrow\,\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \mbox{Sifting}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau=x(t)\,&\longrightarrow\,y(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \end{align*}$$
    • Punchline: For an LTI system with impulse response $$h(t)$$ and input signal $$x(t)$$ the output signal is given by the convolution integral:
      $$ \begin{align*} y(t)=x(t)*h(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau \end{align*}$$
and through a transformation of variables can also be given by:
$$ \begin{align*} y(t)=h(t)*x(t)=\int_{-\infty}^{\infty}x(t-\tau)\,h(\tau)\,d\tau \end{align*}$$
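A numerical sketch (Python/NumPy, not part of the lecture) of the convolution integral above: the integral is approximated by a discrete convolution scaled by the time step, and spot-checked against the closed-form result for $$x(t)=u(t)-u(t-1)$$ and $$h(t)=e^{-t}u(t)$$.

    import numpy as np

    dt = 0.001
    t = np.arange(0, 10, dt)

    x = ((t >= 0) & (t < 1)).astype(float)    # x(t) = u(t) - u(t-1)
    h = np.exp(-t)                            # h(t) = e^{-t} u(t)

    # y(t) = integral of x(tau) h(t - tau) dtau ~ (discrete convolution) * dt
    y = np.convolve(x, h)[:len(t)] * dt

    # closed form for t >= 1 is (e - 1) e^{-t}; check at t = 2
    print(y[int(2 / dt)], (np.e - 1) * np.exp(-2.0))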

Lecture 6 - 9/10

  • Recap of derivation of convolution
  • Time and Phasor based derivation of model equation for capacitor voltage of an RC circuit
    • Finding step response for an RC circuit
    • Taking derivative of step response to get impulse response of an RC circuit
  • Reminder that the response to a combination of shifted and scaled steps is a combination of shifted and scaled step responses.
  • Determination of the response to a decaying exponential input (see the sketch at the end of this list)
  • Basic convolution properties - see Convolution Shortcuts
  • Example using shortcuts
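A numerical sketch (Python/NumPy; the time constant and decay rate are made up) tying the RC bullets above together: the RC impulse response $$h(t)=\frac{1}{RC}e^{-t/RC}u(t)$$ is convolved with a decaying exponential input and compared against the closed-form result.

    import numpy as np

    dt = 0.001
    t = np.arange(0, 10, dt)

    RC, a = 1.0, 2.0                          # illustrative values with a != 1/RC
    h = (1 / RC) * np.exp(-t / RC)            # h(t) = (1/RC) e^{-t/RC} u(t)
    x = np.exp(-a * t)                        # input x(t) = e^{-a t} u(t)

    y = np.convolve(x, h)[:len(t)] * dt       # numerical convolution

    # closed form (a != 1/RC): y(t) = (1/RC)(e^{-a t} - e^{-t/RC}) / (1/RC - a)
    y_exact = (1 / RC) * (np.exp(-a * t) - np.exp(-t / RC)) / (1 / RC - a)
    print(np.max(np.abs(y - y_exact)))        # small, and shrinks as dt decreases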

Lecture 7 - 9/13

  • Graphical convolution (see ECE_280/Examples/Convolution)
  • System properties based on $$h(t)$$ (see the sketch at the end of this list)
    • Must be LTI since $$h(t)$$ is defined
    • Stable if $$\int_{-\infty}^{\infty}|h(\tau)|\,d\tau$$ is finite
    • Causal if $$h(t)=0$$ for all $$t<0$$
    • Invertible if there exists an $$h^{inv}(t)$$ such that $$h(t)*h^{inv}(t)=\delta(t)$$ where * is convolution.
  • Maple demonstration of Convolution - see files at Public ECE 280 Box Folder specifically in the lec07 folder
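A numerical sketch (Python/NumPy, not from lecture) of the stability and causality tests above, using the made-up impulse response $$h(t)=e^{-t}u(t)$$.

    import numpy as np

    dt = 0.001
    t = np.arange(-5, 20, dt)
    h = np.where(t >= 0, np.exp(-t), 0.0)     # illustrative h(t) = e^{-t} u(t)

    abs_area = np.sum(np.abs(h)) * dt         # approximates integral of |h|; ~1, finite -> stable
    causal = bool(np.all(h[t < 0] == 0))      # h(t) = 0 for all t < 0 -> causal
    print(abs_area, causal)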

Lecture 8 - 9/17

Pre-script: in all of the equations below we are assuming real-valued signals; if the signals are complex, one of the terms in the integrand is generally taken as a complex conjugate.

  • Correlation Function:$$\begin{align*}r_{xy}(t)=\int_{-\infty}^{\infty}x(\tau)\,y(t+\tau)\,d\tau\end{align*}$$ -- note that some references switch the arguments in the integral, which results in a mirrored version of the function
    • What kind of overlap do two signals have as you move one of the signals relative to the other? The independent variable indicates where the origin of $$x(t)$$ is relative to the origin of $$y(t)$$ in determining the area of overlap.
  • Correlation: $$\begin{align*}r_{xy}(0)=\int_{-\infty}^{\infty}x(\tau)\,y(\tau)\,d\tau\end{align*}$$
    • What kind of overlap do the two signals have not accounting for any time shift?
  • Autocorrelation Function:$$\begin{align*}r_{xx}(t)=\int_{-\infty}^{\infty}x(\tau)\,x(t+\tau)\,d\tau\end{align*}$$
    • What kind of overlap does a signal have with itself as you move it relative to itself?
  • Autocorrelation:$$\begin{align*}r_{xx}(0)=\int_{-\infty}^{\infty}x(\tau)\,x(\tau)\,d\tau\end{align*}$$
    • What kind of overlap does a signal have with itself not accounting for any time shift?
    • For real-valued signals, note that this is the same as the energy of the signal!
  • In all cases, correlation can be written as convolution using
    $$\begin{align*}r_{xy}(t)=x(-t)*y(t)\end{align*}$$
    but mathematically this leads to issues where an integrand may contain products of step functions facing the same way. One way to fix that is to find a way to write $$x(-t)$$ as a function $$x_m(t)$$ that uses right-facing steps then note that
    $$\begin{align*}r_{xy}(t)=x_m(t)*y(t)\end{align*}$$
  • None of the measures above give a great sense of how similar one signal is to another because they are all influenced by the scale of each signal. To get a dimensionless, normalized Measure of Correlation between two signals, you can calculate:
    $$ \displaystyle \mbox{MOC}_{xy}=\frac{\left(\max\left(r_{xy}(t)\right)\right)^2}{r_{xx}(0)\,r_{yy}(0)}$$
    which will be some value between 0 and 1. A 1 means that $$y(t)$$ is a shifted, scaled version of $$x(t)$$.
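A numerical sketch (Python/NumPy; the two pulses are made up) of the correlation ideas above: $$y(t)$$ is a shifted, scaled copy of $$x(t)$$, so the measure of correlation should come out to (approximately) 1.

    import numpy as np

    dt = 0.01
    t = np.arange(0, 20, dt)

    x = ((t >= 2) & (t < 4)).astype(float)        # unit pulse from t = 2 to 4
    y = 3 * ((t >= 9) & (t < 11)).astype(float)   # shifted, scaled copy of x

    # correlation function r_xy(t) = x(-t) * y(t): reverse x, then convolve
    r_xy = np.convolve(x[::-1], y) * dt

    r_xx0 = np.sum(x * x) * dt                    # autocorrelation at 0 (energy of x)
    r_yy0 = np.sum(y * y) * dt                    # energy of y

    print(np.max(r_xy)**2 / (r_xx0 * r_yy0))      # MOC, expect ~1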

Lecture 9 - 9/20

  • Proof that phasors really work for AC steady-state behavior given relationship between impulse response $$h(t)$$ and transfer function $$\mathbb{H}(j\omega)$$:
    • $$ \begin{align*} \mathbb{H}(j\omega)&=\int_{-\infty}^{\infty}h(t)\,e^{-j\omega t}\,dt& h(t)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathbb{H}(j\omega)\,e^{j\omega t}\,d\omega \end{align*} $$
    • These are actually the analysis and synthesis equations for the Fourier Transform!
  • Step and impulse response for general first-order differential equations
  • Derivation of the transfer function for a first-order differential equation:
    • $$ \begin{align*} h(t)&=\frac{1}{\tau}e^{-t/\tau}\,u(t) & \mathbb{H}(j\omega)&=\frac{\frac{1}{\tau}}{j\omega+\frac{1}{\tau}} \end{align*}$$
  • Scaled version of the above multiplying by $$\tau$$ and replacing $$1/\tau$$ with $$a$$:
    • $$ \begin{align*}h(t)&=e^{-at}\,u(t) & \mathbb{H}(j\omega)&=\frac{1}{j\omega+a} \end{align*}$$
  • Characteristic polynomial for general second-order differential equation:
    • $$ \begin{align*} \frac{d^2y(t)}{dt^2}+2\zeta \omega_n\frac{dy(t)}{dt}+\omega_n^2y(t)&=f(t)\\ s^2+2\zeta\omega_n s+\omega_n^2&=0\\ s&=-\zeta\omega_n\pm\omega_n\sqrt{\zeta^2-1} \end{align*} $$
      where $$\zeta$$ is the damping ratio and $$\omega_n$$ is the natural frequency.
    • Depending on value of $$\zeta$$, system can be:
      • Undamped: $$\zeta=0$$, $$s$$=purely imaginary complex conjugates, homogeneous response is sinusoidal
      • Underdamped: $$0<\zeta<1$$, $$s$$=complex conjugates with nonzero real and imaginary parts, homogeneous response is exponential sinusoid
      • Critically damped: $$\zeta=1$$, $$s$$=repeated real roots, homogeneous response is polynomial exponential
      • Overdamped: $$\zeta>1$$, $$s$$=two different purely real roots, homogeneous response is exponential
      • If $$\zeta$$ and $$\omega_n$$ are both positive, real part is negative meaning exponential decay
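A small sketch (Python/NumPy, illustrative values only) of the damping cases above, finding the roots of $$s^2+2\zeta\omega_n s+\omega_n^2=0$$ for several damping ratios.

    import numpy as np

    def roots(zeta, wn):
        # roots of s^2 + 2 zeta wn s + wn^2 = 0
        return np.roots([1, 2 * zeta * wn, wn**2])

    wn = 4.0                                   # illustrative natural frequency
    for zeta in [0.0, 0.5, 1.0, 2.0]:          # undamped, underdamped, critical, overdamped
        print(zeta, roots(zeta, wn))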

Lecture 10 - 9/24

  • Fourier Series representation can be used on signals that are periodic, bounded, have a finite number of local minima and maxima in a given period, and have a finite number of discontinuities in a given period.
  • Main formulas:
\( \begin{align*} x(t)&=\sum_{k=-\infty}^{\infty}\mathbb{X}[k]\,e^{jk\omega_0t} & \mathbb{X}[k]&=\frac{1}{T}\int_Tx(t)\,e^{-jk\omega_0t}\,dt \end{align*} \)
  • For periodic signals consisting of pure sinusoids,
\( \begin{align*} x(t)&=A\,\cos(p\omega_0t)+B\,\sin(q\omega_0t) & \mathbb{X}[k]&=\begin{cases} k=p & \frac{A}{2}\\ k=q & \frac{B}{j2}\\ k=-q & -\frac{B}{j2}\\ k=-p & \frac{A}{2} \end{cases} \end{align*} \)
  • Even signals have purely real Fourier Series coefficients; odd signals have purely imaginary Fourier Series coefficients
  • For real signals, $$\mathbb{X}[-k]=\mathbb{X}^*[k]$$
  • For signals with a finite number of non-zero Fourier Series coefficients, synthesis can be done by noting the real part translates to a cosine at twice that amplitude and the imaginary part translates to a sine at negative twice the amplitude:
\( \begin{align*} \mathbb{X}[k]&=\begin{cases} k=5 & -2 \\ k=4 & j3 \\ k=2 & 4-j5\\ k=-2 & 4+j5\\ k=-4 & -j3\\ k=-5 & -2 \end{cases} & x(t)&=-4\cos(5\omega_0t)-6\sin(4\omega_0t)+8\cos(2\omega_0t)+10\sin(2\omega_0t) \end{align*} \)
  • $$\mbox{sinc}(x)=\frac{\sin(\pi x)}{\pi x}$$ which means $$\mbox{sinc}(0)=1$$ and $$\mbox{sinc}(n)=0$$ for all integers $$n$$ other than 0.
  • The Fourier Series coefficients for a centered rectangular pulse with height $$A$$, width $$W$$, and period $$T$$ are given by $$\mathbb{X}[k]=A\frac{W}{T}\mbox{sinc}\left(k\frac{W}{T}\right)$$ (see the sketch at the end of this list)
  • See the public Box folder for ECE 280 and especially the MATLAB files in the lec10 folder for programs that build a signal and then use Fourier Series in concert with phasor analysis to build a filtered signal. More on that in the next lecture!
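A minimal sketch (Python/NumPy, separate from the MATLAB files mentioned above) of the rectangular-pulse coefficients and a truncated synthesis; the pulse height, width, and period are made up. Note that NumPy's np.sinc already uses the $$\frac{\sin(\pi x)}{\pi x}$$ convention.

    import numpy as np

    A, W, T = 1.0, 0.5, 2.0                    # illustrative height, width, period
    w0 = 2 * np.pi / T

    def X(k):
        # Fourier Series coefficients of a centered rectangular pulse train
        return A * (W / T) * np.sinc(k * W / T)

    # truncated synthesis using harmonics -N..N
    t = np.linspace(-T, T, 1001)
    N = 50
    x_hat = sum(X(k) * np.exp(1j * k * w0 * t) for k in range(-N, N + 1)).real

    # reconstruction is ~A for |t| < W/2 and ~0 elsewhere (with Gibbs ripple at the edges)
    print(x_hat[len(t) // 2], X(0))            # value at t = 0 (~A) and the DC coefficient A*W/T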

Lecture 11 - 9/27

  • Recap of the ideas behind LTI systems and system responses
  • Basic filter concepts
    • Transfer functions represent magnitude ratios and phase differences at different frequencies
    • Real filters are categorized by which band or bands of frequencies have magnitudes at or above $$1/\sqrt{2}$$ of the maximum. Frequencies where the magnitude is exactly $$1/\sqrt{2}$$ of the maximum are known as half-power frequencies
    • Main types of filter include low-pass, band-pass, high-pass, and band-reject
    • Ideal filters have a constant magnitude in the band, 0 magnitude outside the band, and 0 phase shift
    • Real filters have magnitudes and phases that are functions of $$\omega$$
    • The output from real and ideal filters can look quite different!
  • Basic Fourier Series properties:
    • Time shift: if $$x(t)\leftrightarrow \mathbb{X}[k]$$ then $$x(t-t_0)\leftrightarrow \mathbb{X}[k]e^{-jk\omega_0t_0}$$ (verified numerically in the sketch at the end of this list)
    • Frequency shift: if $$x(t)\leftrightarrow \mathbb{X}[k]$$ then $$e^{jk_0\omega_0t}x(t)\leftrightarrow \mathbb{X}[k-k_0]$$
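A numerical check (Python/NumPy, not from lecture) of the time shift property above: Fourier Series coefficients are computed from the analysis equation by a Riemann sum, for a made-up signal and shift.

    import numpy as np

    T = 2.0
    w0 = 2 * np.pi / T
    t0 = 0.3                                    # illustrative time shift
    t = np.arange(0, T, T / 4000)               # one period
    dt = t[1] - t[0]

    def coeffs(sig, kmax=5):
        # analysis equation evaluated numerically over one period
        return np.array([np.sum(sig * np.exp(-1j * k * w0 * t)) * dt / T
                         for k in range(-kmax, kmax + 1)])

    x  = np.cos(w0 * t) + 0.5 * np.sin(3 * w0 * t)                # illustrative signal
    xs = np.cos(w0 * (t - t0)) + 0.5 * np.sin(3 * w0 * (t - t0))  # x(t - t0)

    k = np.arange(-5, 6)
    print(np.max(np.abs(coeffs(xs) - coeffs(x) * np.exp(-1j * k * w0 * t0))))  # ~0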

Lecture 12 - 10/1

  • Vertically shifting a signal only changes the $$\mathbb{X}[0]$$ term; that is:
\( \begin{array}{cc} x(t) & \mathbb{X}[k]\\ z(t)=x(t)+M & \mathbb{Z}[k]=\begin{cases}k\neq 0, \mathbb{X}[k]\\k=0, \mathbb{X}[0]+M\end{cases} \end{array} \)
  • The frequency shift property is generally only useful for real signals if you have two shifts such that the complex exponentials in time combine to make a real signal. For example, a centered rectangular pulse train $$x(t)$$ of height $$A$$, width $$W$$, and period $$T$$ multiplied by a cosine with period $$T$$:
\( \begin{align*} \mathbb{X}[k]&=A\frac{W}{T}\mbox{sinc}\left(k\frac{W}{T}\right)\\ z(t)&=x(t)\,\cos\left(\frac{2\pi}{T}t\right)\\ z(t)&=x(t)\,\left(\frac{e^{j2\pi t/T}+e^{-j2\pi t/T}}{2}\right)\\ z(t)&=x(t)\,\left(\frac{e^{j\omega_0 t}+e^{-j\omega_0 t}}{2}\right)\\ \mathbb{Z}[k]&=\frac{1}{2}\mathbb{X}[k-1]+\frac{1}{2}\mathbb{X}[k+1] \end{align*} \)
  • Given the time shift property and knowing the Fourier Series coefficients for a centered rectangular pulse train of height $$A$$, width $$W$$, and period $$T$$ of $$\mathbb{X}[k]=A\frac{W}{T}\mbox{sinc}\left(k\frac{W}{T}\right)$$, you can find the Fourier Series coefficients for any periodic signal made up of a collection of constants by decomposing it into several scaled and shifted pulse trains and adding the representations together. There may be several ways to decompose a particular train!
  • The average power in a periodic signal can be found using either the time domain or frequency domain using Parseval's Theorem:
\( \begin{align*} P_{avg}&=\frac{1}{T}\int_T|x(t)|^2\,dt=\sum_{k=-\infty}^{\infty}|\mathbb{X}[k]|^2 \end{align*} \)
  • For periodic signals that are not piecewise constants, you may have to do the integral in the analysis equation.
    • The $$\mathbb{X}[0]$$ value may need to be found independently if the integral is singular at $$k=0$$; that $$\mathbb{X}[0]$$ value is always the average value of the signal over one period.
    • The result of the integration can often be simplified by recognizing how certain terms simplify at integer $$k$$ values. For example, anything that looks like $$e^{jk\omega_0T}$$ can be reduced to 1 when you realize that $$\omega_0=\frac{2\pi}{T}$$ meaning $$\omega_0T=2\pi$$ and $$e^{jk2\pi}=1$$ if $$k$$ is an integer. Similarly, $$e^{jk\omega_0T/2}=(-1)^k$$
    • In class - calculated and simplified a sawtooth and a decaying exponential train (see the sketch at the end of this list)
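A numerical sketch (Python/NumPy; the period and decay rate are made up) of the decaying-exponential-train calculation above: the analysis equation is evaluated numerically and compared with the closed form obtained after using $$e^{-jk\omega_0 T}=1$$, and the two sides of Parseval's Theorem are compared as well (with the sum truncated at $$|k|\le 200$$).

    import numpy as np

    T, a = 2.0, 1.5                               # illustrative period and decay rate
    w0 = 2 * np.pi / T
    t = np.arange(0, T, T / 20000)
    dt = t[1] - t[0]
    x = np.exp(-a * t)                            # one period of the decaying exponential train

    k = np.arange(-200, 201)
    Xk = np.array([np.sum(x * np.exp(-1j * kk * w0 * t)) * dt / T for kk in k])

    # closed form after using e^{-j k w0 T} = 1: X[k] = (1 - e^{-aT}) / (T (a + j k w0))
    Xk_closed = (1 - np.exp(-a * T)) / (T * (a + 1j * k * w0))
    print(np.max(np.abs(Xk - Xk_closed)))         # small

    # Parseval: time-domain average power vs sum of |X[k]|^2 (truncated sum)
    print(np.sum(np.abs(x)**2) * dt / T, np.sum(np.abs(Xk_closed)**2))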

Lecture 13 - 10/8

Test review

Lecture 14 - 10/11

Test 1

Lecture 15 - 10/15

  • Recap of Fourier Series Synthesis and Analysis equations
  • Introduction to Fourier Transform Synthesis and Analysis equations:
    \( \begin{align*} x(t)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}X(j\omega)\,e^{j\omega t}\,d\omega\\ X(j\omega)&=\int_{-\infty}^{\infty}x(t)\,e^{-j\omega t}\,dt \end{align*}\)
  • Fourier transform of a rectangular pulse
  • Fourier transform of a decaying exponential
  • Solving zero-initial-condition differential equation with exponential forcing function
  • Brute-force and cover-up method for finding partial fraction expansion coefficients
  • Fourier transform of complex exponential
    • Application to Fourier transform of a periodic signal
  • Proof that $$y(t)=x(t)*h(t)\leftrightarrow Y(j\omega)=X(j\omega)\cdot H(j\omega)$$
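A numerical sketch (Python/NumPy, not part of the lecture) of the convolution property above: $$x(t)=e^{-t}u(t)$$ and $$h(t)=e^{-2t}u(t)$$ are convolved numerically, and the Fourier transform of the result is compared with $$X(j\omega)H(j\omega)=\frac{1}{(j\omega+1)(j\omega+2)}$$ at a few frequencies.

    import numpy as np

    dt = 0.001
    t = np.arange(0, 30, dt)

    x = np.exp(-1.0 * t)                          # X(jw) = 1/(jw + 1)
    h = np.exp(-2.0 * t)                          # H(jw) = 1/(jw + 2)
    y = np.convolve(x, h)[:len(t)] * dt           # y(t) = x(t)*h(t)

    def ft(sig, w):
        # numerical Fourier transform (signal assumed 0 for t < 0 and decayed by t = 30)
        return np.sum(sig * np.exp(-1j * w * t)) * dt

    for w in [0.0, 1.0, 5.0]:
        print(ft(y, w), 1 / ((1j * w + 1) * (1j * w + 2)))   # should approximately agree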

Lecture 16 - 10/18

  • Fourier transform of a single rectangular pulse
  • Transformations between sin and sinc
    \(\begin{align*}\mbox{sinc}(x)&=\frac{\sin(\pi x)}{\pi x}\\\sin(\theta)&=\theta\,\mbox{sinc}\left(\frac{\theta}{\pi}\right)\end{align*}\)
  • Fourier transform of a periodic signal comes from the Fourier series of the signal:
    \(\begin{align*}e^{j\omega_x t}&\leftrightarrow 2\pi\delta(\omega-\omega_x)\\ x(t)=\sum_{k}X[k]\,e^{jk\omega_0t}&\leftrightarrow X(j\omega)=\sum_{k}X[k]\left(2\pi\delta(\omega-k\omega_0)\right)\end{align*}\)
  • Time shift and frequency shift properties
    \(\begin{align*} x(t-t_0)&\leftrightarrow e^{-j\omega t_0}X(j\omega)\\ e^{j\omega_0t}x(t)&\leftrightarrow X(j(\omega-\omega_0))\end{align*}\)
  • Convolution and multiplication:
    \(\begin{align*} z(t)=x(t)*h(t)&\leftrightarrow Z(j\omega)=X(j\omega)\,H(j\omega)\\ z(t)=x(t)\,y(t)&\leftrightarrow Z(j\omega)=\frac{1}{2\pi}\left(X(j\omega)*Y(j\omega)\right)\end{align*}\)
  • Fourier transform for underdamped second-order systems (The MOAT, checked numerically in the sketch at the end of this list):
    \(\begin{align*} e^{-at}\left(A\cos(\omega_xt)+B\sin(\omega_xt)\right)u(t)&\leftrightarrow \frac{A(j\omega+a)+B(\omega_x)}{(j\omega+a)^2+(\omega_x)^2}\end{align*}\)
  • Partial fraction expansion - check for overdamped versus underdamped (or critically damped)
  • Derivative and integral properties
    \(\begin{align*} \frac{dx(t)}{dt}&\leftrightarrow j\omega X(j\omega)\\ \int_{-\infty}^{t}x(\tau)~d\tau&\leftrightarrow \frac{1}{j\omega}X(j\omega)+\pi X(j0)\,\delta(\omega)\end{align*}\)
  • Step function:
    \(\begin{align*} u(t)&\leftrightarrow \frac{1}{j\omega}+\pi\,\delta(\omega)\end{align*}\)
  • For the integral property - note that if a signal that is non-zero only over a finite duration is written as a sum of singularity functions (steps, ramps, etc.), the impulse parts (the $$\pi X(j0)\,\delta(\omega)$$ terms) will all cancel out, so you only need to write the first part (inverse powers of $$j\omega$$).
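A numerical check (Python/NumPy, not from lecture) of the MOAT pair above, with made-up values of $$a$$, $$\omega_x$$, $$A$$, and $$B$$; the transform is approximated by a Riemann sum.

    import numpy as np

    dt = 0.0005
    t = np.arange(0, 40, dt)
    a, wx, A, B = 1.0, 3.0, 2.0, -1.5             # illustrative constants

    x = np.exp(-a * t) * (A * np.cos(wx * t) + B * np.sin(wx * t))   # times u(t)

    def ft(sig, w):
        # numerical Fourier transform (signal is 0 for t < 0)
        return np.sum(sig * np.exp(-1j * w * t)) * dt

    for w in [0.0, 2.0, 6.0]:
        X_moat = (A * (1j * w + a) + B * wx) / ((1j * w + a)**2 + wx**2)
        print(ft(x, w), X_moat)                   # should approximately agree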

Lecture 17 - 10/22

  • Frequency derivative property and results:
    \(\begin{align*} t\,x(t)&\leftrightarrow j\frac{dX(j\omega)}{d\omega}\\ te^{-at}u(t)&\leftrightarrow \frac{1}{\left(j\omega+a\right)^2}\\ t^ne^{-at}u(t)&\leftrightarrow \frac{n!}{\left(j\omega+a\right)^{n+1}}\end{align*}\)
  • Inverse transforms of critically damped systems
  • Inverse transforms if numerator order and denominator order are the same
  • Sampling and Reconstruction
    • Signal may be discretized vertically and sampled horizontally
    • If signal is not sampled fast enough, reconstruction may be at a lower-than-actual frequency
    • Mathematically, can look at a sampled signal as the original signal multiplied by an impulse train
    • Can get back original with a low pass filter IF the sampling frequency is at least twice the maximum frequency of the original signal and if the filter is configured correctly.
    • zyBook Section 6.13 explains all this really well!
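A tiny sketch (Python/NumPy, not from lecture) of the "lower-than-actual frequency" effect above: sampling a 9 Hz cosine at 10 samples per second (below the Nyquist rate) produces exactly the same samples as a 1 Hz cosine.

    import numpy as np

    fs = 10.0                                  # sampling frequency in Hz
    t = np.arange(20) / fs                     # 20 sample times

    x9 = np.cos(2 * np.pi * 9 * t)             # 9 Hz cosine, sampled too slowly
    x1 = np.cos(2 * np.pi * 1 * t)             # 1 Hz cosine

    print(np.allclose(x9, x1))                 # True: the 9 Hz signal aliases to 1 Hz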

Lecture 18 - 10/25

  • Impulse sampling
  • Nyquist criterion for recoverability
    • Aliasing issue if sampling frequency is too low
  • Low pass filtering to recover the central (baseband) copy of the spectrum
    • Ideal (brickwall) filtering versus realizable filtering with rolloff
  • Amplitude modulation
    • Synchronous demodulation with multiplication by carrier frequency followed by LPF
    • May include BPF to isolate channel
    • Very sensitive to any difference in frequency or phase between modulation and demodulation signals
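A minimal sketch (Python/NumPy, not from lecture) of synchronous demodulation: a DSB signal is multiplied by the carrier and then low-pass filtered. An ideal (brickwall) FFT-based filter stands in for a real low-pass filter, and all frequencies and the cutoff are made up.

    import numpy as np

    fs = 2000.0                               # simulation sample rate
    t = np.arange(0, 1, 1 / fs)
    fc, fm = 100.0, 3.0                       # carrier and message frequencies

    m = np.cos(2 * np.pi * fm * t)            # message
    y = m * np.cos(2 * np.pi * fc * t)        # DSB modulated signal

    v = y * np.cos(2 * np.pi * fc * t)        # demodulate: v = m/2 + (m/2) cos(2 pi (2 fc) t)
    V = np.fft.fft(v)
    f = np.fft.fftfreq(len(t), 1 / fs)
    V[np.abs(f) > 20] = 0                     # crude ideal (brickwall) low-pass filter
    m_hat = 2 * np.real(np.fft.ifft(V))       # factor of 2 undoes the 1/2 from demodulation

    print(np.max(np.abs(m_hat - m)))          # small: message recovered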

Lecture 19 - 10/29

  • AM = double sideband modulation; includes a copy of the carrier signal
  • Conventional AM adds an offset to the message to make sure the message (plus offset) is always positive - this allows for asynchronous demodulation with a band pass filter followed by an envelope detector; resistor and capacitor values are chosen so the detector output follows the envelope of the signal

Lecture 20 - 11/1

  • Recap of zyBooks pages on sampling and amplitude modulation

Lecture 21 - 11/5

  • Recap of LTI systems and definitions of impulse and step responses
  • Recap of input/output relationship of a generic exponential and definition of Laplace Transform and Inverse Laplace Transform
  • Laplace Transforms of impulses and steps, the latter leading to the necessity of a Region of Convergence for the integrals
  • Laplace transform for exponentials, trig functions, exponential trig functions
  • Inverse Laplace Transforms based on Laplace Transform and ROC

Lecture 22 - 11/8

  • System characteristics based on $$H(s)$$ and ROC (see the sketch at the end of this list)
    • Stable if $$\sigma=0$$ is included in ROC
    • Causal if ROC is everything or the right-half-plane AND there are no time shifts to negative $$t$$ values
  • Signal characteristics based on ROC:
    • The ROC is bounded by poles
    • If ROC is everything, signal is of limited duration
    • If ROC is left or right-sided, the signal is left or right-sided
    • If the ROC is bounded, the signal is two-sided
  • Laplace Properties
    • Time shift and frequency shift
    • Time derivative and frequency derivative
  • Laplace transform of semi-periodic signals
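A small sketch (Python/NumPy; the transfer function is made up) of the stability check above for a causal system: find the poles of $$H(s)$$ from its denominator and check that they all have negative real parts, so that the ROC (to the right of the right-most pole) includes $$\sigma=0$$.

    import numpy as np

    # illustrative H(s) = (s + 3) / (s^2 + 4 s + 13)
    poles = np.roots([1, 4, 13])               # poles of H(s)
    print(poles)                               # -2 +/- 3j
    print(bool(np.all(poles.real < 0)))        # True -> causal system is stable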

Lecture 23 - 11/12

  • Poles and zeros may be impacted by time shifts
  • For time shifts, do inverse Laplace first then put in the time shift
  • For partial fractions, numerator must be lower order than denominator
  • For invertible systems there exists some $$h^{inv}(t)$$ such that $$h(t)*h^{inv}(t)=\delta(t)$$ where $$*$$ represents convolution here
  • You can get a differential equation that models a system if you know $$h(t)$$ by getting $$H(s)$$ and using the time derivative property
  • For Laplace transforms, if there are some time shifts you need to figure out how to get all the times shifted the same amount - this may bring in some extra terms
  • The roots of $$(s+a)^2+(\omega)^2=0$$ are $$s=-a\pm j\omega$$
  • The Laplace Transform we have been looking at is the Bilateral Laplace Transform - it is useful when we know everything about the system for all time
    • Next lecture we will cover the Unilateral Laplace Transform which needs to know a value at some time and then everything about the system after that.

Lecture 24 - 11/15

Test 2

Lecture 25 - 11/19

  • Unilateral Laplace Transforms
    • Properties different from BLT include time shift, time reversal, and derivative
    • Derivative property allows for solving differential equations with non-zero initial conditions
  • Mechanical Analogies
  • Initial and Final Value Theorems
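A small worked example (not from lecture) of the unilateral Laplace transform derivative property handling a non-zero initial condition: solve $$\tau y'(t)+y(t)=u(t)$$ with $$y(0^-)=y_0$$.

\( \begin{align*} \tau\left(sY(s)-y(0^-)\right)+Y(s)&=\frac{1}{s}\\ Y(s)&=\frac{1}{s(\tau s+1)}+\frac{\tau y_0}{\tau s+1}=\frac{1}{s}-\frac{\tau}{\tau s+1}+\frac{\tau y_0}{\tau s+1}\\ y(t)&=\left(1+\left(y_0-1\right)e^{-t/\tau}\right)u(t) \end{align*} \)

The Initial and Final Value Theorems give $$\lim_{s\rightarrow\infty}sY(s)=y_0$$ and $$\lim_{s\rightarrow 0}sY(s)=1$$, matching $$y(0^+)$$ and $$y(t\rightarrow\infty)$$ from the time-domain solution (and matching the first-order exponential solution from Lecture 3).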


Lecture 26 - 11/22

  • Office Hours

Lecture 27 - 11/30

  • Element models for use in switched circuits.
  • Linearization about a fixed point.
  • Moose.
  • Wolves.

Lecture 28 - 12/2

  • Root-locus overview.
  • Review.