ECE 280/Concept List/F21

Revision as of 23:15, 21 September 2021

This page will be a summary of the topics covered during lectures. It is not meant to be a replacement for taking good notes!

Lecture 1

  • Class logistics and various resources on Sakai (sakai.duke.edu)
  • Signals: "information about how one parameter changes depending on another parameter" - zyBooks
  • Systems: "processes that create output signals in response to input signals" paraphrased from zyBooks
  • Signal classifications
    • Continuous versus discrete
  • Analog versus digital and/or quantized
  • Periodic
    • Generally $$x(t)=x(t+kT)$$ for all integers k (i.e. $$x(t)=x(t+kT), k\in \mathbb{Z}$$). The period $$T$$ (sometimes called the fundamental period $$T_0$$) is the smallest positive value for which this relation is true
    • A periodic signal can be defined as an infinite sum of shifted versions of one period of the signal: $$x(t)=\sum_{n=-\infty}^{\infty}g(t-nT)$$ where $$g(t)$$ is only possibly nonzero within one particular period of the signal and 0 outside of that period (see the short numerical sketch after this list)
  • Energy, power, or neither (more on this on Friday)
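
As a quick illustration of the sum-of-shifted-copies construction above, here is a minimal Python sketch (not from the lecture itself); the triangular one-period generator $$g(t)$$, the period $$T=2$$, and the finite number of shift terms are arbitrary choices.

import numpy as np

T = 2.0                                   # assumed fundamental period (example value)

def g(t):
    """One period of the signal: a triangle on [0, T), zero elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < T), 1 - np.abs(2 * t / T - 1), 0.0)

def x(t, n_terms=50):
    """Approximate x(t) = sum over n of g(t - n T), truncated to a finite number of shifts."""
    t = np.asarray(t, dtype=float)
    return sum(g(t - n * T) for n in range(-n_terms, n_terms + 1))

t = np.linspace(-3, 3, 7)
print(np.allclose(x(t), x(t + T)))        # True: x(t) repeats with period T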

Lecture 2

  • More on periodic signals
    • The sum / difference of two periodic signals will be periodic if their periods are commensurable (i.e. if the ratio of their periods is a rational number) or if any aperiodic components are removed through addition or subtraction.
    • The period of a sum of periodic signals will be at most the least common multiple of the component signal periods; the actual period could be less than this depending on interference
    • The product of two signals with periodic components will have elements at frequencies equal to the sums and differences of the frequencies in the first signal and the second signal. If the periods represented by those components are commensurable, the signal will be periodic, and again the upper bound on the period will be the least common multiple of the component periods.
  • Even and Odd
    • Purely even signals: $$x(t)=x(-t)$$ (even-powered polynomials, cos)
    • Purely odd: $$x(t)=-x(-t)$$ (odd-powered polynomials, sin)
    • Even component: $$\mathcal{Ev}\{x(t)\}=x_e(t)=\frac{x(t)+x(-t)}{2}$$
    • Odd component: $$\mathcal{Od}\{x(t)\}=x_o(t)=\frac{x(t)-x(-t)}{2}$$
    • $$x_e(t)+x_o(t)=x(t)$$
    • The even and odd components of $$x(t)=e^{at}$$ end up being $$\cosh(at)$$ and $$\sinh(at)$$
    • The even and odd components of $$x(t)=e^{j\omega t}$$ end up being $$\cos(\omega t)$$ and $$j\,\sin(\omega t)$$
  • Singularity functions - see Singularity_Functions and specifically Singularity_Functions#Accumulated_Differences
    • Unit step: $$u(t)=\begin{cases}1, t>0\\0, t<0\end{cases}$$
    • Unit ramp: $$r(t)=\int_{-\infty}^{t}u(\tau)\,d\tau=\begin{cases}t, t>0\\0, t<0\end{cases}$$
  • Signal transformations
    • $$z(t)=K\,x(\pm a(t-t_0))$$ with
    • $$K$$: vertical scaling factor
    • $$\pm a$$: time scaling (with reversal if negative); $$|a|>1$$ speeds things up and $$|a|<1$$ slows down
    • $$t_0$$: time shift
    • Get into the form above first; for example, rewrite $$3\,x\left(\frac{t}{2}+4\right)$$ as $$3\,x\left(\frac{1}{2}(t+8)\right)$$ first
    • Do flips and scalings first, then shift the flipped and scaled versions.
  • Energy and Power
    • Energy signals have a finite amount of energy: $$E_{\infty}=\int_{-\infty}^{\infty}|x(\tau)|^2\,d\tau<\infty$$
      • Examples: Bounded finite duration signals; exponential decay
    • Power signals have an infinite amount of energy but a finite average power over all time: $$P_{\infty}=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}|x(\tau)|^2\,d\tau=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{T}|x(\tau)|^2\,d\tau<\infty$$ and $$E_{\infty}=\infty$$
      • Examples: Bounded infinite duration signals, including periodic signals
      • For periodic signals, only need one period (that is, remove the limit and use whatever period definition you want): $$P_{\infty}=\frac{1}{T}\int_{T}|x(\tau)|^2\,d\tau$$
    • If both the energy and the overall average power are infinite, the signal is neither an energy signal nor a power signal.
      • Examples: Certain unbounded signals such as $$x(t)=e^t$$
    • Vertical scaling by $$K$$ changes the energy or power by a factor of $$K^2$$
    • Time scaling by $$a$$ changes the energy by a factor of $$\frac{1}{|a|}$$; the average power of a power signal is unchanged by time scaling
    • Neither time shift nor reversal impacts energy or power, so you can shift and flip signal components to more mathematically convenient locations. (The even/odd decomposition and these energy scaling rules are checked numerically in the short sketch after this list.)
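
The sketch below (not from the lecture itself) numerically verifies the cosh/sinh decomposition of $$e^{at}$$ and the $$K^2$$ and $$\frac{1}{|a|}$$ energy-scaling rules; the values $$a=0.5$$, $$K=3$$, and the integration grids are arbitrary choices.

import numpy as np

t = np.linspace(-5, 5, 10001)
a = 0.5                                   # example exponent

# Even/odd decomposition of x(t) = e^{a t}
x = np.exp(a * t)
x_e = (x + np.exp(-a * t)) / 2            # (x(t) + x(-t)) / 2
x_o = (x - np.exp(-a * t)) / 2            # (x(t) - x(-t)) / 2
print(np.allclose(x_e, np.cosh(a * t)))   # True
print(np.allclose(x_o, np.sinh(a * t)))   # True
print(np.allclose(x_e + x_o, x))          # True: the components rebuild x(t)

# Energy of a decaying exponential y(t) = e^{-t} u(t): analytically 1/2
ty = np.linspace(0, 50, 200001)
y = np.exp(-ty)
E = np.trapz(y**2, ty)
print(round(E, 4))                                       # ~0.5

K = 3.0
print(round(np.trapz((K * y)**2, ty) / E, 3))            # ~9.0: scaling by K multiplies energy by K^2
print(round(np.trapz(np.exp(-2 * ty)**2, ty) / E, 3))    # ~0.5: y(2t) has 1/|2| of the energy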

Lecture 3

  • Unit step and the $$u(t)\approx \frac{1}{2\epsilon}\left(r(t+\epsilon)-r(t-\epsilon)\right)$$ approximation
  • Unit ramp
  • Rectangular pulse function
  • Definition of the impulse function: Area of 1 at time 0; 0 elsewhere
    • Limit as $$\epsilon$$ goes to 0 of the derivative of the unit step approximation
    • Sifting property - figure out when $$\delta$$ fires off, see if that argument happens or if there are restrictions based on integral limits
  • Integrals with unit steps - figure out when integrand might be non-zero and work from there
  • See Singularity_Functions and especially Singularity_Functions#General_Simplification_of_Integrals and Singularity_Functions#Convolution_Integral_Simplification_with_Step_Function_Product_as_Part_of_Integrand
  • Exponentials and sketches
    • Solution to $$\tau\frac{dv(t)}{dt}+v(t)=v_f$$ given $$v(t_0)=v_i$$ is $$v(t)=v_f+\left(v_i-v_f\right)\exp\left(-\frac{t-t_0}{\tau}\right)$$
    • Went over how to make an accurate sketch using approximate values and slopes (a small numerical illustration of the landmark values follows this list)
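
Here is a small numerical illustration (not from the lecture itself) of the first-order solution above: it evaluates $$v(t)$$ at $$t_0+\tau$$, $$t_0+2\tau$$, ..., to recover the usual sketching landmarks. The values $$v_i=0$$, $$v_f=5$$, $$\tau=1\,\mbox{ms}$$, and $$t_0=0$$ are arbitrary choices.

import numpy as np

v_i, v_f = 0.0, 5.0      # initial and final values (example values)
tau, t0 = 1e-3, 0.0      # time constant and starting time (example values)

def v(t):
    return v_f + (v_i - v_f) * np.exp(-(t - t0) / tau)

for k in range(1, 6):
    frac = (v(t0 + k * tau) - v_i) / (v_f - v_i)
    print(f"t0 + {k} tau: {100 * frac:5.1f}% of the way from v_i to v_f")
# Prints roughly 63.2%, 86.5%, 95.0%, 98.2%, 99.3% -- the usual sketching landmarks.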

Lecture 4

  • Be sure to check out Campuswire! (access code: 5183)
  • Systems will generally be represented by blocks with $$H$$ or $$G$$ in them; those represent the transfer functions of the system
  • Some common system connections:
    • Cascade: one system after another; equivalent system transfer function is the product of the individual transfer functions
    • Parallel: outputs of two systems are added together; equivalent system transfer function is the sum of the individual transfer functions
    • Feedback: the output of the forward system is fed back through a second system and combined with the input; for negative feedback, the equivalent system transfer function is $$\frac{G}{1+GH}$$ where $$G$$ is the forward path and $$H$$ is the feedback path
  • System properties
    • Linearity (linear versus nonlinear)
      • Common nonlinearities include additive constants, non-unity powers of signals
    • Time invariance (time invariant versus time-varying)
      • Common time-varying elements include $$t$$ outside of arguments of signals, time reversals, or time scales other than 1
    • Stability (stable versus unstable)
      • Common instabilities involve inverses, integrals, some trig functions, and derivatives if you are including discontinuities
    • Memoryless (memoryless versus having memory)
      • Memoryless systems can *only* depend on the input "right now"; some debate about derivatives
    • Causality (causal versus non-causal)
      • Real systems with time $$t$$ as the independent variable are causal; systems with location as the independent variable may be non-causal
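
The linearity and time-invariance properties above can be probed numerically. The sketch below (not from the lecture itself) applies a superposition check and a shift check to two example systems, $$y(t)=x^2(t)$$ and $$y(t)=t\,x(t)$$; a single numerical test like this can only disprove a property, never prove it, and the test signals and sample shift are arbitrary choices.

import numpy as np

t = np.linspace(-5, 5, 1001)

def squarer(x):            # y(t) = x(t)^2  -> nonlinear, time invariant
    return x**2

def time_gain(x):          # y(t) = t x(t)  -> linear, time varying
    return t * x

def passes_linearity(system, x1, x2, a=2.0, b=-3.0):
    """One superposition check: a x1 + b x2 should map to a y1 + b y2."""
    return np.allclose(system(a * x1 + b * x2), a * system(x1) + b * system(x2))

def passes_time_invariance(system, x, shift=50):
    """Shift the input by `shift` samples (circularly) and compare with shifting the output."""
    return np.allclose(system(np.roll(x, shift)), np.roll(system(x), shift))

x1 = np.exp(-t**2)                                 # test signals (arbitrary choices)
x2 = np.cos(2 * np.pi * t) * np.exp(-np.abs(t))

print(passes_linearity(squarer, x1, x2), passes_time_invariance(squarer, x1))      # False True
print(passes_linearity(time_gain, x1, x2), passes_time_invariance(time_gain, x1))  # True  False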

Lecture 5

  • Revisited properties of linearity, time-invariance, stability, memorylessness, and causality
  • Introduced invertibility - whether for some system $$x(t)\,\longrightarrow\,y(t)$$ there exists some inverse system that would allow $$y(t)\,\longrightarrow\,x(t)$$ for all $$x(t)$$.
  • Quick review of frequency analysis using impedance and division to get a transfer function
    • Reminder of translating between time and frequency domain with $$\frac{d}{dt}\leftrightarrows j\omega$$
    • Discussion about "illegal" circuit conditions (instant voltage change across capacitor or instant current change through inductor) and "weird" circuit conditions (voltage in parallel with an inductor or current source in series with a capacitor)
    • ECE 110 uses $$e^{j\omega t}$$ as the model signal for frequency analysis; we will eventually use $$e^{st}$$ where $$s=\sigma+j\omega$$
  • Introduction to LTI system analysis:
    • Define the step and impulse functions as given above
    • Define the impulse response $$h(t)$$ as the response to an impulse $$\delta(t)$$; that is, $$\delta(t)\,\longrightarrow\,h(t)$$
    • This will be mathematically very useful but physically impossible to measure, though we may be able to approximate it using a high-amplitude, short-duration rectangular or other pulse with an area of 1.
    • Define the step response $$y_{\mbox{step}}(t)$$ as the response to a step $$u(t)$$; that is, $$u(t)\,\longrightarrow\,y_{\mbox{step}}(t)$$
    • This will be more likely to be physically obtainable but mathematically not quite as useful. Fortunately...
    • The step and impulse responses are related in the same ways as the step and impulse:
      $$\begin{align*} \delta(t)&=\frac{d}{dt}u(t) & u(t)&=\int_{-\infty}^t\delta(\tau)\,d\tau\\ h(t)&=\frac{d}{dt}y_{\mbox{step}}(t) & y_{\mbox{step}}(t)&=\int_{-\infty}^th(\tau)\,d\tau \end{align*}$$
    • Given those definitions, and assuming a linear-time invariant system:
      $$\begin{align*} \mbox{Definition}&~ & \delta(t)\,&\longrightarrow\,h(t)\\ \mbox{Time Invariant}&~ & \delta(t-\tau)\,&\longrightarrow\,h(t-\tau)\\ \mbox{Linearity (Homogeneity)}&~ & x(\tau)\,\delta(t-\tau)\,&\longrightarrow\,x(\tau)\,h(t-\tau)\\ \mbox{Linearity (Superposition)}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau\,&\longrightarrow\,\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \mbox{Sifting}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau=x(t)\,&\longrightarrow\,y(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \end{align*}$$
    • Punchline: For an LTI system with impulse response $$h(t)$$ and input signal $$x(t)$$ the output signal is given by the convolution integral:
      $$ \begin{align*} y(t)=x(t)*h(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau \end{align*}$$
      and through a transformation of variables can also be given by:
      $$ \begin{align*} y(t)=h(t)*x(t)=\int_{-\infty}^{\infty}x(t-\tau)\,h(\tau)\,d\tau \end{align*}$$
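
Here is a minimal numerical sketch (not from the lecture itself) of the convolution integral above, approximated on a uniform grid with np.convolve scaled by the sample spacing; the rectangular $$x(t)$$ and exponential $$h(t)$$ are arbitrary choices, and the closed-form answer used for comparison follows from direct integration.

import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)

x = ((t >= 0) & (t < 1)).astype(float)      # x(t): unit-height pulse on [0, 1)
h = np.exp(-t)                              # h(t) = e^{-t} u(t)

y = np.convolve(x, h)[:len(t)] * dt         # Riemann-sum approximation of the convolution integral

# Closed form for this pair: y(t) = 1 - e^{-t} for 0 <= t < 1, and e^{-(t-1)} - e^{-t} for t >= 1
y_exact = np.where(t < 1, 1 - np.exp(-t), np.exp(-(t - 1)) - np.exp(-t))
print(np.max(np.abs(y - y_exact)))          # small error, on the order of dt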

Lecture 6

  • Recap of derivation of convolution
  • Time-domain and phasor-based derivation of the model equation for the capacitor voltage of an RC circuit (a numerical sketch of the resulting step and impulse responses appears after this list)
    • Finding step response for an RC circuit
    • Taking derivative of step response to get impulse response of an RC circuit
  • Reminder that the response to a combination of shifted and scaled steps is a combination of shifted and scaled step responses.
  • Determination of response to a decaying exponential
  • Basic convolution properties - see Convolution Shortcuts
  • Example using shortcuts
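
As noted above, here is a short sketch (not from the lecture itself) of the RC step and impulse responses; it assumes scipy is available, uses example values $$R=1\,\mbox{k}\Omega$$ and $$C=1\,\mu\mbox{F}$$ (so $$\tau=1\,\mbox{ms}$$), and checks that the impulse response matches the derivative of the step response.

import numpy as np
from scipy import signal

R, C = 1e3, 1e-6                         # example component values
tau = R * C                              # time constant, 1 ms here

# First-order RC low-pass: tau dv/dt + v = v_in  ->  H(s) = 1 / (tau s + 1)
rc_sys = signal.lti([1.0], [tau, 1.0])

t = np.linspace(0, 10 * tau, 2001)
t_step, y_step = rc_sys.step(T=t)        # step response: 1 - e^{-t/tau}
t_imp, y_imp = rc_sys.impulse(T=t)       # impulse response: (1/tau) e^{-t/tau}

# The impulse response should match the derivative of the step response
h_from_step = np.gradient(y_step, t_step)
print(np.max(np.abs(h_from_step[1:-1] - y_imp[1:-1])) < 1e-2 * np.max(y_imp))   # True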

Lecture 7

  • Graphical convolution (see ECE_280/Examples/Convolution)
  • System properties based on $$h(t)$$
    • Must be LTI since $$h(t)$$ is defined
    • Stable if $$\int_{-\infty}^{\infty}|h(\tau)|\,d\tau$$ is finite
    • Causal if $$h(t)=0$$ for all $$t<0$$
    • Invertible if there exists an $$h^{inv}(t)$$ such that $$h(t)*h^{inv}(t)=\delta(t)$$ where * is convolution.
  • Maple demonstration of Convolution - see files at Public ECE 280 Box Folder specifically in the lec07 folder
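
The stability and causality tests above are easy to check numerically on a sampled impulse response. The sketch below (not from the lecture itself) uses $$h(t)=e^{-t}u(t)$$ and an arbitrary time grid.

import numpy as np

dt = 1e-3
t = np.arange(-5, 20, dt)
h = np.where(t >= 0, np.exp(-t), 0.0)       # h(t) = e^{-t} u(t)

abs_area = np.sum(np.abs(h)) * dt           # approximates the integral of |h(t)| (about 1 here, so stable)
causal = np.allclose(h[t < 0], 0.0)         # h(t) = 0 for all t < 0, so causal
print(f"integral of |h| ~ {abs_area:.3f}; causal: {causal}")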

Lecture 8

Pre-script: in all of the equations below we are assuming real-valued signals; if the signals are complex, one of the terms in the integrand is generally taken as a complex conjugate.

  • Correlation Function:$$\begin{align*}r_{xy}(t)=\int_{-\infty}^{\infty}x(\tau)\,y(t+\tau)\,d\tau\end{align*}$$ -- note that some references switch the arguments in the integral which results in a mirrored version of the function
    • What kind of overlap do two signals have as you move one of the signals relative to the other?
  • Correlation: $$\begin{align*}r_{xy}(0)=\int_{-\infty}^{\infty}x(\tau)\,y(\tau)\,d\tau\end{align*}$$
    • What kind of overlap do the two signals have not accounting for any time shift? (In the correlation function above, the independent variable indicates where the origin of $$x(t)$$ is relative to the origin of $$y(t)$$ in determining the area of overlap.)
  • Autocorrelation Function:$$\begin{align*}r_{xx}(t)=\int_{-\infty}^{\infty}x(\tau)\,x(t+\tau)\,d\tau\end{align*}$$
    • What kind of overlap does a signal have with itself as you move it relative to itself?
  • Autocorrelation:$$\begin{align*}r_{xx}(0)=\int_{-\infty}^{\infty}x(\tau)\,x(\tau)\,d\tau\end{align*}$$
    • What kind of overlap does a signal have with itself not accounting for any time shift?
    • For real-valued signals, note that this is the same as the energy of the signal!
  • In all cases, correlation can be written as convolution using
    $$\begin{align*}r_{xy}(t)=x(-t)*y(t)\end{align*}$$
    but mathematically this leads to issues where an integrand may contain products of step functions facing the same way. One way to fix that is to find a way to write $$x(-t)$$ as a function $$x_m(t)$$ that uses right-facing steps then note that
    $$\begin{align*}r_{xy}(t)=x_m(t)*y(t)\end{align*}$$
  • None of the measures above give a great sense of how similar one signal is to another because they are all influenced by the scale of each signal. To get a dimensionless, normalized Measure of Correlation between two signals, you can calculate:
    $$ \displaystyle \mbox{MOC}_{xy}=\frac{\left(\max\left(r_{xy}(t)\right)\right)^2}{r_{xx}(0)\,r_{yy}(0)}$$
    which will be some value between 0 and 1. A 1 means that $$y(t)$$ is a shifted, scaled version of $$x(t)$$.
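
Here is a numerical sketch (not from the lecture itself) of the correlation function and the MOC above; the pulse shapes, the scale factor of 2, and the 1-second shift are arbitrary choices, and np.correlate may return a mirrored version of $$r_{xy}(t)$$ depending on argument order, which does not affect the maximum used in the MOC.

import numpy as np

dt = 1e-2
t = np.arange(0, 10, dt)

x = np.where((t >= 1) & (t < 3), 1.0, 0.0)          # x(t): unit pulse on [1, 3)
y = 2.0 * np.where((t >= 2) & (t < 4), 1.0, 0.0)    # y(t) = 2 x(t - 1)

r_xy = np.correlate(x, y, mode="full") * dt         # correlation function samples (possibly mirrored)
r_xx0 = np.sum(x * x) * dt                          # r_xx(0): energy of x
r_yy0 = np.sum(y * y) * dt                          # r_yy(0): energy of y

moc = np.max(r_xy) ** 2 / (r_xx0 * r_yy0)
print(round(moc, 3))                                # ~1.0 since y is a shifted, scaled copy of x

# An unrelated signal gives an MOC much closer to 0
z = np.sin(2 * np.pi * 5 * t) * np.where(t < 2, 1.0, 0.0)
r_xz = np.correlate(x, z, mode="full") * dt
print(round(np.max(r_xz) ** 2 / (r_xx0 * np.sum(z * z) * dt), 3))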

Lecture 9

  • Proof that phasors really work for AC steady-state behavior, given the relationship between the impulse response $$h(t)$$ and the transfer function $$\mathbb{H}(j\omega)$$:
    • $$ \begin{align*} \mathbb{H}(j\omega)&=\int_{-\infty}^{\infty}h(t)\,e^{-j\omega t}\,dt& h(t)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathbb{H}(j\omega)\,e^{j\omega t}\,d\omega \end{align*} $$
    • These are actually the analysis and synthesis equations for the Fourier Transform!
  • Step and impulse response for general first-order differential equations
  • Derivation of the transfer function for a first-order differential equation:
    • $$ \begin{align*} h(t)&=\frac{1}{\tau}e^{-t/\tau}\,u(t) & \mathbb{H}(j\omega)&=\frac{\frac{1}{\tau}}{j\omega+\frac{1}{\tau}} \end{align*} $$
  • Scaled version of the above, multiplying by $$\tau$$ and replacing $$1/\tau$$ with $$a$$:
    • $$ \begin{align*} h(t)&=e^{-at}\,u(t) & \mathbb{H}(j\omega)&=\frac{1}{j\omega+a} \end{align*} $$
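
As a numerical check (not from the lecture itself) of the first-order pair above, the sketch below evaluates $$\mathbb{H}(j\omega)=\int h(t)\,e^{-j\omega t}\,dt$$ by numerical integration for $$h(t)=\frac{1}{\tau}e^{-t/\tau}u(t)$$ and compares it with $$\frac{\frac{1}{\tau}}{j\omega+\frac{1}{\tau}}$$; $$\tau=2\,\mbox{ms}$$ and the test frequencies are arbitrary choices.

import numpy as np

tau = 2e-3                              # example time constant
dt = 1e-6
t = np.arange(0, 20 * tau, dt)          # h(t) has decayed to ~0 well before 20 tau
h = (1 / tau) * np.exp(-t / tau)        # h(t) = (1/tau) e^{-t/tau} for t >= 0

w = 2 * np.pi * np.array([10.0, 100.0, 1000.0])       # test frequencies in rad/s
H_numeric = np.array([np.trapz(h * np.exp(-1j * wk * t), t) for wk in w])
H_formula = (1 / tau) / (1j * w + 1 / tau)

print(np.max(np.abs(H_numeric - H_formula)))          # small truncation/sampling error
print(np.round(np.abs(H_formula), 4))                 # magnitude rolls off as the frequency increases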