EGR 103/Concept List F18

From PrattWiki

This page will be used to keep track of the commands and major concepts for each lab in EGR 103.

Lectures

Lecture 1 - Introduction

  • Class web page: EGR 103L; assignments, contact info, readings, etc - see slides on Errata/Notes page
  • Sakai page: Sakai 103L page; grades, surveys and tests, some assignment submissions
  • Piazza page: Piazza 103L page; message board for questions

Lecture 2 - Programs and Programming

  • To play with Python:
    • Install it on your machine or a public machine: Download
    • Use a Duke container with Spyder: Containers - note - these do not have access to your Duke files. More on how to connect that later.
  • Quick tour of Python
    • Editing window, variable explorer, and console
    • Variable explorer is your friend
  • From Dewey - programming languages typically have the ability to work with input, output, math, conditional execution, and repetition
  • Hilton and Bracy Seven Steps
  • Class work developing algorithm for program to determine if a number is prime

Lecture 3

  • 7 Steps for finding prime numbers
  • prime program -- includes intro to input(), if tree, for loop, print(), remainder %
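  • A minimal sketch (illustrative, not the exact class code) of a prime checker using input(), an if tree, a for loop, print(), and the remainder operator:
# prime_check.py - illustrative sketch, not the in-class program
num = int(input("Enter an integer to test: "))

if num < 2:
    print(num, "is not prime")
else:
    is_prime = True
    for divisor in range(2, num):
        if num % divisor == 0:  # a remainder of 0 means divisor evenly divides num
            is_prime = False
            break
    if is_prime:
        print(num, "is prime")
    else:
        print(num, "is not prime")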

Lecture 4

  • Function definitions
    • Positional and keyword arguments (kwargs)
    • Default values
    • Returning tuples -- can be received by a single name or unpacked into a tuple of the right size (see the sketch below)
  • Aquarium
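  • A minimal sketch (illustrative names, not the class aquarium code) of default values, keyword arguments, and tuple returns:
# function_demo.py - illustrative sketch of the Lecture 4 ideas
def rect_info(length, width=1.0):
    """Return the perimeter and area of a rectangle; width defaults to 1.0."""
    perimeter = 2 * (length + width)
    area = length * width
    return perimeter, area  # returned as a tuple

both = rect_info(3.0)  # received by a single name: the tuple (8.0, 3.0)
p, a = rect_info(3.0, width=2.0)  # unpacked into a tuple of the right size; keyword argument
print(both, p, a)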

Lecture 5

  • print() and format specifications: link
    • Main components are width, precision, and type; sometimes initial +
    • e and f can print integers as floats; d cannot print floats
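    • A few illustrative examples (values and specifications chosen only for demonstration):
# format specification examples - illustrative, not from class
print('{:8.3f}'.format(3.14159))  # '   3.142' - width 8, precision 3, type f
print('{:+.2e}'.format(1500))     # '+1.50e+03' - initial +, e prints an integer as a float
print('{:5d}'.format(42))         # '   42' - width 5, type d
# print('{:d}'.format(3.7))       # error - d cannot print floats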
  • relational and logical operators - how they work on item, string, list, tuple
  • if trees
  • while loops
  • for loops
    • NOTE: in the case of
for y in x:
If the entries of x are changed in place (that is, if x is changed in a way that its location in memory remains unchanged), y will iterate over the changed entries. If x is changed so that a copy has to be made first (for example, it is set equal to a slice of itself), then y will iterate over the original entries. Note the differences between:
x = [1, 2, 3, 4, 5]
for y in x:
    print(x, y)
    x[4] = [0]
    print(x, y)

and:

x = [1, 2, 3, 4, 5]
for y in x:
    print(x, y)
    x = x[:-1]
    print(x, y)
  • counting characters program
# letter_typing.py from class:
def check_letters(phrase):
    vowels = "aeiou"
    numbers = "0123456789"
    consonants = "bcdfghjklmnpqrstvwxyz"
    # vowels, numbers, consonants, and other in that order
    count = [0, 0, 0, 0]
    
    for letter in phrase:
        if letter.lower() in vowels:
            count[0] += 1
        elif letter.lower() in numbers: # .lower not really needed here
            count[1] += 1
        elif letter.lower() in consonants:
            count[2] += 1
        else:
            count[3] += 1
            
    return count

out = check_letters("Let's go Duke University 2018!")
print(out)
  • Question in class: does Python have ++ or -- operators; it does not. You need x += 1 or x -= 1

Lecture 6

Hurricane Florence - material moved to Lecture 7 and Lab 4

Lecture 7

Lecture 8

  • Be sure to use the resources available to you, especially the Think Python links in the Class Schedule!
  • When checking for valid inputs, sometimes the order of asking the questions can be important. If you want a single positive integer, you first need to ask if you have an integer before you ask if it is positive. The right way to go is something like:
if not isinstance(n, int) or n < 1:

because if n is not an integer, the logic will be True without having to evaluate the second question. This is a good thing, because if n is, for example, a list, asking the question n < 1 would yield an error.

  • We built up a program whose end goal was to compare and contrast the time it takes different methods of calculating Fibonacci numbers to run.
  • The plt.plot(x, y) command has some near cousins, such as plt.semilogx(x, y), plt.semilogy(x, y), and plt.loglog(x, y), which use logarithmic axes; plt.semilogy appears in the program below.
  • Fibonacci comparison program
# Fibonacci program from class
# -*- coding: utf-8 -*-
"""
Created on Fri Sep 21

@author: DukeEgr93
"""

import time
import numpy as np
import matplotlib.pyplot as plt


def fib(n):
    if not isinstance(n, int) or n < 1:
        print('Invalid input!')
        return None
    if n == 1 or n == 2:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)


"""
fib_dewey based on fibonacci at:
http://greenteapress.com/thinkpython2/html/thinkpython2012.html#sec135
"""
known = {1: 1, 2: 1}


def fib_dewey(n):
    if not isinstance(n, int) or n < 1:
        print('Invalid input!')
        return None
    if n in known:
        return known[n]
    res = fib_dewey(n - 1) + fib_dewey(n - 2)
    known[n] = res
    return res


def fibloop(n):
    if not isinstance(n, int) or n < 1:
        print('Invalid input!')
        return None
    head = 1
    tail = 1
    for k in range(3, n + 1):
        head, tail = head + tail, head
    return head


N = 30
uvals = [0] * N
utimes = [0] * N

mvals = [0] * N
mtimes = [0] * N

lvals = [0] * N
ltimes = [0] * N

for k in range(1, N + 1):
    temp_time = time.perf_counter()  # time.clock() no longer exists in Python 3.8+
    uvals[k - 1] = fib(k)
    utimes[k - 1] = time.perf_counter() - temp_time

    temp_time = time.perf_counter()
    mvals[k - 1] = fib_dewey(k)
    mtimes[k - 1] = time.perf_counter() - temp_time

    temp_time = time.perf_counter()
    lvals[k - 1] = fibloop(k)
    ltimes[k - 1] = time.perf_counter() - temp_time

for k in range(1, N + 1):
    print('{:2d} {:6d} {:0.2e}'.format(k, uvals[k - 1], utimes[k - 1]), end='')
    print(' {:6d} {:0.2e}'.format(mvals[k - 1], mtimes[k - 1]), end='')
    print(' {:6d} {:0.2e}'.format(lvals[k - 1], ltimes[k - 1]))

k = np.arange(1, N + 1)
plt.figure(1)
plt.clf()
plt.plot(k, utimes, k, mtimes, k, ltimes)
plt.legend(['recursive', 'memoized', 'looped'])

plt.figure(2)
plt.clf()
plt.semilogy(k, utimes, k, mtimes, k, ltimes)
plt.legend(['recursive', 'memoized', 'looped'])

Lecture 9

  • Different number systems convey information in different ways.
  • "One billion dollars!" may not mean the same thing to different people: Long and Short Scales
  • Floats (specifically double precision floats) are stored with a sign bit, 52 fractional bits, and 11 exponent bits. The exponent bits form a code:
    • 0 (or 00000000000): the number is either 0 or a denormal
    • 2047 (or 11111111111): the number is either infinite or not-a-number
    • Others: the power of 2 for scientific notation is 2**(code-1023)
      • The largest number is thus just *under* 2**1024 (it ends up being (2-2**-52)*2**1023\(\approx 1.798\times 10^{308}\)).
      • The smallest normal number (full precision) is 2**(-1022)\(\approx 2.225\times 10^{-308}\).
      • The smallest denormal number (only one significant binary digit) is 2**(-1022)/2**52, or 2**(-1074)\(\approx\)5e-324.
    • When adding or subtracting, Python can only operate on the common significant digits - meaning the smaller number will lose precision.
    • (1+1e-16)-1=0 and (1+1e-15)-1=1.1102230246251565e-15
    • Avoid intermediate calculations that cause problems: if x=1.7e308,
      • (x+x)/x is inf
      • x/x + x/x is 2.0
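  • A quick console check (a sketch assuming standard double precision floats) of several of the numbers and behaviors above:
# float limits demo - illustrative check of the double precision values above
import sys

print(sys.float_info.max)  # largest double, about 1.7976931348623157e+308
print(2**(-1022))          # smallest normal number, about 2.225e-308
print(2**(-1074))          # smallest denormal number, about 5e-324
print((1 + 1e-16) - 1)     # 0.0 - the 1e-16 is lost
print((1 + 1e-15) - 1)     # 1.1102230246251565e-15

x = 1.7e308
print((x + x) / x)         # inf - the intermediate x + x overflows
print(x / x + x / x)       # 2.0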
  • List-building roundoff demonstration
# Roundoff Demo
import numpy as np
import matplotlib.pyplot as plt

start = 10
delta = 0.1
finish = 100

k = 0
val_c = [start]
val_a = [start]

while val_c[-1] + delta <= finish:
    val_c += [val_c[-1] + delta]
    k = k + 1
    val_a += [start + k * delta]

array_c = np.array(val_c)
array_a = np.array(val_a)

diffs = [val_c[k]-val_a[k] for k in range(len(val_a))]

plt.figure(1)
plt.clf()
#plt.plot(array_c - array_a, 'k-')
plt.plot(diffs, 'k-')
  • Exponential demo program
# Exponential demo
#%%
import numpy as np
import matplotlib.pyplot as plt
#%%
def exp_calc(x, n):
    return (1 + (x/n))**n
#%% Shows that approximation gets better as n increases
print(np.exp(1))
print(exp_calc(1, 1))
print(exp_calc(1, 10))
print(exp_calc(1, 100))
print(exp_calc(1, 1000))
print(exp_calc(1, 10000))
print(exp_calc(1, 100000))
#%% Shows that once n is too big, problems happen
n = np.logspace(0, 18, 1000)

plt.figure(1)
plt.clf()
plt.semilogx(n, exp_calc(1, n), 'b-')

Lecture 10

  • List comprehensions: info in Think Python and Section 7.1.1 in Punch & Enbody
    • Basic idea: make a list of [EXPRESSION for VARIABLE in ITERABLE if LOGIC]
    • Even numbers 0 through 10: [k for k in range(11) if k%2==0]
    • Non-vowels in a word: [letter for letter in word if not letter in 'aeiouAEIOU']
    • Square roots of nonnegative numbers in a list of numbers: [n**0.5 for n in nums if n >= 0]
    • Print all three digit numbers with each digit unique:
# unique_codes.py
nums = range(10)
for H in nums:
    for T in [k for k in nums if not k in [H]]:
        for O in [k for k in nums if not k in [H, T]]:
            print(H, T, O)
  • Can also replace nums with whatever should comprise the three-element code; nums='hello' will make all three-letter words out of h, e, l, and o (24 distinct choices, though the repeated l in 'hello' means some get printed more than once)
  • Iterative solutions - where next approximation or guess is based on previous guess and some algorithm
    • Series method for finding the exponential \(e^x\):
      \( e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!} \)

so

\( y_{new}=y_{old}+\frac{x^n}{n!} \)

and the initial guess for \(y\) is the \(n=0\) term, or 1.

  • Newton method for square roots - to find \(y\) where \(y=\sqrt{x}\):
    \( y_{new}=\frac{y_{old}+\frac{x}{y_{old}}}{2} \)
and a good initial guess is \(y=x\). A bad initial guess is \(y=0\)
  • Chapra Figure 4.2 (as translated in Python) can be very helpful! Now featuring 100% fewer semi-colons!
# Chapra Figure 4.2 from Applied Numerical Methods with MATLAB, 4th ed, translated into Python:
import math as m

def iter_meth(x, es=0.0001, maxit=50):
    '''Maclaurin series of exponential function
    iter_meth(x,es,maxit)
    input:
    x = value at which series evaluated
    es = stopping criterion (default = 0.0001)
    maxit = maximum iterations (default = 50)
    output:
    fx = estimated value
    ea = approximate relative error (%)
    iter = number of iterations
    '''

    # initialization
    iter = 1
    sol = 1
    ea = 100
    # iterative calculation
    while 1:
        solold = sol
        sol = sol + x ** iter / m.factorial(iter)
        iter = iter + 1
        if sol != 0:
            ea = abs((sol - solold)/sol)*100
        
        if ea<=es or iter>=maxit:
            break

    fx = sol

    return (fx, ea, iter)
  • Only two changes in 4.2 to go from exponential calculation to finding a square root
  • Change initial sol = 1 to sol = x to make x initial guess for square root
  • Change
sol = sol + x ** iter / m.factorial(iter)
line to:
 sol = sol / 2 + x / (sol * 2)
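  • Putting those two changes together gives a square root version (an illustrative sketch with the same structure as Figure 4.2; the name iter_sqrt is not from class):
# Newton square root using the Figure 4.2 structure - illustrative sketch
def iter_sqrt(x, es=0.0001, maxit=50):
    iter = 1
    sol = x  # change 1: initial guess is x
    ea = 100
    while 1:
        solold = sol
        sol = sol / 2 + x / (sol * 2)  # change 2: Newton update for sqrt(x)
        iter = iter + 1
        if sol != 0:
            ea = abs((sol - solold) / sol) * 100
        if ea <= es or iter >= maxit:
            break
    return (sol, ea, iter)

print(iter_sqrt(2))  # first value is approximately 1.41421356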

Lecture 11

  • Debugging
  • Turtle basics
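  • A minimal turtle sketch (the square is just an example):
# turtle_square.py - illustrative sketch: draw a square
import turtle

t = turtle.Turtle()
for side in range(4):
    t.forward(100)  # move forward 100 units
    t.left(90)      # turn 90 degrees counterclockwise
turtle.done()       # keep the window open until it is closed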

Lecture 12

See EGR_103/Fall_2018/Lec_12

Lecture 13

Review

Lecture 14

Test

Lecture 15

  • Use NumPy for linear algebra - specifically, create 2-D arrays using a list of lists as an input argument to np.array()
  • Dot product is defined as the sum of the products of the corresponding elements in two arrays with the same number of elements
  • Element-wise operations work with corresponding elements in two same-shape arrays or with an array and a scalar; examples include *, /, +, -, **, trig functions, etc.
  • Matrix multiplication is a collection of dot products between rows of the first array and columns of the second. Pick a row from the first and a column from the second - the resulting dot product goes in the same row as picked from the first array and the same column as picked from the second array.
  • In Python 3.5 and above, the @ operator will do matrix multiplication; the np.dot(a, b) command also does matrix multiplication if a and b are 2-D arrays. If not using numpy, need to have a triple loop.
  • See Chapter 8 in Chapra.
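  • A short sketch (array values chosen only for illustration) of the operations above:
# linear algebra demo - illustrative sketch of the Lecture 15 operations
import numpy as np

A = np.array([[1, 2], [3, 4]])  # 2-D array from a list of lists
B = np.array([[5, 6], [7, 8]])
v = np.array([1, 2])
w = np.array([3, 4])

print(np.dot(v, w))  # dot product: 1*3 + 2*4 = 11
print(A * B)         # element-wise product: [[5, 12], [21, 32]]
print(A + 10)        # array and scalar:     [[11, 12], [13, 14]]
print(A @ B)         # matrix multiplication: [[19, 22], [43, 50]]
print(np.dot(A, B))  # same result with np.dot on 2-D arrays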


Lecture 16

See EGR_103/Fall_2018/Lec_16

Lecture 17

  • Statistical definitions: mean, model, estimate, St, Sr, r2
  • Coding statistics to quantify goodness of fit: Python:Fitting#Example_Code
  • Difference between "mathematically good" (high \(r^2\)) and "scientifically good"
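  • A minimal sketch (data and model chosen only for illustration; see the linked page for the class example code) of computing St, Sr, and \(r^2\) for a straight-line fit:
# goodness of fit sketch - St, Sr, and r**2 for a first-order polyfit
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])  # illustrative data

p = np.polyfit(x, y, 1)    # model: best-fit straight line
yhat = np.polyval(p, x)    # estimates from the model

St = np.sum((y - np.mean(y))**2)  # spread of the data about its mean
Sr = np.sum((y - yhat)**2)        # spread of the data about the model
r2 = (St - Sr) / St               # coefficient of determination

print(p, St, Sr, r2)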

Lecture 18

  • Recap of stats and polyfit/polyval code
  • Reminder that mathematically best fit for a given model has smallest Sr value
  • Linear algebra proof of how to calculate coefficients for best fit - punch line is to pre-multiply linear algebra equation by transpose of functions matrix
  • Key code is to build functions matrix and use np.linalg.lstsq to calculate coefficients:
a_mat = np.block([[xv**1, xv**0]])
pvec = np.linalg.lstsq(a_mat, yv, rcond=None)[0]
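  • A hedged end-to-end sketch (illustrative data) assuming xv is reshaped into a column vector so np.block stacks the columns of the functions matrix:
# least squares straight-line fit using a functions matrix - illustrative sketch
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
yv = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

xv = x.reshape(-1, 1)               # column vector so np.block builds columns
a_mat = np.block([[xv**1, xv**0]])  # functions matrix: [x, 1]
pvec = np.linalg.lstsq(a_mat, yv, rcond=None)[0]
print(pvec)                         # slope and intercept of the best fit line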

Lecture 19

Lecture 20 - Zeros and Optimizations

Lecture 21 - Basic Interpolation; Numerical Derivatives and Integrals

  • Interpolations are meant to estimate values between points in a data set
  • Interpolants must go through all the data points - different concept from a fit
  • Generally have different coefficients for the interpolant between every pair of points
  • Most basic interpolation simply looks at most recent value of data (i.e. data point "to the left")
  • Next most basic is nearest neighbor - closest point
  • Next is linear interpolation - straight line between neighboring points
    • We went through two processes of coming up with coefficients using linear algebra
    • Derivatives at points can be estimated by taking derivative of straight line
    • Have to decide if it is the line before or the line after
  • Newton quadratic forms a second order equation
    • Can use to calculate derivatives at start, middle, and end
    • Can use to calculate second derivatives along curve
  • Integrals calculate the area below a curve
    • Integrals using the "to the left" interpolation use a left Riemann sum
    • Integrals using linear interpolation result in Trapezoidal rule
    • Integrals using quadratic interpolation result in Simpson's (1/3rd) rule - must have odd number of points
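  • A short sketch (illustrative data) of the three integration rules above, applied to evenly spaced samples:
# numerical integration sketch - left Riemann, trapezoidal, and Simpson's 1/3 rule
import numpy as np

x = np.linspace(0, np.pi, 5)  # 5 evenly spaced points (odd, as Simpson's rule requires)
y = np.sin(x)                 # exact integral of sin from 0 to pi is 2
h = x[1] - x[0]

left_riemann = h * np.sum(y[:-1])                          # "to the left" interpolation
trapezoid = h * (y[0] + 2 * np.sum(y[1:-1]) + y[-1]) / 2   # linear interpolation
simpson = h * (y[0] + 4 * np.sum(y[1:-1:2])
               + 2 * np.sum(y[2:-1:2]) + y[-1]) / 3        # quadratic interpolation

print(left_riemann, trapezoid, simpson)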