
Math 128A: Numerical Analysis

Table of Contents

Lecture 1: Introduction (January 21, 2026)

Why numerical analysis?

Root finding, integrals, differential equations, etc.

Introductory Concepts

Consider a function, $G(s, x)$, where $s$ is a source position and $x$ is a spatial variable. ($x \in \mathbb{R}^2$ or $x \in \mathbb{R}^3$.) I might care about this definite integral:

\[f(x) = \int_{0}^{2\pi} G(s, x)\text{d}s\]

Suppose $s$ parameterizes a curve (a circle), $y(s) = (\cos(s), \sin(s))$.

Lecture 2

Definition (Convergence). Consider a sequence, $(x_n)$ in $\mathbb{R}$. $\lim_{n \to \infty}x_n = x$ if, for every $\epsilon > 0$, there exists $r \in \mathbb{R}$ such that for all $n > r$, $\vert x_n - x\vert < \epsilon$.

Lecture 3: Convergence (January 26, 2026)

Types of Convergence

Definition (At Least Linear Convergence). $(x_n)$ converges to $x$ at least linearly if there exists $c \in (0, 1)$ such that, for sufficiently large $n$, $\vert x_{n+1} - x\vert \leq c\vert x_{n} - x \vert$.

Definition (Superlinear Convergence). $(x_n)$ converges to $x$ superlinearly if there exists some sequence $(\epsilon_n) \in \mathbb{R}$ such that $(\epsilon_n)$ converges to 0 and, for sufficiently large $n$, $\vert x_{n+1} - x\vert \leq \epsilon_n\vert x_{n} - x \vert$.

Definition (Quadratic Convergence). $(x_n)$ converges to $x$ at least quadratically if there exists $c \gt 0$ such that, for sufficiently large $n$, $\lvert x_{n+1} - x\rvert \leq c\lvert x_{n} - x \rvert^2$.

Definition (Convergence of Order $\alpha$). $(x_n)$ converges to $x$ with order $\alpha$ if there exists $c \gt 0$ such that, for sufficiently large $n$, $\vert x_{n+1} - x\vert \leq c\vert x_{n} - x \vert^{\alpha}$.

Convergence of a general order $\alpha$ is often not practical to work with. Finally, a really weak notion of convergence is sublinear convergence (the $\alpha = 1$, $c = 1$ case).

Definition (Sublinear Convergence). $(x_n)$ converges to $x$ sublinearly if, for sufficiently large $n$, $\vert x_{n+1} - x\vert \leq \vert x_{n} - x \vert$.

Examples

  • $x_n = \frac{1}{2^n}$ converges linearly to 0. (Check with something similar to the ratio test!)
  • $y_n = \frac{1}{n!}$ converges at least superlinearly to 0, but not quadratically!
  • $z_n = \frac{1}{n^{2^n}}$ converges at least quadratically.
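The three examples above can be checked numerically, in the spirit of the ratio test: the quantity $\vert x_{n+1} - x \vert / \vert x_n - x \vert^{\alpha}$ stays bounded exactly when the sequence converges with order $\alpha$. A minimal Python sketch (function names are my own):

```python
# Estimate convergence rates for the example sequences (all with limit 0).
# ratios(seq, alpha, n_max) returns |x_{n+1}| / |x_n|^alpha for n = 1..n_max-1.
from math import factorial

def ratios(seq, alpha, n_max):
    """Ratio-test quantities for order-alpha convergence (limit is 0)."""
    return [abs(seq(n + 1)) / abs(seq(n)) ** alpha for n in range(1, n_max)]

x = lambda n: 1 / 2**n           # linear: ratio is the constant 1/2
y = lambda n: 1 / factorial(n)   # superlinear: ratio 1/(n+1) -> 0
z = lambda n: 1 / n**(2**n)      # at least quadratic: ratio stays bounded

print(ratios(x, 1, 5))   # [0.5, 0.5, 0.5, 0.5]
print(ratios(y, 1, 5))   # 1/2, 1/3, 1/4, 1/5 -> 0, so superlinear
print(ratios(z, 2, 5))   # bounded (in fact decreasing), so quadratic
```

For $y_n$ the linear ratio tends to $0$, confirming superlinear convergence; applying `ratios(y, 2, ...)` instead shows the quadratic ratio $n!/(n+1)$ blowing up, which is why $y_n$ is not quadratically convergent.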

Notation

For sequences, $x_n = \mathcal{O}(\alpha_n)$ means there exists $M > 0$ such that $\vert x_n \vert \leq M \vert \alpha_n \vert$ for sufficiently large $n$. For example, $\frac{n}{3n^2 + 1} = \mathcal{O}(\frac{1}{n})$. Also, if $x_n = \mathcal{O}(\alpha_n)$ and $y_n = \mathcal{O}(\alpha_n)$, then $x_n + y_n = \mathcal{O}(\alpha_n)$.

Definition (Big-$\mathcal{O}$ Notation for Functions). If $f,g$ are real-valued functions over real-valued inputs, then, for $c \in \mathbb{R}$, $f(x) = \mathcal{O}(g(x))$ as $x \to c$ if there exists $M > 0$ such that $\vert f(x) \vert \leq M \vert g(x) \vert$ for all $x$ near (but not equal to) $c$.

Definition (Little-$\mathcal{o}$ Notation). $x_n = \mathcal{o}(\alpha_n)$ if there exists $(\epsilon_n) \in \mathbb{R}_{\geq0}$ such that $(\epsilon_n)$ converges to 0 and $\vert x_n \vert \leq \epsilon_n \vert \alpha_n \vert$.

As an example, $\arctan(x) = x + \mathcal{O}(x^3)$ as $x \to 0$.
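This can be verified numerically: from the Taylor series $\arctan(x) = x - \frac{x^3}{3} + \ldots$, the ratio $\vert \arctan(x) - x \vert / \vert x \vert^3$ should stay bounded (in fact tend to $\frac{1}{3}$) as $x \to 0$. A quick check:

```python
from math import atan

# |arctan(x) - x| / |x|^3 should stay bounded as x -> 0; the Taylor
# series arctan(x) = x - x^3/3 + ... says the ratio tends to 1/3.
for x in [0.1, 0.01, 0.001]:
    print(x, abs(atan(x) - x) / abs(x) ** 3)
```

The printed ratios approach $\frac{1}{3} \approx 0.333$, consistent with the claimed $\mathcal{O}(x^3)$ remainder.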

Lecture 4: Floating Point (January 28, 2026)

Polynomials

Horner’s method for evaluating polynomials: nest the multiplications! In other words, to evaluate a polynomial, use $p(x) = a_0 + x(a_1 + x(a_2 + \ldots))$. How many multiplications and how many additions is this?
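For a degree-$n$ polynomial, the nested form uses $n$ multiplications and $n$ additions, versus up to $2n - 1$ multiplications if each power is formed separately. A minimal sketch (function name is my own):

```python
def horner(coeffs, x):
    """Evaluate p(x) = a_0 + a_1*x + ... + a_n*x^n via Horner's method.

    coeffs = [a_0, a_1, ..., a_n]. Working from the innermost nesting
    outward costs one multiplication and one addition per coefficient.
    """
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# p(x) = 1 + 2x + 3x^2 at x = 2: 1 + 4 + 12 = 17
print(horner([1, 2, 3], 2.0))  # 17.0
```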

Rounding and Chopping

Rounding and chopping are means of reducing the precision of a real number. If only $n$ digits after the decimal point are allowed, formally let $\tilde x$ be $x$ rounded to $n$ digits and let $\hat x$ be $x$ chopped (truncated) to $n$ digits.

Lemma (Error of Rounding). $\vert x - \tilde x \vert \leq \frac{1}{2} \cdot 10^{-n}$.

Lemma (Error of Chopping). $\vert x - \hat x \vert < 10^{-n}$.

Proof. Consider $n = 0$ and let $x_0 = \lfloor x \rfloor$, so $x \in [x_0, x_0 + 1)$. Chopping gives $\hat x = x_0$, so $\vert \hat x - x \vert < 1$. Rounding gives whichever of $x_0$, $x_0 + 1$ is closer, so $\vert \tilde x - x \vert \leq \frac{1}{2}$. For $n > 0$, reduce to the $n = 0$ case by multiplying by $10^n$. (More formally write this out later.)
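Both operations, and their error bounds, are easy to sketch in Python (function names are my own; this works with decimal digits, as in the lemmas, not binary):

```python
from math import floor

def chop(x, n):
    """Chop (truncate) x >= 0 to n decimal digits after the point."""
    scale = 10**n
    return floor(x * scale) / scale

def round_n(x, n):
    """Round x >= 0 to n decimal digits after the point."""
    return round(x * 10**n) / 10**n

x = 3.14159
print(chop(x, 3), round_n(x, 3))        # 3.141 3.142
assert abs(x - chop(x, 3)) < 10**-3     # chopping error bound
assert abs(x - round_n(x, 3)) <= 0.5 * 10**-3  # rounding error bound
```

Chopping always errs toward zero by strictly less than one unit in the last kept digit; rounding errs by at most half a unit.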

Scientific Notation

In base-2, a number $x$ can be represented as $\pm q \cdot 2^m$, where $q \in [\frac{1}{2}, 1)$ and $m$ is an integer.

Floating Point

A computer has specific ways of representing floating-point numbers. One bit is allocated to store the sign of the number. Some bits are allocated to store the biased exponent of the number. Finally, the remaining bits are allocated to store the mantissa (fraction).

For example,

  • $s$: 1 bit
  • $e$: 8 bits (with bias 127)
  • $f$: 23 bits

Specifically, $m = e - 127$ and $q = 1.f$ (for normalized numbers, the mantissa assumes an implicit leading 1).

What are the parameters for IEEE 64-bit floating point? What is $(\frac{1}{5})_2$ stored as?

For IEEE 64-bit (double precision): 1 sign bit, 11 exponent bits (with bias 1023), and 52 fraction bits. In binary, $\frac{1}{5} = (0.\overline{0011})_2$ repeats forever, so it cannot be stored exactly and must be rounded to fit the fraction bits.

Machine numbers are those exactly representable by the given machine. In general, if $x_-$ is the largest machine number less than $x$, and $x_+$ is the smallest machine number greater than $x$, then $\text{fl}(x)$ is whichever of $x_-$, $x_+$ is closer to $x$ (round to nearest).

What is $\text{fl}(\frac{1}{5})$?

$x_+$.
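This is visible directly in Python, where `float` is an IEEE 64-bit double: converting the stored bits of `0.2` to an exact decimal shows that $\text{fl}(\frac{1}{5})$ is slightly larger than $\frac{1}{5}$, i.e. the rounding went up to $x_+$.

```python
from decimal import Decimal

# 1/5 has no finite binary expansion: (0.2)_10 = (0.00110011...)_2.
# Decimal(0.2) prints the exact value of the stored double fl(0.2),
# which turns out to be slightly larger than 0.2 -- that is, x_plus.
print(Decimal(0.2))
assert Decimal(0.2) > Decimal("0.2")
```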

Definition (Machine Epsilon). The smallest $\epsilon > 0$ such that $\text{fl}(1 + \epsilon) \neq 1$.
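A common way to locate machine epsilon is to halve a candidate until adding it to 1 no longer changes the result. Restricted to powers of two, this halving search lands on $2^{-52}$ for IEEE doubles, which is what Python reports as its float epsilon:

```python
import sys

# Halve eps until fl(1 + eps/2) == 1; the last eps that still
# perturbed 1 is the (power-of-two) machine epsilon, 2^-52 for doubles.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(eps)  # 2.220446049250313e-16
assert eps == sys.float_info.epsilon
```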