White Paper 03
Date: 2026-01-09
Status: Complete draft (self-contained)
Curve Memory (CM) is a representation and update principle in which information is stored not primarily as discrete symbols or isolated state vectors, but as the geometry of a trajectory: tangents, curvature, torsion, and higher derivatives of a path evolving through time or through a latent space. CM is “memory as shape,” enabling robust reconstruction, smoothing, and compression of path-dependent information.
Curve Memory Alphabet (CMA) is an application-layer encoding scheme that maps discrete tokens (letters/words/commands) into base curve primitives that are then contextually deformed by previous/following tokens and by an internal memory state. The result is an expressive, continuous “script” where meaning is carried both by the symbol identity and by the geometry of its deformation chain.
This paper provides a rigorous mathematical formulation for CM and CMA, proposes practical parameterizations for implementation (splines, Fourier series, control points), defines update rules (EMA and reinforce/decay laws), shows how CM naturally produces compression transforms (difference/derivative coding), and gives falsifiable predictions.
Many systems fail because they store only points while the world is governed by paths: isolated snapshots discard the trajectory that produced them.
Curve Memory addresses this by making the curve itself (and its differential geometry) the primary data structure.
Let $\gamma:[0,1]\to\mathbb{R}^n$ be a curve (a path in state space).
Define the curve-memory feature bundle up to order $k$:
\[\boxed{\ M_k(s) := \big(\gamma(s),\gamma'(s),\gamma''(s),\ldots,\gamma^{(k)}(s)\big).\ }\]
A key interpretation is that shape information is distributed: it is not localized to a single coordinate but is encoded by derivatives along the curve.
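As an implementation sketch (my illustration, not a prescribed CM API; the function name `curve_jets` is hypothetical), the jet bundle $M_k$ can be approximated from uniformly spaced samples by repeated central differencing:

```python
import numpy as np

def curve_jets(points, ds, k=2):
    """Approximate the jet bundle M_k along a sampled curve.

    points : (N, n) array of samples gamma(s_i), uniformly spaced by ds.
    Returns [gamma, gamma', ..., gamma^(k)], each an (N, n) array,
    using repeated central differences (np.gradient).
    """
    jets = [points]
    d = points
    for _ in range(k):
        d = np.gradient(d, ds, axis=0)
        jets.append(d)
    return jets

# Example: circle of radius 2 at unit angular rate, so |gamma'| = 2
# (speed) and |gamma''| = 2 (centripetal acceleration) in the interior.
s = np.linspace(0, 2 * np.pi, 2001)
circle = np.stack([2 * np.cos(s), 2 * np.sin(s)], axis=1)
g, g1, g2 = curve_jets(circle, s[1] - s[0], k=2)
```

Accuracy degrades near the boundary, where `np.gradient` falls back to one-sided differences.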
If $\gamma$ is sufficiently smooth and parameterized by arc length $s$, curvature magnitude is
\[\boxed{\ \kappa(s) = \|\gamma''(s)\|.\ }\]
In 3D, torsion can be defined (when $\gamma',\gamma'',\gamma'''$ are well-behaved) by
\[\boxed{\ \tau(s) = \frac{\det(\gamma'(s),\gamma''(s),\gamma'''(s))}{\|\gamma'(s)\times\gamma''(s)\|^2}.\ }\]
CM does not require torsion, but it is a natural extension when 3D shape memory matters.
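A numerical sketch (assumptions: NumPy, samples of a regular 3D curve; for a general parameterization the code uses $\kappa = \|\gamma'\times\gamma''\|/\|\gamma'\|^3$, which reduces to $\|\gamma''\|$ at unit speed):

```python
import numpy as np

def curvature_torsion(g1, g2, g3):
    """Pointwise curvature and torsion of a 3D curve from its first three
    derivatives, for any regular (not necessarily arc-length) parameter."""
    cross = np.cross(g1, g2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(g1, axis=1)
    kappa = cross_norm / speed**3
    # det(g1, g2, g3) = <g1 x g2, g3>
    tau = np.einsum('ij,ij->i', cross, g3) / cross_norm**2
    return kappa, tau

# Helix (cos t, sin t, t): analytically kappa = tau = 1/2 everywhere.
t = np.linspace(0, 4 * np.pi, 4001)
g = np.stack([np.cos(t), np.sin(t), t], axis=1)
dt = t[1] - t[0]
g1 = np.gradient(g, dt, axis=0)
g2 = np.gradient(g1, dt, axis=0)
g3 = np.gradient(g2, dt, axis=0)
kappa, tau = curvature_torsion(g1, g2, g3)
```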
A practical CM system needs a finite-dimensional state.
Represent each segment of $\gamma$ as a cubic spline with control points $P_0,\ldots,P_m$.
Represent a periodic or windowed curve by a truncated Fourier series:
\[\gamma(s) \approx a_0 + \sum_{k=1}^K \big(a_k\cos(2\pi k s)+b_k\sin(2\pi k s)\big).\]
CM then stores $(a_k,b_k)$; deformation corresponds to coefficient updates.
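A minimal sketch of this representation (my own helper names, assuming a closed curve sampled uniformly at $s_i = i/N$ and $K \ll N$): fit the coefficients by discrete orthogonality, then reconstruct.

```python
import numpy as np

def fourier_fit(points, K):
    """Truncated Fourier coefficients of a closed sampled curve.

    points : (N, n) samples of gamma at s_i = i/N.
    Returns (a0, a, b) with a0 : (n,), a and b : (K, n).
    """
    N = points.shape[0]
    s = np.arange(N) / N
    ks = np.arange(1, K + 1)
    cos = np.cos(2 * np.pi * np.outer(ks, s))   # (K, N)
    sin = np.sin(2 * np.pi * np.outer(ks, s))
    a0 = points.mean(axis=0)
    a = 2 / N * cos @ points                    # (K, n)
    b = 2 / N * sin @ points
    return a0, a, b

def fourier_eval(a0, a, b, s):
    """Evaluate the truncated series at parameter values s."""
    ks = np.arange(1, a.shape[0] + 1)
    cos = np.cos(2 * np.pi * np.outer(s, ks))   # (M, K)
    sin = np.sin(2 * np.pi * np.outer(s, ks))
    return a0 + cos @ a + sin @ b

# Round trip on an ellipse: a single harmonic represents it exactly.
N = 256
s = np.arange(N) / N
ellipse = np.stack([3 * np.cos(2 * np.pi * s), np.sin(2 * np.pi * s)], axis=1)
a0, a, b = fourier_fit(ellipse, K=3)
recon = fourier_eval(a0, a, b, s)
```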
Store derivatives at anchor points:
\[J_i := \big(\gamma(s_i),\gamma'(s_i),\gamma''(s_i)\big).\]
This representation is convenient for “remember the tangent/curvature at landmarks” behavior.
For sampled points $x_0,\ldots,x_N$, discrete derivatives are finite differences: $\Delta x_t = x_t - x_{t-1}$, $\Delta^2 x_t = x_t - 2x_{t-1} + x_{t-2}$, and so on.
This is the bridge between CM and compression transforms.
CM becomes powerful when it is not just a static representation but a dynamical state that updates under stimuli.
Let $F_t$ be a feature extractor producing a curve feature vector (control points, Fourier coefficients, or jets). Then
\[\boxed{\ M_{t+1} = \lambda M_t + (1-\lambda)F_t,\quad \lambda\in[0,1).\ }\]
This yields smooth tracking with a clear “memory length” controlled by $\lambda$.
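The EMA law is a one-liner on any feature vector; a sketch (illustrative names), showing that a constant stimulus is absorbed geometrically with effective memory length about $1/(1-\lambda)$ steps:

```python
import numpy as np

def ema_update(M, F, lam):
    """One EMA step: M_{t+1} = lam * M_t + (1 - lam) * F_t."""
    return lam * M + (1 - lam) * F

# A constant feature vector F is absorbed geometrically: after t steps the
# residual is lam**t, so lam = 0.9 gives a memory length of roughly 10 steps.
M = np.zeros(4)
F = np.ones(4)
for _ in range(200):
    M = ema_update(M, F, lam=0.9)
```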
A continuous-time alternative mirrors reinforce/decay dynamics:
\[\boxed{\ \frac{dM}{dt} = \alpha\,S(t) - \mu\,M(t),\ }\]
where $S(t)$ is the stimulus, $\alpha$ a reinforcement gain, and $\mu$ a decay rate. This makes CM a general-purpose “geometric memory with time constants.”
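An explicit Euler discretization makes the time constants concrete (a sketch with illustrative parameter values): under a constant stimulus $S$, the memory relaxes to the fixed point $\alpha S/\mu$ with time constant $1/\mu$.

```python
def reinforce_decay_step(M, S, alpha, mu, dt):
    """Explicit Euler step of dM/dt = alpha*S - mu*M."""
    return M + dt * (alpha * S - mu * M)

# Constant stimulus: M converges to alpha*S/mu = 1.5*2.0/0.5 = 6.0
# with time constant 1/mu = 2; we integrate for 50 time units.
M, S, alpha, mu, dt = 0.0, 2.0, 1.5, 0.5, 0.01
for _ in range(5000):
    M = reinforce_decay_step(M, S, alpha, mu, dt)
```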
To encode the idea that “sharp bends resist change,” introduce a stiffness that increases with curvature:
\[\frac{d\gamma}{dt} = -\nabla_\gamma \mathcal{E}(\gamma) + \text{inputs},\]with energy
\[\boxed{\ \mathcal{E}(\gamma) = \int_0^1 \big( w_1\|\gamma'(s)\|^2 + w_2\|\gamma''(s)\|^2 \big)\,ds.\ }\]
This gives a precise variational backbone for CM: memory is a low-energy deformation path.
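The energy and its gradient flow have a direct discrete analogue on a polyline (a sketch under my own discretization: replace $\gamma'$ and $\gamma''$ by first and second differences; the function names are illustrative):

```python
import numpy as np

def bending_energy(P, w1, w2):
    """Discrete analogue of E(gamma): w1*sum|D1 P|^2 + w2*sum|D2 P|^2,
    with D1, D2 the first/second difference operators along the polyline."""
    return (w1 * (np.diff(P, 1, axis=0) ** 2).sum()
            + w2 * (np.diff(P, 2, axis=0) ** 2).sum())

def bending_energy_grad(P, w1, w2):
    """Exact gradient of the discrete energy: 2*(w1*D1'D1 + w2*D2'D2) @ P."""
    N = P.shape[0]
    D1 = np.diff(np.eye(N), 1, axis=0)      # (N-1, N)
    D2 = np.diff(np.eye(N), 2, axis=0)      # (N-2, N)
    return 2 * (w1 * D1.T @ D1 + w2 * D2.T @ D2) @ P

# Gradient flow with pinned endpoints: a noisy polyline relaxes toward a
# low-energy (smooth) deformation between its endpoints.
rng = np.random.default_rng(0)
P = np.linspace([0.0, 0.0], [1.0, 0.0], 50) + 0.1 * rng.standard_normal((50, 2))
P[0], P[-1] = [0.0, 0.0], [1.0, 0.0]
E0 = bending_energy(P, 1.0, 0.1)
for _ in range(2000):
    g = bending_energy_grad(P, 1.0, 0.1)
    g[0] = g[-1] = 0.0                       # keep endpoints fixed
    P -= 0.05 * g
```

The step size must stay below $2/L$ for the quadratic's Lipschitz constant $L$; the values above are chosen well inside that range.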
A system is Markovian if the next state depends only on the current state. CM is typically non-Markovian because the geometry of the curve reflects accumulated history.
One way to formalize this: define an internal curve state $M_t$ that is updated by EMA or reinforce/decay. The output at time $t$ depends on $M_t$, so the influence of old inputs persists through the geometric state.
A recurring empirical principle is that information concentrates where the curve bends: high-curvature regions carry the signal, while low-curvature stretches are predictable.
This makes CM naturally compatible with compression: you allocate bits where curvature is high.
CMA maps discrete symbols into a continuous script.
Let the alphabet of tokens be $\Sigma$ (letters, words, commands).
Assign each token $a\in\Sigma$ a base curve primitive:
\[\boxed{\ B(a):[0,1]\to\mathbb{R}^n.\ }\]
This base primitive is the “glyph skeleton” (e.g., loops, strokes, peaks).
Let the context at position $t$ be $c_t$ (previous tokens, next tokens, and/or a memory state $M_t$). Define a deformation operator
\[\boxed{\ D(\cdot\,;c_t): \{\text{curves}\}\to\{\text{curves}\}.\ }\]
Then the realized glyph for token $a_t$ is
\[\boxed{\ \Gamma_t = D\big(B(a_t); c_t\big).\ }\]
The full CMA message is the concatenation (with smoothing constraints) of $\Gamma_t$.
Concatenate glyph curves by enforcing continuity constraints at boundaries: at minimum $C^0$ (positions agree), and preferably $C^1$ (tangents agree).
A practical join rule uses a short transition spline that matches endpoints and tangents.
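One standard realization of such a join rule (a sketch; the cubic Hermite form is my choice, not mandated by CMA) interpolates between the exit point/tangent of one glyph and the entry point/tangent of the next:

```python
import numpy as np

def hermite_join(p0, t0, p1, t1, num=20):
    """Cubic Hermite transition curve from (p0, t0) to (p1, t1).

    Matches both endpoint positions and endpoint tangents, giving a
    C^1 join between consecutive glyph curves.
    """
    s = np.linspace(0.0, 1.0, num)[:, None]
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1

# Join the end of one glyph (position p0, tangent t0) to the start of
# the next (position p1, tangent t1).
p0, t0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p1, t1 = np.array([2.0, 1.0]), np.array([1.0, 0.0])
seg = hermite_join(p0, t0, p1, t1)
```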
Represent each glyph by control points $P\in\mathbb{R}^{m\times n}$. Let the context vector be $u(c_t)\in\mathbb{R}^d$.
Define
\[P_t = P^{(0)}(a_t) + A\,u(c_t),\]
where $P^{(0)}(a_t)$ is the base control-point matrix of token $a_t$ and $A\in\mathbb{R}^{(m\cdot n)\times d}$ is a learned or designed matrix acting on the flattened control points.
This is the simplest “context bends the glyph” mechanism.
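This mechanism fits in a few lines (a sketch with a toy random alphabet; the names `deform_glyph` and `base` are illustrative):

```python
import numpy as np

def deform_glyph(P0, A, u):
    """Linear context deformation P_t = P0 + reshape(A @ u).

    P0 : (m, n) base control points, A : (m*n, d), u : (d,) context vector.
    """
    m, n = P0.shape
    return P0 + (A @ u).reshape(m, n)

# Toy alphabet: two base glyphs of 4 control points in 2D, deformed by a
# 3-dimensional context vector through a small random matrix A.
rng = np.random.default_rng(1)
base = {"a": rng.standard_normal((4, 2)), "b": rng.standard_normal((4, 2))}
A = 0.1 * rng.standard_normal((8, 3))
u = np.array([1.0, -0.5, 0.2])
glyph = deform_glyph(base["a"], A, u)
```

With a zero context vector the glyph reduces to its base skeleton, which is a useful sanity check for any deformation operator.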
CMA can be used purely artistically, but it can also be made decodable.
Given an observed curve segment $\hat\Gamma$, decode by comparing it against candidate realized glyphs and selecting the token whose glyph is nearest under a shape distance.
A common distance is elastic shape matching (e.g., curvature profile distance):
\[d(\hat\Gamma,\Gamma) = \int_0^1 \|\kappa_{\hat\Gamma}(s)-\kappa_{\Gamma}(s)\|^2\,ds.\]
Pick the token minimizing the distance.
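A decoder sketch under simplifying assumptions (glyphs share a common parameterization treated as an arc-length proxy, so $\kappa \approx \|\gamma''\|$; all names are illustrative):

```python
import numpy as np

def curvature_profile(points):
    """Discrete curvature-magnitude profile |gamma''(s)| of a curve sampled
    on a common parameter s in [0, 1] (used as an arc-length proxy)."""
    ds = 1.0 / (len(points) - 1)
    g2 = np.gradient(np.gradient(points, ds, axis=0), ds, axis=0)
    return np.linalg.norm(g2, axis=1)

def decode(observed, candidates):
    """Nearest-token decoding under the L2 curvature-profile distance."""
    k_obs = curvature_profile(observed)
    dists = {a: np.mean((curvature_profile(G) - k_obs) ** 2)
             for a, G in candidates.items()}
    return min(dists, key=dists.get)

# Toy alphabet: two parabolic strokes distinguished only by bend sharpness.
s = np.linspace(0.0, 1.0, 200)[:, None]
glyphs = {
    "u": np.hstack([s, (s - 0.5) ** 2]),      # gentle bend
    "v": np.hstack([s, 4 * (s - 0.5) ** 2]),  # sharp bend
}
noisy = glyphs["v"] + 1e-4 * np.random.default_rng(2).standard_normal((200, 2))
```

Note that second differences amplify noise, so a practical decoder would smooth the observed curve before computing its curvature profile.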
Because deformation depends on context, a decoder can additionally model token sequences (for example, with an $n$-gram or hidden-Markov prior over $\Sigma$) and combine sequence likelihood with shape distance.
This turns CMA into a legitimate communications channel.
The CM lens suggests that many signals become compressible after transforming them into “low-curvature” coordinates.
For bytes $b_0,\ldots,b_{N-1}$ (interpreted as points on a 1D curve), define
\[d_0=b_0,\quad d_t=(b_t-b_{t-1})\bmod 256.\]
If the underlying data has local smoothness, the deltas concentrate near zero and compress well under entropy coding.
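The transform and its inverse, as a sketch (illustrative names; a smooth byte ramp demonstrates the concentration of deltas):

```python
import numpy as np

def delta_encode(b):
    """d_0 = b_0, d_t = (b_t - b_{t-1}) mod 256."""
    b = np.asarray(b, dtype=np.int64)
    d = np.empty_like(b)
    d[0] = b[0]
    d[1:] = (b[1:] - b[:-1]) % 256
    return d

def delta_decode(d):
    """Invert delta coding: running sum modulo 256."""
    return np.cumsum(np.asarray(d, dtype=np.int64)) % 256

# A smooth byte ramp: 300 samples spanning 0..255, so every delta is 0 or 1
# and the delta stream has a tiny alphabet, ideal for entropy coding.
b = np.linspace(0, 255, 300).astype(np.int64)
d = delta_encode(b)
```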
The second difference
\[\Delta^2 b_t = b_t - 2b_{t-1} + b_{t-2}\]
corresponds to a discrete curvature proxy. Many structured sequences have sparse second differences, which can be exploited.
For numeric sequences, fit short segments by low-degree polynomials or splines and encode residuals. The “curve memory” state stores segment coefficients; residuals are typically small.
This is a principled version of: “store the curve, not the points.”
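A sketch of segment fitting (my own choices of window length and degree; `fit_segments` is an illustrative name): the stored "curve memory" is the per-segment polynomial coefficients, and only small residuals remain to be entropy coded.

```python
import numpy as np

def fit_segments(x, seg_len=16, deg=2):
    """Fit each length-seg_len window of x with a degree-deg polynomial.

    Returns (coeffs, resid): one coefficient row per segment (the curve
    memory) and the concatenated fit residuals (to be entropy coded).
    """
    coeffs, resid = [], []
    t = np.arange(seg_len)
    for i in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[i:i + seg_len]
        c = np.polyfit(t, seg, deg)
        coeffs.append(c)
        resid.append(seg - np.polyval(c, t))
    return np.array(coeffs), np.concatenate(resid)

# A slowly varying signal plus small noise: residuals are orders of
# magnitude smaller than the raw values.
t = np.arange(256)
x = 50 * np.sin(2 * np.pi * t / 256)
x = x + 0.1 * np.random.default_rng(3).standard_normal(256)
coeffs, resid = fit_segments(x)
```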
CM supports:
CM provides a natural state for:
CM can be used to store:
CMA can be used as: