"Fluxion: Building a Propagator-Based Reactive Web UI Framework in Common Lisp"

22 min read · By Glenn Thompson
Tags: common-lisp, reactive-programming, propagators, web-framework, research

Abstract

I am researching and developing Fluxion, a reactive web UI framework to be written entirely in Common Lisp.

The name is a nod to Isaac Newton. In 1665, Newton introduced fluxions as his way of describing instantaneous rates of change: the derivative of what he called a fluent, a quantity that varies over time. Fluxions and fluents were the vocabulary of Newton's early calculus, detailed in his treatise Method of Fluxions. A reactive UI framework is, at its core, a system for managing values that change over time and propagating those changes through a network of dependencies. The connection felt natural.

The reactive engine at the heart of Fluxion is called Lattice. If Fluxion is the framework you build applications with, Lattice is the substrate that makes it reactive. It provides the cells, the computed cells, the propagators, and the transactional glitch-prevention machinery. Lattice is what interweaves and connects everything: it is the layer that takes isolated pieces of mutable state and turns them into a coherent, self-updating graph. The name comes from Radul and Sussman's use of lattice-theoretic merge operations in their propagator model, where partial information accumulates monotonically toward a consistent result.

The idea is to render components server-side, stream HTML updates to the browser over Server-Sent Events (SSE), and have Lattice manage all the reactivity underneath. The goal is a system where application developers write zero JavaScript, where bidirectional data flow is a first-class primitive, and where the classic diamond glitch problem is eliminated through a transactional propagation model. What follows is a write-up of the research that led me here, from Elliott and Hudak's Functional Reactive Animation paper, through Radul's dissertation on propagation networks, to the design decisions I am making for Fluxion's cell graph.

Motivation

Most modern web UI frameworks are built on a reactive programming model of some kind. React has its virtual DOM diffing and unidirectional data flow. Vue and Svelte offer fine-grained reactivity with dependency tracking. Solid.js compiles reactive primitives into efficient DOM updates. All of these share a common ancestry in functional reactive programming (FRP), but they also share a common limitation: they restrict data flow to a directed acyclic graph (DAG). Information flows in one direction. If you need bidirectional relationships, like a temperature converter where editing Celsius updates Fahrenheit and editing Fahrenheit updates Celsius, you step outside the reactive model and write imperative glue code.

I wanted to explore whether this limitation is fundamental or merely conventional. The answer, it turns out, has been known for over fifteen years.

The Research Trail

Functional Reactive Animation (Elliott and Hudak, 1997)

I started with Conal Elliott and Paul Hudak's 1997 ICFP paper, Functional Reactive Animation (Fran). Elliott and Hudak introduced the idea that time-varying values (behaviours) and discrete occurrences (events) could be treated as first-class values in a functional language. Instead of writing callbacks that mutate shared state in response to user input, you compose pure functions over continuous signals.

What made Fran elegant was the abstraction. A behaviour is a function from time to a value. An event is a time-stamped occurrence. You combine them with standard functional combinators: lift, switch, until. The result is a declarative description of how your UI evolves over time, rather than an imperative script of what to do when something happens.

What struck me was not the animation domain (which was the paper's focus) but the underlying computational model. Expressing the entire state of a user interface as a composition of pure functions over time-varying values felt like the right abstraction for server-rendered web applications too. The server holds all state. The browser is a dumb terminal that receives rendered HTML. The reactive graph lives entirely on the server, and SSE provides the channel for pushing updates.

But Fran, and most FRP systems that followed, enforce one-way data flow. The dependency graph must be acyclic. This is a significant limitation for real-world UIs where bidirectional constraints are common.

Propagation Networks (Radul, 2009)

Then I found Alexey Radul's 2009 MIT PhD dissertation, Propagation Networks: A Flexible and Expressive Substrate for Computation, supervised by Gerald Jay Sussman. Radul's thesis argues that propagation networks, consisting of cells that hold values and propagators that compute relationships between cells, form a more general substrate for computation than traditional expression evaluation.

In a propagator network, information flows multidirectionally. A propagator that knows F = C * 9/5 + 32 can compute Fahrenheit from Celsius, but a second propagator knowing C = (F - 32) * 5/9 can compute Celsius from Fahrenheit. Both propagators coexist in the network. When you write to the Celsius cell, the first propagator fires, updates Fahrenheit, and the second propagator sees that its output (Celsius) already has the correct value and does nothing. The network converges.

This is a different animal from DAG-based reactive systems, where the dependency graph must be a directed acyclic graph with no cycles and no backward edges. Propagator networks permit cycles. The key insight is that propagation stops when values converge: if a propagator computes a result that is equal to the current value of its output cell, no further notifications are generated. The equality check is the termination condition.

Radul and Sussman's model also introduces the concept of partial information. Cells don't just hold values; they accumulate information through a merge operation defined on a lattice. This is where the name "Lattice" for Fluxion's reactive engine comes from. In Fluxion's design, cells will hold concrete values and use an equality test rather than a full lattice merge, but the conceptual framework, and the name, honour the lineage.

Three things about propagator networks grabbed me for the web UI use case:

  1. Bidirectional flow. Real UIs have bidirectional constraints. A temperature converter, a unit converter, a form where changing one field recomputes another: all of these are naturally expressed as bidirectional propagator networks.

  2. Composability. Propagators are local. Each one knows only about its input and output cells. You can add new propagators to an existing network without modifying the existing ones. This maps well to a component model where each component sets up its own reactive relationships.

  3. Convergence. The equality check on cells provides a natural termination condition. Combined with Common Lisp's exact rational arithmetic, bidirectional numeric propagators converge perfectly with no floating-point oscillation.

Functional Reactive Programming with Propagators (Thompson, FOSDEM 2026)

The thing that finally pushed me to start building was David Thompson's talk at FOSDEM 2026, Functional reactive programming with propagators (video), presented in the Declarative and Minimalistic Computing track. Thompson is CTO at the Spritely Institute (the organisation behind Goblins and Hoot, the Scheme-to-WebAssembly compiler), and his talk demonstrated a working FRP system built on propagators in Guile Scheme, running live in the browser via WebAssembly.

Thompson opened with a useful framing: reactive programming is "a way to work with time-varying values without entering callback hell." He acknowledged the contested nature of FRP as a term, noting the many variants (continuous, discrete, push-based, pull-based, hybrid push-pull) and settling on discrete push-based FRP as the most practical model for event-loop-driven systems.

He traced the history through Elliott and Hudak's 1997 paper and then to what he described as a turning point: Evan Czaplicki's 2016 "farewell to FRP" blog post, where the creator of Elm abandoned FRP in favour of a simpler architecture. Thompson described this as "kind of like the death knell of FRP as an idea that was gathering steam." But the bug never left his brain, and he kept returning to the problem.

The core of the talk introduced propagators via Radul's dissertation and Hanson and Sussman's Software Design for Flexibility (2021, MIT Press), quoting the definition directly: propagators are "autonomous independent machines interconnected by shared cells through which they communicate." Thompson emphasised two properties: cells contain partial information that accumulates over time, and propagator networks have no inherent order of evaluation and no directionality restrictions.

That second point matters a lot. In a propagator network, you cannot do a topological sort because the graph may contain cycles. Thompson's solution, drawn from Radul's dissertation, uses an approach that resembles logical timestamps in a distributed system. Each value entering the network from the outside world is tagged with a unique identifier and a monotonically increasing logical timestamp. As values flow through the graph, each cell accumulates a set of (identifier, timestamp) pairs, forming a vector clock. When a propagator fires, it checks whether its inputs have consistent timestamps. If the timestamps for the same identifier differ across inputs (meaning one input has fresh data and another has stale data), propagation halts at that node. The propagator waits until all its inputs are consistent before computing.

Thompson defined his core data type as an "ephemeral": a record containing an arbitrary value and a set of sources (the vector clock). He also introduced the concept of a "group": related cells that share the same identifier and timestamp counter. In his colour picker demo, the R, G, and B cells form a group. Changing any one of them increments the shared logical time, so propagators downstream know that all three values belong to the same generation.
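A toy sketch makes the consistency check concrete. The names here (ephemeral, sources-consistent-p) and the alist representation are mine for illustration, not Thompson's API: each value carries (source-id . timestamp) pairs, and a propagator may proceed only when no source appears with two different timestamps across its inputs.

```lisp
;; Toy model of Thompson's "ephemeral" values; names are hypothetical.
;; SOURCES is an alist of (source-id . logical-timestamp) pairs.
(defstruct ephemeral value sources)

(defun sources-consistent-p (ephemerals)
  "True when no source id appears with two different timestamps across
the inputs, i.e. no input is stale relative to another."
  (let ((seen (make-hash-table :test 'eq)))
    (dolist (e ephemerals t)
      (loop for (id . ts) in (ephemeral-sources e)
            do (let ((prev (gethash id seen)))
                 (cond ((null prev) (setf (gethash id seen) ts))
                       ((/= prev ts)
                        (return-from sources-consistent-p nil))))))))

;; A fresh and a stale view of the same source R halt propagation:
(sources-consistent-p
 (list (make-ephemeral :value 1 :sources '((r . 3)))
       (make-ephemeral :value 2 :sources '((r . 2)))))
;; => NIL (the second input is one generation behind)
```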

The demo was a live colour picker running in the browser as a Scheme program compiled to WebAssembly via Hoot. RGB sliders on one side, HSV sliders on the other, with bidirectional propagators converting between the two colour spaces. Adjusting any slider on either side immediately updated the other side, with no glitches, no infinite loops, and no imperative glue code. The circularity (RGB → HSV → RGB) was handled entirely by the propagator network.

Thompson was candid about the trade-offs: his approach involves some redundant computation because a propagator may fire and discover it cannot proceed (stale inputs), whereas topological sort guarantees each node is ready when visited. But the total redundant computation is minimal in practice, and the benefit is a much simpler substrate that supports cyclic dependencies natively.

The talk confirmed that the propagator approach was viable for FRP and gave me the confidence to commit to this architecture for Fluxion. Thompson's implementation in Guile Scheme showed the model working in a Lisp context, and I wanted to bring the same ideas to Common Lisp with a focus on server-rendered web UI.

Vector Clocks vs. Transactional Propagation

Thompson's glitch-prevention mechanism and the one I am designing for Fluxion are actually different, even though both derive from Radul's propagator model. I think the distinction is interesting enough to spell out.

Thompson uses logical timestamps (vector clocks). Propagation halts when a propagator detects inconsistent timestamps across its inputs. There is no global ordering imposed on the graph. The system is fundamentally unordered, and consistency emerges from the timestamp checks. This is elegant and maps naturally to the propagator model's philosophy of autonomous, independent machines.

Fluxion will use a transactional model. The idea is that all cell mutations within a transaction are deferred: downstream dependents are not notified immediately but collected into a queue. When the transaction commits, the queue is flushed in a controlled order so that every cell sees a consistent snapshot of its dependencies. Within that flush, one strategy is to sort by topological height (base cells at 0, derived cells at max(dep heights) + 1), but the key guarantee comes from the transaction boundary itself. You never see intermediate state because nothing propagates until the transaction completes.

Both approaches achieve glitch-free propagation, but they make different trade-offs. The vector clock approach is more general and does not require the graph to be structured in levels. The transactional approach is simpler to implement in a single-threaded-per-transaction context and integrates naturally with Common Lisp's lock-based concurrency model. For a server-side web framework where each transaction would run under a global lock, the transactional approach feels like the right fit.

Fluxion's Architecture

At a high level, the design looks like this:

  • CLOS components. UI components will be ordinary CLOS classes with render and handle-action methods.
  • Server-side state. All application logic and state live on the server.
  • HTML over the wire. The server sends rendered HTML fragments, not JSON.
  • SSE patches. Server-Sent Events carry patch instructions (morph, append, remove) as JSON payloads.
  • DOM morphing. The client diffs new HTML against the live DOM and only updates what changed, preserving input focus, cursor position, and scroll state.
  • Parenscript client. The browser runtime will be written in Parenscript and compiled to a single small JS file. Application developers write zero JavaScript.

The server owns the state. The browser follows instructions.

Lattice: The Reactive Engine

The reactive layer is called Lattice. The design has three building blocks: cells, computed cells, and propagators.

Cells

A cell holds a single value and notifies watchers when it changes. The intended API looks like this:

(let ((count (fluxion.cells:make-cell 0 :name "count")))
  (fluxion.cells:watch count
    (lambda (new old)
      (format t "changed: ~A -> ~A~%" old new)))
  (setf (fluxion.cells:cell-value count) 1))
;; prints: changed: 0 -> 1

Each cell would carry an equality test function (defaulting to equal). When you set a cell's value, the test is applied to the old and new values. If they are equal, no notification is generated. This is the convergence mechanism that makes cyclic propagator networks terminate.
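A self-contained toy model (not Fluxion's actual API) shows the convergence rule in isolation: writes that pass the equality test produce no notification at all.

```lisp
;; Toy cell illustrating equality-based convergence; Fluxion's real
;; cells will differ. All names here are illustrative.
(defstruct (toy-cell (:constructor make-toy-cell (value)))
  value (test #'equal) (watchers '()))

(defun set-toy-cell (cell new)
  "Write NEW to CELL, notifying watchers only on a genuine change."
  (let ((old (toy-cell-value cell)))
    (unless (funcall (toy-cell-test cell) old new)
      (setf (toy-cell-value cell) new)
      (dolist (w (toy-cell-watchers cell))
        (funcall w new old)))))

(let ((c (make-toy-cell 0))
      (fires 0))
  (push (lambda (new old)
          (declare (ignore new old))
          (incf fires))
        (toy-cell-watchers c))
  (set-toy-cell c 1)  ; 0 -> 1: watcher fires
  (set-toy-cell c 1)  ; 1 -> 1: equal, notification suppressed
  fires)
;; => 1
```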

Computed Cells

Computed cells derive their value from other cells. The plan is to track dependencies automatically via a dynamic variable (*tracking-reads*). When a computed cell's thunk reads another cell, that read would be recorded and watchers wired up automatically:

(let* ((count (fluxion.cells:make-cell 5))
       (label (fluxion.cells:make-computed
               (lambda ()
                 (format nil "Count is ~D"
                   (fluxion.cells:cell-value count))))))
  (fluxion.cells:cell-value label))
;; => "Count is 5"

Computed cells would also track their height in the dependency graph. A base cell has height 0. A computed cell's height is max(dependency heights) + 1. This height would be recalculated every time the computed cell recomputes, keeping the topology accurate even as dependencies change dynamically.
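The height rule is simple enough to state as code. A toy sketch, with node standing in for a cell (not Fluxion's actual representation):

```lisp
;; Toy sketch of the height rule; NODE stands in for a cell here.
(defstruct node (height 0) deps)

(defun recompute-height (node)
  "Base cells stay at height 0; a derived cell sits one level above
its tallest dependency. Calling this on every recompute keeps the
topology accurate as dependencies change."
  (setf (node-height node)
        (if (node-deps node)
            (1+ (reduce #'max (node-deps node) :key #'node-height))
            0)))

;; A diamond-shaped graph: A at height 0, B and C at 1, D at 2.
(let* ((a (make-node))
       (b (make-node :deps (list a)))
       (c (make-node :deps (list a)))
       (d (make-node :deps (list b c))))
  (mapc #'recompute-height (list b c d))
  (node-height d))
;; => 2
```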

Propagators

Propagators connect input cells to output cells through a function. They support bidirectional networks. Here is how the temperature converter example would look:

(let ((celsius    (fluxion.cells:make-cell 0))
      (fahrenheit (fluxion.cells:make-cell 32)))
  (fluxion.cells:make-propagator
   :inputs (list celsius)
   :fn (lambda (c) (+ (* c 9/5) 32))
   :outputs (list fahrenheit))
  (fluxion.cells:make-propagator
   :inputs (list fahrenheit)
   :fn (lambda (f) (* (- f 32) 5/9))
   :outputs (list celsius))
  (setf (fluxion.cells:cell-value celsius) 100)
  (fluxion.cells:cell-value fahrenheit))
;; => 212

Two propagators form a cycle: Celsius drives Fahrenheit, and Fahrenheit drives Celsius. If you set Celsius to 100, the first propagator fires and sets Fahrenheit to 212. The second propagator then fires and computes (212 - 32) * 5/9 = 100, which is exactly the current value of the Celsius cell. The equality check sees no change. Propagation stops. No infinite loop.

Common Lisp's exact rational arithmetic is critical here. The conversion (+ (* c 9/5) 32) and its inverse (* (- f 32) 5/9) use rationals, not floating-point. The round-trip is mathematically exact. In languages with only floating-point arithmetic, bidirectional numeric propagators can oscillate due to rounding errors. In Common Lisp, they converge perfectly.

Each propagator would also have a re-entrance guard (active-p flag). If a propagator is already firing and a notification arrives (from its own output feeding back through the network), the notification is ignored. This provides a second layer of protection against infinite loops, on top of the equality-based convergence.
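A sketch of what such a guard could look like, with cell access abstracted into reader and writer functions so the example is self-contained. The struct layout and fire-prop are illustrative, not Fluxion's final design:

```lisp
;; Illustrative propagator with an ACTIVE-P re-entrance guard.
(defstruct prop fn inputs outputs (active-p nil))

(defun fire-prop (p read-fn write-fn)
  "Fire P unless it is already firing; notifications caused by P's own
output writes feeding back through the network are silently ignored."
  (unless (prop-active-p p)
    (setf (prop-active-p p) t)
    (unwind-protect
         (let ((result (apply (prop-fn p)
                              (mapcar read-fn (prop-inputs p)))))
           (dolist (out (prop-outputs p))
             (funcall write-fn out result)))
      ;; Always clear the guard, even if the function signals.
      (setf (prop-active-p p) nil))))

;; Cells modelled as hash-table entries for this sketch:
(let ((cells (make-hash-table))
      (p (make-prop :fn (lambda (c) (+ (* c 9/5) 32))
                    :inputs '(:celsius) :outputs '(:fahrenheit))))
  (setf (gethash :celsius cells) 100)
  (fire-prop p
             (lambda (k) (gethash k cells))
             (lambda (k v) (setf (gethash k cells) v)))
  (gethash :fahrenheit cells))
;; => 212
```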

Solving the Diamond Glitch Problem

What Is a Glitch?

The diamond glitch is a well-known problem in reactive programming. Consider four cells in a diamond dependency:

    A
   / \
  B   C
   \ /
    D

Cell A is a source. B and C both depend on A. D depends on both B and C. When A changes, the runtime must recompute B, C, and D. But in what order?

If the runtime recomputes B first, then immediately notifies D, D will recompute with the new value of B but the stale value of C. This intermediate state, where D sees B_new and C_old, is a glitch. It represents a state that should never have existed. D then recomputes again when C catches up, producing the correct value. But the glitch was observable: watchers on D saw an inconsistent intermediate value, and any side effects triggered by that value (like sending an HTML patch to the browser) cannot be undone.

In a web UI framework, glitches mean sending incorrect HTML to the client, which flickers and then corrects itself. This is unacceptable.

How Fluxion Will Eliminate Glitches

Lattice's approach is to use a transactional model to guarantee glitch-free propagation. The transaction boundary is the core mechanism. Height-based ordering within the flush is one strategy for deciding the order in which deferred cells are processed, but what matters is that nothing propagates until the transaction completes.

1. Transactions defer notifications.

A with-transaction macro will establish a transaction context. Within a transaction, when a cell's value changes, downstream dependents are not notified immediately. Instead, they are enqueued into a priority queue. Each cell is enqueued at most once per transaction (deduplication via a hash set).
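The enqueue step could look like this. The tx struct is a toy stand-in; the real transaction object would carry more state:

```lisp
;; Toy transaction record: a pending queue plus an identity-keyed
;; dedup set. Names are illustrative.
(defstruct tx
  (queue '())                             ; cells awaiting the flush
  (seen (make-hash-table :test 'eq)))     ; dedup set

(defun tx-enqueue (tx cell)
  "Record CELL for the commit-time flush, at most once per transaction."
  (unless (gethash cell (tx-seen tx))
    (setf (gethash cell (tx-seen tx)) t)
    (push cell (tx-queue tx))))

(let ((tx (make-tx))
      (cell (list :some-cell)))  ; any object identity works as a key
  (tx-enqueue tx cell)
  (tx-enqueue tx cell)           ; duplicate: ignored
  (length (tx-queue tx)))
;; => 1
```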

2. The queue is flushed when the transaction commits.

When the outermost with-transaction form completes, the queue is flushed in a controlled order. One approach is to sort by cell height (base cells at 0, derived cells at max(dep heights) + 1), so that all cells at a given level are up-to-date before any cell at a higher level recomputes.

In the diamond example:

  • A changes. B and C (both at height 1) are enqueued.
  • The transaction commits. B and C are both at height 1, so they are processed before D (height 2).
  • B recomputes. D is enqueued (but not processed yet).
  • C recomputes. D would be enqueued again, but deduplication keeps it in the queue only once.
  • Now all height-1 cells are done. D (height 2) is processed. It sees the new values of both B and C. No glitch.
  • D recomputes exactly once, with consistent inputs.

3. Transactions nest.

Inner transactions would be absorbed by the outermost transaction. No flush happens until the outer transaction completes. This means you can compose functions that use transactions internally without worrying about premature flushes.

4. Propagators use transactions internally.

The fire-propagator function would wrap its output writes in with-transaction. This means propagator output would always be glitch-free, even without the application developer explicitly using transactions.

Here is a sketch of how with-transaction should work:

(defmacro with-transaction (&body body)
  (let ((outer (gensym "OUTER-")))
    `(let ((,outer *transaction*))
       (if ,outer
           ;; Nested: absorb into the enclosing transaction; no flush here.
           (progn ,@body)
           ;; Outermost: take the global lock, run the body, then flush
           ;; the deferred notifications before releasing.
           (with-cell-lock
             (let ((*transaction* (make-tx)))
               (prog1 (progn ,@body)
                 (tx-flush *transaction*))))))))

And a sketch of the flush logic:

(defun tx-flush (tx)
  ;; Recomputing a cell may enqueue further cells, so loop until the
  ;; queue drains; each pass handles one batch in height order.
  (loop while (tx-queue tx) do
    (let ((sorted (sort (tx-queue tx) #'< :key #'cell-height)))
      (setf (tx-queue tx) nil)
      (clrhash (tx-seen tx))   ; permit re-enqueueing on later passes
      (dolist (cell sorted)
        (typecase cell
          (computed-cell (recompute cell))
          ;; Base cells were already written; this sketch does not keep
          ;; the pre-transaction value, so old and new coincide here.
          (t (notify-watchers cell
                              (slot-value cell 'value)
                              (slot-value cell 'value))))))))

The outer loop (loop while (tx-queue tx)) would handle the case where recomputing a cell enqueues further cells. This can happen when the dependency graph is deeper than two levels. Each iteration sorts and processes the current queue, and any newly enqueued cells get picked up in the next iteration.

Why Not a Push-Based or Pull-Based Model?

Traditional FRP systems use either push-based (eager) or pull-based (lazy) evaluation. Push-based systems propagate changes immediately, which causes glitches in diamond dependencies. Pull-based systems defer computation until a value is demanded, which avoids glitches but makes it difficult to trigger side effects (like sending SSE patches) at the right time.

Lattice's transaction model is neither purely push nor purely pull. I think of it as deferred push: changes are pushed through the graph, but the push is deferred and ordered by the transaction. You get the efficiency of push-based propagation (no unnecessary recomputation) with the consistency guarantees of a pull-based system (no glitches).

The Intended Round-Trip: From Cell to Browser

To show how all of this would fit together, here is what should happen when a user types a value into the Celsius field in the temperature converter example:

  1. The browser fires a data-on-input event. The Parenscript client runtime POSTs to /action/converter/set-celsius with the new value.

  2. The server's action dispatch finds the session, retrieves the converter component, and calls handle-action with action :set-celsius.

  3. The action handler parses the value and writes it to the Celsius cell: (setf (fluxion.cells:cell-value celsius) v).

  4. The cell detects the value has changed. Within a transaction, it enqueues downstream dependents. (The C-to-F propagator's watcher is one of them.)

  5. The propagator fires within the transaction. It reads the Celsius value, computes Fahrenheit, and writes to the Fahrenheit cell. The F-to-C propagator sees that its computed output matches the current Celsius value, no change, propagation stops.

  6. The component's cell watchers trigger a re-render. The component produces new HTML.

  7. The new HTML is compared against the cached last render (dirty tracking). If it differs, a patch event is collected.

  8. The patch event is sent back in the HTTP response (for action-triggered updates) or pushed over the persistent SSE connection (for server-initiated updates).

  9. The browser's morph algorithm diffs the new HTML against the live DOM, updating only what changed. The user's cursor position in the input field is preserved.

The whole round-trip, from keystroke to visual update, would involve zero JavaScript written by the application developer. The reactive graph, the rendering, the diffing, and the state management all happen in Common Lisp.
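Steps 2 and 3 amount to a CLOS method dispatching on the action keyword. A hypothetical sketch, in which the class, the generic function signature, and the plain slot standing in for a Lattice cell are all illustrative:

```lisp
;; Illustrative component; Fluxion's defcomponent would generate this.
(defclass converter ()
  ((celsius :initform 0 :accessor converter-celsius)))

(defgeneric handle-action (component action value)
  (:documentation "Handle an action POSTed from the browser."))

(defmethod handle-action ((c converter) (action (eql :set-celsius)) value)
  ;; In Fluxion this write would hit a Lattice cell and kick off the
  ;; transactional propagation described above; a plain slot stands in.
  (setf (converter-celsius c) (parse-integer value)))

(let ((c (make-instance 'converter)))
  (handle-action c :set-celsius "100")  ; value as posted by the client
  (converter-celsius c))
;; => 100
```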

Thread Safety

The cell graph will be protected by a global recursive lock. All reads, writes, watcher registration, and transaction flushes would be serialized. This means you could safely write to cells from background threads (timers, monitors, server push loops) while the main thread processes HTTP requests.

(defvar *cell-lock*
  (bordeaux-threads:make-recursive-lock "fluxion-cell-lock"))

(defmacro with-cell-lock (&body body)
  `(bordeaux-threads:with-recursive-lock-held (*cell-lock*)
     ,@body))

The lock would be recursive so that watchers triggered during a write can safely read other cells. Transactions would hold the lock for their entire scope, meaning the flush is atomic: no other thread can observe intermediate states during a transaction flush.

For typical web application concurrency (tens to low hundreds of simultaneous sessions), a single global lock should be sufficient. The critical sections are short: a cell write, an equality check, and a queue insertion. The actual HTML rendering would happen outside the lock.

Why Common Lisp?

I keep getting asked "why Common Lisp for this?" so here is the honest answer. A few properties of the language line up unusually well with this problem:

  • Exact rational arithmetic. Bidirectional numeric propagators converge perfectly. (* 100 9/5) is exactly 180, with no floating-point rounding anywhere in the round-trip. This one matters more than you might think.
  • CLOS. The Common Lisp Object System gives you multiple dispatch, method combinations (:after, :before, :around), and metaclass protocols. Components would be CLOS classes. Actions would be specialised methods. A defcomponent macro could generate classes, cell setup, accessor functions, and render methods from a single form.
  • Macros. with-transaction, defaction, defcomponent would all be macros that generate the appropriate boilerplate. The macro system lets you provide a high-level DSL without sacrificing runtime performance.
  • Conditions and restarts. CL's condition system is more expressive than exceptions. Fluxion could use this for graceful error recovery in session reaping, action dispatch, and SSE connection management.
  • Parenscript. The browser runtime would be written in Parenscript, a Lisp-to-JavaScript compiler. The entire client would be authored in Lisp, compiled to a single JS file, and served as a static asset. No npm, no bundler, no build step beyond a single (build-client) call.
  • Spinneret. Server-side HTML generation would use Spinneret, which provides a Lisp DSL for HTML that integrates naturally with the rest of the codebase.
  • Image-based development. You can redefine components, actions, and even the reactive graph while the server is running. No restart. The development feedback loop would be very tight.

Related Work

  • Phoenix LiveView (Elixir). Server-rendered, WebSocket-based. Similar philosophy of server-owned state and HTML-over-the-wire, but uses a channel model rather than propagators for reactivity.
  • Hotwire/Turbo (Rails). HTML-over-the-wire with Turbo Streams. Focused on progressive enhancement rather than fine-grained reactivity.
  • HTMX. Extends HTML with attributes for AJAX, SSE, and WebSocket. Fluxion's planned data-* attribute approach would be similar in spirit, but the server-side reactive graph is the key differentiator.
  • Cells (Kenny Tilton). A Common Lisp reactive library that predates most modern FRP systems. Cells uses a dataflow model with lazy evaluation and integrity constraints. Lattice draws on some of the same ideas but follows the Radul/Sussman propagator model more directly.
  • Spritely Goblins (Christine Lemmer-Webber). An object-capability actor model in Guile Scheme. Different paradigm, but shares the Lisp aesthetic and the commitment to principled computation models.

Where Things Stand

Fluxion is in the early stages of development. The research is done, the design is taking shape, and I am working through the implementation. There is a lot left to figure out, and the code is not ready for anyone else to use yet. But I wanted to write this up now because the ideas are worth sharing even before the software is finished.

This project is a deliberate bet on ideas from computer science research rather than the conventions of mainstream web development. The propagator model is not new. Radul published it in 2009. Elliott and Hudak published Fran in 1997. But these ideas have not made it into web frameworks, and I think they should have by now.

I will publish the source at github.com/parenworks/Fluxion when it is ready.

References

  1. Elliott, C. and Hudak, P. (1997). Functional Reactive Animation. International Conference on Functional Programming (ICFP). ACM.
  2. Radul, A.A. (2009). Propagation Networks: A Flexible and Expressive Substrate for Computation. PhD dissertation, MIT. Supervised by Gerald Jay Sussman.
  3. Thompson, D. (2026). Functional reactive programming with propagators. FOSDEM 2026. Video.
  4. Radul, A.A. and Sussman, G.J. (2009). The Art of the Propagator. MIT CSAIL Technical Report.
  5. Hanson, C. and Sussman, G.J. (2021). Software Design for Flexibility. MIT Press.
  6. Czaplicki, E. (2016). A Farewell to FRP. elm-lang.org.
  7. Tilton, K. (2004). Cells: A Dataflow Extension to CLOS. International Lisp Conference.