John M. Myers^{I,a} and Hadi Madjid^{II,b}

960 Waltham St. Apt. 168, Lexington, MA 02421^{a},

309 Winthrop Terrace, Bedford, MA 01730^{b}.

jmartmyers@gmail.com^{I}, gailmadjid@comcast.net^{II}

(June 3, 2024)

###### Abstract

Although quantum states nicely express interference effects, outcomes of experimental trials show no states directly; they indicate properties of probability distributions for outcomes. Twenty years ago we proved categorically that probability distributions leave open a choice of quantum states, operators, and particles, resolvable only by a move beyond logic, which, inspired or not, can be characterized as a guess. Guesses link the inner lives of investigators to their explanations that objectively accord with measured data. Recognizing the inescapability of guesswork in physics leads to avenues of investigation, one of which is presented here.

We invert the quest for logical foundations of physics to display a physical foundation of logic and calculation, and we represent this foundation mathematically, in such a way as to show the shaping and re-shaping of calculation by guesswork. We draw on the interplay between guessing and computation as it works in digital contexts that include living organisms. Digital computation and communication depend on a type of synchronization that coordinates transitions among physically distinct conditions that express “digits.” This logical synchronization, known to engineers but neglected in physics, requires guesswork for its maintenance. By abstracting digital hardware, we model the structure of human thinking as logically synchronized computation, punctuated by guesses.

We adapt marked graphs to mathematically represent computation, and we represent guesses by unpredictable changes in the marked graphs. The marked graphs reveal a logical substructure to spatial and temporal navigation, with implications across physics and its applications to biology. By limiting our model to the logical aspect of communications and computations—leaving out energy, weight, shape, etc.—we display logical structure in relation to guesswork, applicable not just in electronics but also to the functioning of living organisms.

Keywords: guesswork, physical foundations of logic, synchronization, marked graphs, logical distance, basal cognition.

## 1 Introduction

One could always suppose that guesswork is essential to physics, but a proof is something else. Almost twenty years ago we proved that guesswork is necessary to bridge a logical gap between evidence and any quantum explanation of that evidence. Acknowledging guesswork leads physics away from a quest for final answers to a form of dialogue that expects and accepts surprises. Broad consequences include the following.

- 1.
Choosing an explanation is an imaginative act, subject to the need for revision, so there are no “final answers.”

- 2.
Because of their dependence on guesswork, scientific explanations, although testable, have something of the imaginative character of metaphors.

- 3.
The inner lives of physicists—unavailable to exterior, “objective” view—participate in and influence the world that physicists measure with their clocks.

- 4.
The source of any guess on which logical deductions depend is logically inexplicable.

^{1} André Malraux, Minister of Culture in France from 1958 to 1969, interviewed leading artists about their sources of inspiration. Their answers led Malraux to speak of their inspiration as coming from contact with what he called “the unknowable” [9, pp. 98, 159].

The recognition of guesswork as indispensable invites theoretical attention to experimental surprises. Here we direct our attention to surprises as reflected in the hardware of digital computation and communication.

Digital hardware exemplifies what we see as the physical foundation of logic and computation. As one does in theoretical physics, we want to represent this foundation mathematically. The very possibility of logic and mathematics is founded on the lumpiness found in matter, from pebbles as the root of the word calculus to the nucleotides strung like beads of a necklace in molecules of DNA. This lumpiness permits the “digital” in digital hardware.

At the hardware level, digital computation and communication are inseparable; one has digital networks. Like spoken language, the hardware of digital networks depends on properly timed transitions among physically distinct conditions. Below we show how timing in digital networks depends on a special local type of synchronization, distinct from that defined in special relativity. This logical synchronization—known to engineers but overlooked in theoretical physics—requires guesswork for its maintenance.

Analyzing the physical foundations of computation and the need for guesswork in logical synchronization has implications for several disciplines and for novel links among them.

- 1.
In physics, we show how communications work as a topological foundation of space and time, involving guesswork in the maintenance of necessary logical synchronization.

- 2.
In biology, avoiding any discussion of consciousness, we model basal cognition along with human thinking as computation, punctuated by unpredictable reprogramming.

- 3.
In discrete mathematics we define marked graphs that are adapted to express the communications and computations for the above points. These graphs allow the definition of logical distance as a precursor to physical time and distance.

### 1.1 Background

Pursuing a vision experienced by HM in 1985, we proved in 2005 that evidence, while it constrains explanations, necessarily leaves open an infinite range of conflicting explanations [2]. Thus, choosing an explanation requires an extra-logical act. I (JMM) call such an act a guess. What do I mean by “guess”? As an individual scientist, when I’m puzzled, diverse thoughts flit into my mind. Some are fleeting; others I pick up and use as starting points for doing something, and these I call guesses. For me, guessing contrasts with calculating. The logical gap between evidence and its explanations, calling for guesswork to link them, reveals an unpredictability far more drastic than quantum uncertainty, and beyond any fuzziness due to limited sample sizes.

Although it has important consequences, the proof has been largely overlooked. For one thing, verifying the proof requires knowledge omitted in introductory courses on quantum mechanics. In these, one learns how a quantum state and a measurement operator on a vector space imply certain probabilities of outcomes, but not about what is called an inverse problem: the problem of using quantum theory to explain measured data. The inverse problem is important when something surprising is found, as in the Stern–Gerlach experiment that led to half-integral spin [3].

In a limited way, the inverse problem is addressed in quantum decision theory [4, 5] and in quantum information theory [6]. However, one usually assumes a finite-dimensional vector space and that the measurement operators are known (this leads to what is called quantum tomography); but both the operators and the dimension are invisible to experiment, so the assumption is logically indefensible. Thus, solving an inverse problem requires determining a quantum state and the measurement operators from the probabilities of outcomes abstracted from experiments, without assuming a dimension of the vector space. We proved that this problem cannot have a unique solution, and indeed that infinitely many conflicting explanations fit whatever evidence is at hand, so that choosing among them is impossible without a guess. In an application, we produced a second explanation of probabilities involved in a quantum-cryptography protocol, revealing an unsuspected security vulnerability [7, 8].

### 1.2 Obstacle of global time

Appreciating logical synchronization is impeded by the widespread tacit assumption of “universal time,” as if the production of time and space coordinates for scientific and other purposes were irrelevant to theoretical physics.^{2} Setting aside this assumption allows attention to synchronization in digital communications, which is essential to time distribution, to computation, and indeed to all logical operations.

^{2} “An important task of the theoretical physicist lies in distinguishing between trivial and nontrivial discrepancies between theory and experiment” [1, p. 3].

To begin with, we notice how national and international time broadcasts depend on digital communication networks that construct time from local clocks whose rates must be adjusted. These adjustments of the clocks that make time depend on guesswork. For example, I set my clock by my phone; my phone tells the time, but how? It gets the time from the Internet, but where does the Internet get it? In the US, it gets it from Boulder, Colorado, where the National Institute of Standards and Technology (NIST) has clocks. While my clock tells the time, the clocks at NIST make time for the United States. NIST transmits readings of its clocks over the Internet, so that my phone can tell time. Why do I say “clocks” at NIST and not just “clock”? Like any machines, the clocks at NIST need maintenance, and sometimes break and are replaced, so NIST must use several clocks. The clocks at NIST do not agree exactly with each other. NIST adjusts the clock rates to limit their deviations from each other, and these adjustments require guesswork. NIST and other national metrology institutes participate in global networks of clocks, some on the ground, others on orbiting satellites, all undergoing rate adjustments, tuned by guesswork [10].

The fact is that time as we now generate it is dependent upon defined origins, a defined resonance in the cesium atom, interrogating electronics, induced biases, time scale algorithms, and random perturbations from the ideal. Hence, at a significant level, time—as man generates it by the best means available to him—is an artifact. Corollaries to this are that every clock disagrees with every other clock essentially always, and no clock keeps ideal or “true” time in an abstract sense except as we may choose to define it. [11]
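The steering of an ensemble of disagreeing clocks can be caricatured in a toy simulation; the rates, the gain, and the nudge-toward-the-mean rule are our invented illustration, not NIST's actual time-scale algorithm.

```python
# Toy ensemble-time sketch (our illustration, not a real time-scale
# algorithm): three clocks tick at slightly disagreeing rates, and each
# cycle every rate is nudged toward the ensemble mean.
clocks = [0.0, 0.0, 0.0]
rates = [1.0002, 0.9997, 1.0001]        # slightly disagreeing tick rates
for _ in range(100):
    for i in range(3):
        clocks[i] += rates[i]           # each clock advances at its own rate
    mean = sum(clocks) / len(clocks)    # the ensemble's "made" time
    # steer each rate against that clock's deviation from the mean
    rates = [r - 0.001 * (c - mean) for r, c in zip(rates, clocks)]
spread = max(clocks) - min(clocks)      # stays small, thanks to steering
```

Without the rate adjustments, the spread among the clocks would grow without bound; with them, it stays bounded, which is the point of the guess-tuned adjustments described above.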

### 1.3 Inseparability of computation and communication

The digital communication and computation that pervade today’s science depend on: (1) physically distinct conditions to distinguish between 0 and 1, (2) transitions between these conditions, (3) local timing ordered by local clocks that are synchronized differently from the usual way in physics, and (4) guesswork. How these work together in computer networks is one of the main topics of this report.

Computations involve the communication of inputs and outputs over networks, including the Internet. Going the other way, digital communication requires message processing, e.g. copying, addressing, searching, coding, and decoding, all of which are instances of computation. We elevate this back-and-forth dependence to the following principle of wide application.

###### Proposition 1.

Computation and communication are inseparable.

We will speak of neither separately, but instead bundle them together, speaking of computational networks. These are found not only in electronic computers but across all life. In section 2 we define computational networks embedded in cyclic environments that produce inputs and consume outputs, as in process control. Computational networks express two levels of unpredictability: (1) the inputs supplied by an environment are viewed as unpredictable, and (2) unforeseen happenings change one computational network into another.

Computational networks perform logical operations that work in coordinated rhythms. Without communication-guided adjustments, the rhythms of any two logical operations drift, as do any two clocks, so that phase deviations between them increase indefinitely. To limit phase deviations, computational networks depend on a type of synchronization that differs from that which Einstein made famous in special relativity. This logical synchronization maintains the necessary phase relations in the rhythms of logical operations. To picture logical synchronization, think of playing a game of catch, tossing a ball back and forth with another person: you cycle through phases of tossing and catching, and if you are looking the wrong way you don’t catch the ball. Coordinating the phase “ready to catch” with the arrival of the tossed ball exemplifies logical synchronization. As tangible examples of computational networks, we introduce “token games” in which you participate by moving tokens over a game board.

Although some contributions to phase deviations can be predicted, for others the statistical properties are non-stationary and unpredictable. To maintain logical synchronization, rate adjustments guided by guesswork are necessary.^{3} Phase deviations in logical synchronization limit the rate of information processing. The tighter the logical synchronization, the smaller the phase deviations, and so the more information can flow. Controlling drift in clocks is analogous to steering a car in a gusty crosswind: better control depends on better guesses.

^{3} For a pendulum clock, one adjusts the tick rate by changing the length of the pendulum by a small amount. Typically, a screw at the bottom of the pendulum is turned to raise or lower its center of gravity slightly.
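The crosswind analogy can be sketched as a small feedback loop; the gain, the noise scale, and the correction rule are our assumptions for illustration, not a model of any particular hardware.

```python
import random

# Caricature of guess-guided rate steering (our illustration): a local
# rhythm suffers a non-stationary rate error, and each cycle a correction
# is applied against the observed phase deviation, like countersteering
# in a gusty crosswind.
random.seed(0)                              # deterministic for this sketch
phase, rate_error, gain = 0.0, 0.003, 0.5
for _ in range(200):
    rate_error += random.gauss(0.0, 0.001)  # unpredictable "gusts" of drift
    phase += rate_error                     # the deviation accumulates...
    phase -= gain * phase                   # ...and is partly steered away
# With steering, |phase| stays small; without the correction line, it
# would grow without bound as rate_error accumulates.
```

The catch, as Proposition 2 below makes precise, is that the network cannot observe its own phase deviation from inside; the `phase` read-off in the loop stands in for an outside observer.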

We discuss only the logical aspect of computational networks; i.e., we say nothing about energy, weight, shape, visual or auditory output, etc. We discuss the physical mechanisms of electronic computation only to show their “lumpy” character, leaving open whether the lumps are small or large and whether they are lumps in frequencies of oscillations, in spatial extent, or in some other modality. Because it is focused exclusively on the logical aspect, our formulation of computational networks and their logical synchronization has applications not only to man-made systems but also to biology, including human thought. Rejecting the assumption that “physical” means “entirely explicable,” we represent human thought as computing interspersed with unforeseeable reprogrammings.

In section 3, we represent the logic of computational networks by means of marked graphs. The concept of logical distance is introduced and shown to depend not only on the graph but also on its family of markings, of which there may be more than one. Unpredictable changes in computation are expressed by unpredictable changes from one marked graph to another.

As discussed in section 4, logical synchronization and its dependence on extra-logical adjustment point to an alternative concept of time and space. In section 4.2, we show how measurements of computational networks that attempt to determine a temporal order of events sometimes lead to nonsense. An alternative “time” based on logical synchronization, rather than that defined in special relativity, carries forward the identification of “time” with communication that was begun by Einstein in 1905.

Section 5 pulls together the main results relevant to biology, and section 6 reflects on the main points. For example, we see biological evolution as mirroring the human scientific approach: processes involving computation and testing, punctuated by analogues of guesses. That discussion also draws together some points that appeared as fragments in previous sections.

## 2 Physics of computational networks

We want a concept of computing that is applicable to a person with paper and pencil, to a digital computer, or to a biochemical process in a living organism. For this, we extract some features from computer hardware—which, unlike software, has a simple form—to be described as a computational network. In terms of hardware, a computational network consists of interconnected logic gates, linked to an unpredictable environment.^{4} Each gate repeatedly performs one or another logical operation. Partitioning logical operations into sequences executed by logic gates imparts the stability needed for memory; for example, memory is often composed of cross-connected NAND gates that comprise flip-flops. As mentioned in section 1, we allow for widely different gate mechanisms, such as those that presumably occur in living organisms.

^{4} 1) We are interested in finite networks of gates, rather than in Turing computability. 2) Restrictions on networks to ensure proper functioning, including repeatability, will be stated below.

One can cartoon a computational network as a model railroad, with engines carrying messages over tracks from station to station. First, picture a circular track with several stations around it and a single engine. When the engine arrives, a station modifies the message carried by the engine, puts the modified message back on the engine, and sends the engine to the next station. Now picture a second circuit traversed by a second engine, with the two circuits sharing a station; the first engine to arrive at that station waits for the other engine to arrive, then the messages from both engines are processed together, and the station generates a message for each engine to be conveyed to subsequent stations on the two circuits.

As a cartoon for a computational network, the stations are logic gates, the tracks connect the gates, and the engines carrying messages become tokens carrying labels, with gates performing logical operations on token labels. At the finest level of description each token label is 0 or 1. Gates performing logical operations on 0s and 1s can be connected in a network that adds and multiplies numbers. Indeed, connected logical operations do all the computing that digital computers and computer networks do. We claim that this logical structure is found not only in computer hardware, but also in all living organisms. Below, we discuss the “ball-tossing” aspect of logical synchronization necessary for the logical operations of computational networks.

The physical mechanisms of the logic gates that perform logical operations have a peculiar property: whether electronic or biochemical, they depend on physically distinct conditions and on transitions among these distinct conditions. What is peculiar is that “distinct” is undefinable in any way that is objective in the sense of being observer-independent: “distinct” can mean distinct to some organism but not to others. In the transitions among distinct conditions, logical synchronization is necessary to ensure that gates operate on distinct conditions, rather than on an indistinct condition that must occur during transitions. Crucial to the capacity of gates for distinguishing among conditions are tolerances of deviations, so that tolerated irregularities do not cause errors. For example, in many digital computers, 0 is conveyed by a lower voltage while 1 is conveyed by a higher voltage, and to preclude errors due to unavoidable voltage deviations, the high and low voltages have tolerated ranges separated by a “dead band” [12].
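The tolerated ranges and the dead band can be sketched as a discriminator; the 0.8 V and 2.0 V thresholds are illustrative values of ours, not taken from [12].

```python
# Sketch of a dead-band discriminator for reading a bit from a voltage.
# The threshold values (0.8 V, 2.0 V) are our own illustrative choices.
def read_bit(voltage: float):
    if voltage <= 0.8:
        return 0        # within the tolerated "low" range: conveys 0
    if voltage >= 2.0:
        return 1        # within the tolerated "high" range: conveys 1
    return None         # dead band: an indistinct, mid-transition condition
```

The `None` case is exactly the indistinct condition that logical synchronization must keep the gates from sampling.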

### 2.1 Computing by logical operations in contact with unpredictable environments

The gates of a computational network physically perform logical operations associated with the mathematical logical connectives such as and, not, nor, and nand, where nand is the negation of the and connective. Nand is defined by the table of arguments and values in fig. 1. We emphasize the distinction between the physical performance of logical operations, which involves motion, and mathematical logical connectives, which can be written in tables that sit still on a page. To distinguish the operations from the connectives, we capitalize the operations as AND, NOT, NOR, NAND, etc.

Because all the logical operations can be defined as compounds of NAND, that operation plays a major role in what follows.^{5} Each input of any logical operation can be either of two distinct physical conditions, and so can the output. These conditions are represented by 0 and 1, as indicated in fig. 1.

^{5} How the AND, NOT, and NOR gates of computational networks are made from NAND gates (together with FORKs) is shown in fig. 21 in Appendix A.3.
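The table of fig. 1 can be written out in code; the function name and the encoding of the two distinct conditions as the integers 0 and 1 are our own sketch.

```python
# The nand connective of fig. 1 as a truth table (our sketch).
def nand(a: int, b: int) -> int:
    """Value is 0 only when both arguments are 1; otherwise 1."""
    return 0 if (a, b) == (1, 1) else 1

# All four argument pairs and their values:
table = {(a, b): nand(a, b) for a in (0, 1) for b in (0, 1)}
# table == {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```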

### 2.2 Gates and networks

While the gates mentioned so far deal only with 0’s and 1’s, we also use the term gate to indicate a mechanism that performs complex logical functions, as occurs in coarsened descriptions of logical operations (see Appendix A.2). We deliberately leave unspecified the various physical mechanisms that can serve as gates, and we name gates by the operations that they perform.

Whatever a computer chip can do can be done by a network of NAND gates wired together. We express the logic of wiring by another connective that we call fork, which as far as we can tell has been overlooked in formal logic. Figure 2 illustrates this connective along with the associated gate operation FORK.
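The claim that NAND plus FORK suffices can be illustrated with the standard identities; the particular constructions of fig. 21 may differ, so these compositions are a sketch of ours.

```python
# Sketch: compounding other operations from NAND plus FORK, using the
# standard identities (the appendix's exact constructions may differ).
def nand(a, b):
    return 0 if (a, b) == (1, 1) else 1

def fork(a):
    return (a, a)                   # one input duplicated onto two outputs

def not_(a):
    x, y = fork(a)                  # NOT a  =  (a NAND a), via a FORK
    return nand(x, y)

def and_(a, b):
    return not_(nand(a, b))         # AND    =  NOT of NAND

def nor(a, b):
    return and_(not_(a), not_(b))   # NOR    =  AND of the two negations
```

Note that `fork` is the only piece that copies a value, which is why it deserves standing as a connective of its own.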

### 2.3 Graphs and token games

Graphs composed of nodes connected by arrows are widely used in science. For example, fig. 3 illustrates the citric acid cycle that powers most life [13].

The concept of a computational network recognizes an unpredictable environment that generates inputs and receives outputs (for process-control computations, this back-and-forth communication with an environment is evident). We represent computational networks by graphs, in which each node either represents a gate or is labeled ENV to represent an unpredictable environment that issues inputs and consumes outputs. Each arrow points from one node to either the same or another node, representing a connection in which the output of one node serves as an input to that or another node. This use of graphs leads to two developments, one physical and the other mathematical. On the physical side, one can make a small computational network by using a drawing of the graph as a “game board” on which to either perform NAND and FORK operations by hand or make free choices to simulate the unpredictable ENV; tokens labeled by 0 or 1 are moved in what is called a token game [15]. On the mathematical side, representing computational networks by graphs leads to an elaboration of graph theory, i.e. marked graphs, which are discussed in section 3.

To play a token game “solitaire,” a single person performs the logical operations for all the nodes. However, we mainly consider multi-player token games with a player (human or otherwise) performing the role of each node. The rules of the game are as follows.

- 1.
A node is called fireable if there is a token on each arrow pointing into it. When a node is fireable, a firing rule calls for the player of the node to remove the tokens on all the arrows pointing into the node, and after that to place a token on each of the arrows pointing out of the node.

- 2.
Labeling rules (sometimes called coloring rules) specify how labels on output tokens depend on those on input tokens. For NAND and FORK, the labeling is specified by figs. 1 and 2, respectively. An ENV node has no labeling rule; the player has a free choice to simulate the unpredictable labeling by an environment.

To show the need for the after in the firing rule, fig. 4 shows snapshots of a token game before, after, and during a firing. Figure 4(a) shows a fireable node $v$, while fig. 4(c) shows the graph after a firing of $v$. Figure 4(d) shows what a snapshot in the midst of the move could show without the “after” in the firing rule: $v$ is no longer fireable but has not been completely fired, and $y$ has been made fireable. Node $y$ can fire before completion of the firing of $v$, resulting in a return to the situation of fig. 4(a), without node $w$ ever firing. Preventing such behavior necessitates the “after” condition in the firing rule.
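The firing rule can be sketched as a small solitaire simulation; the data layout (arrows as `(tail, head)` pairs, a marking as a dict of token counts) is our own choice, not from the text.

```python
# Minimal solitaire token-game sketch (our data layout).
def fireable(node, arrows, marking):
    """A node is fireable when every arrow pointing into it holds a token."""
    return all(marking[a] >= 1 for a in arrows if a[1] == node)

def fire(node, arrows, marking):
    """Remove a token from each in-arrow first; only after that, place a
    token on each out-arrow, honoring the "after" in the firing rule."""
    assert fireable(node, arrows, marking)
    m = dict(marking)
    for a in arrows:
        if a[1] == node:
            m[a] -= 1       # consume all input tokens...
    for a in arrows:
        if a[0] == node:
            m[a] += 1       # ...then produce the output tokens
    return m

# Two nodes on a circuit v -> w -> v, with one token waiting on (w, v):
arrows = [("v", "w"), ("w", "v")]
m0 = {("v", "w"): 0, ("w", "v"): 1}
m1 = fire("v", arrows, m0)   # the token moves to (v, w); now w is fireable
```

Performed as one atomic step per call, `fire` cannot exhibit the half-fired snapshot of fig. 4(d).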

Figure 5 shows a simple example of a token game.^{6}

^{6} The drawing in fig. 5 resembles a digital circuit diagram, from which we borrow the symbol for NAND.

In a second example, a (simplified) thermostat performs a repetitive computation to control the temperature of a room. As illustrated in fig. 6, the “Compute” node compares a desired setting written on a token supplied by the environment (ENV) with the temperature on a token from a digital thermometer in the environment; from the result of this comparison, the Compute node sends a command on a token to ENV to turn the furnace on or off. The comparison makes use of an arithmetic adder, illustrated in fig. 18 in Section A.2.
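The Compute node's labeling rule can be sketched as a single function; the names and the on/off rule are our illustration of the comparison, not the adder-based construction of fig. 18.

```python
# Hedged sketch of the thermostat's Compute node (cf. fig. 6): compare
# the ENV setting token with the thermometer token and return the command
# token sent back to ENV.  Names and rule are our own illustration.
def compute(setting: int, temperature: int) -> str:
    return "furnace_on" if temperature < setting else "furnace_off"
```

In token-game terms, `compute` is the labeling rule of one node; the firing rule, which waits for both the setting token and the temperature token to arrive, is what the rest of the section is about.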

### 2.4 Communication of tokens depends on logical synchronization

Clocks step the gates of electronic computers through phases, and a gate can receive a token in one phase but not in others [16, 17]. Maintaining adequate phasing of token arrivals at logic gates constitutes their logical synchronization, illustrated in fig. 7. Analogous phase management is to be looked for in biochemical cycles and the spiking of neurons.

The difference in hand positions of the two clocks indicates a propagation delay; e.g. the clock at the arrow head shows a later time than the sending clock at the arrow tail. As discussed in [16, 17], logical synchronization can be maintained between computers without them being set to standard time broadcasts, so long as their phases are aligned with propagation delays.

### 2.5 Unpredictable steering to maintain logical synchronization

Maintaining logical synchronization requires (1) detecting unpredictable deviations in the phasing of token arrivals, and (2) responding to those deviations by adjusting the rhythms of the computational networks to maintain the deviations within allowed tolerances.

###### Proposition 2.

The deviations of the phasing of tokens in computational networks are undetectable by those networks; they must be detected by something else [16, 17].

###### Proof.

The logical functioning of computational networks must be indifferent to deviations in the phasing of token arrivals within their allowed tolerances; hence such tolerated deviations leave no trace in a network’s logical behavior, and their detection must come from outside that behavior. ∎

Before it can be maintained, logical synchronization must be acquired so that communication can begin. It is often acquired quickly, but there is no deterministic limit on the time required to do so [18, Chaps.4 and 5].

### 2.6 Disorders of computing from failures of logical synchronization

If logical synchronization fails in the transmission of a token from one part of a network to another, the incoming token label can be ambiguous, leading to a computer crash. To reduce this hazard, designers interpose a “decision element,” based on the flip-flop shown in fig. 10. However, this decision element suffers from a half-life of indecision, as discussed in section 4.2. Thus, even when filtered by the decision element, the ambiguity can propagate through a FORK to be seen as 0 by one branch and as 1 by the other branch, thereby producing a logical conflict known as a glitch, which even filtered leads to a crash [19].

### 2.7 Preview of cognition in living organisms

In living organisms DNA codes are reproduced with astoundingly low error rates, matched otherwise only in digital computing.^{7} While living organisms also make graded movements that correspond to analog computation, without digital computation their DNA replication would drift enough to make life impossible [20]. Digital computation in living organisms confronts a physical problem—the need for logical synchronization—that human hardware engineers have solved (well enough) so that software engineers do not need to know about it [21]. Many models of living organisms employ logic gates, but to the best of our knowledge, we are the first to recognize the need for logical synchronization in biological computing.

^{7} For example, “the human mutation rate …is approximately 1 mutation/10^{10} nucleotides/cell division” [13].

Without discussing consciousness, we show that computation has a place in human control of physiological processes and in human thinking. In section 5 we model biological computation by computational networks linked to unpredictable environments, understanding that the networks themselves undergo unpredictable changes, and that maintaining logical synchronization requires something beyond digital computing. As represented by the marked graphs of section 3.5, changes in computational networks are pictured in corresponding unpredictable changes in the marked graphs. We suggest that logical distance as defined in section 3.3 serves as a physiological underpinning of navigation.

## 3 Representing computational networks by marked graphs

The mathematics of marked graphs as discussed in this section is necessary in order to define the important concept of logical distance. Logical distance provides a bridge from the evolution of nervous activity to the capacity for spatial and temporal distinctions on the part of living organisms. As described in section 4, marked graphs also sharpen the understanding of computational networks. See Appendix A for a tutorial on the relevant mathematics.

The marked graphs in this report express computational networks designed for repeated runs of a single computation. For this reason, our marked graphs differ from those that portray a general-purpose computer.

Marked graphs mathematically represent the logical aspect of computational networks. They express not quantities such as voltages and durations, but distinctions, such as that between the physical conditions that convey 0 or 1. To arrive at the marked graphs used herein, we specialize Petri nets [15] to what in their jargon are called T-nets, and we generalize them by allowing finite, directed multigraphs with loops (as defined in Appendix A).

Representing a computational network mathematically involves negotiating a gap between physical motion and mathematical formulas. As described in section 4, a computational network need have no moment in which all nodes are between firings. However, if the token game is played “solitaire,” then each possible sequence of moves has still moments during which no nodes are firing, and so one can take unambiguous snapshots of token positions and labels. The locations of tokens at a moment between firings are expressed mathematically by a marking, and a labeled marking augments a marking by specifying the label carried by each token. While labeled markings are essential for representing computation, the rhythms of logical operations can be expressed by unlabeled markings.

We restrict our marked graphs to markings that always make some node fireable and that never put more than one token on an arrow; these are called live and safe markings. More precisely, a marking is live if every node is fireable or can be made fireable through some sequence of firings. A live marking is safe if it puts no more than one token on any arrow and if no sequence of firings starting from that marking can put more than one token on any arrow [22]. Proofs of propositions in this section draw on Appendix A, which is based on [22].

The mathematical expression of the firing of a node is a relation between two live and safe labeled markings: $M$ and $M^{\prime}$ are related by the firing of node $v$ (1) if $M$ specifies a token for each in-arrow of node $v$ and no token for each out-arrow, if $M^{\prime}$ specifies no token for each in-arrow of $v$ and a token for each out-arrow of $v$, and if $M$ and $M^{\prime}$ agree about token presences on arrows not in contact with $v$; and (2) if the labels on the tokens on out-arrows of $v$ follow a labeling rule for node $v$. Mathematically, a firing is a relation; nothing moves. The “snapshots” $M$ and $M^{\prime}$ exemplify the mathematical representation of motion.

Live and safe markings are possible only for a graph that is covered by circuits.

###### Proposition 3.

The number of tokens on any circuit is the same for any pair of markings related by a sequence of firings.

Proposition 3 implies that the possible live and safe markings of a graph partition into equivalence classes, and following [22] we call these classes families. In discussing families, we pay no attention to labels. We then have the following proposition.

###### Proposition 4.

The number of tokens on any circuit is the same for all markings of any family [22].
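The circuit invariant behind Propositions 3 and 4 can be checked numerically on one small example (a sketch, not a proof); the representation of arrows and markings is our own.

```python
# Check, on one example, that firing preserves the token count on a
# directed circuit (cf. Proposition 3).  This is illustration, not proof.
def fire(node, arrows, marking):
    m = dict(marking)
    for a in arrows:
        if a[1] == node:
            m[a] -= 1       # consume the in-arrow tokens
    for a in arrows:
        if a[0] == node:
            m[a] += 1       # then produce the out-arrow tokens
    return m

# A three-node directed circuit u -> v -> w -> u carrying one token:
arrows = [("u", "v"), ("v", "w"), ("w", "u")]
m = {("u", "v"): 1, ("v", "w"): 0, ("w", "u"): 0}
counts = []
for node in ("v", "w", "u"):    # each node becomes fireable in turn
    m = fire(node, arrows, m)
    counts.append(sum(m.values()))
# counts == [1, 1, 1]: the circuit's token count never changes, and the
# marking returns to where it started after one trip around.
```

Intuitively, every firing on a circuit removes exactly one token from the circuit and puts exactly one back, which is why the count is an invariant of each family.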

### 3.1 Multiple families of markings

As illustrated in fig. 8, some graphs admit more than one family of markings. The three markings shown belong to three distinct families of markings, as follows from Proposition 4, because the token count of the outer (also the inner) circuit differs in the three cases. The simplest such marked graph is shown in fig. 9.

Remark: Essentially the same topology appears in the flip-flop shown in fig. 10, with its two live and safe families of markings and its two conflicting simple circuits, i.e., (A, ENV, B) and (B, ENV, A). An interesting question is whether digital designs can avoid multiple marking families, and if so, at what cost in the number of logical operations required.

In many cases, different families of markings on the same graph represent computations of different functions. The existence of graphs that support more than one marking family shows that a marking family conveys information not conveyed by the underlying graph. One can ask the following question: what conditions on a strong graph are necessary and sufficient for the existence of more than one family of markings on that graph? See the next subsection for part of the answer, which may be original, given that we have not seen it elsewhere.

### 3.2 Conditions for existence of multiple families of markings

The issue arises of characterizing graphs that have a unique marking family as opposed to those that have multiple marking families.

###### Definition 1.

If a strong graph has two simple directed circuits that lace through at least 3 nodes in opposite cyclic order, then we say that the strong graph has conflicting simple directed circuits.
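Definition 1 can be tested algorithmically. The sketch below (the enumeration and comparison routines are ours, written for small graphs rather than for scale) lists simple directed circuits and looks for a pair passing through at least three shared nodes in opposite cyclic order.

```python
def simple_circuits(adj):
    # Enumerate each simple directed circuit once, rooted at its
    # smallest node (a standard trick for small graphs).
    found = []
    def dfs(start, v, path, seen):
        for w in adj.get(v, []):
            if w == start and len(path) >= 2:
                found.append(list(path))
            elif w > start and w not in seen:
                dfs(start, w, path + [w], seen | {w})
    for s in sorted(adj):
        dfs(s, s, [s], {s})
    return found

def has_conflicting_circuits(adj):
    # Two circuits conflict when they share at least 3 nodes and those
    # nodes appear in opposite cyclic order: compare one circuit's order
    # against every rotation of the other's order, reversed.
    cs = simple_circuits(adj)
    for i in range(len(cs)):
        for j in range(i + 1, len(cs)):
            shared = set(cs[i]) & set(cs[j])
            if len(shared) < 3:
                continue
            o1 = [v for v in cs[i] if v in shared]
            o2 = [v for v in cs[j] if v in shared]
            rev = o2[::-1]
            if any(rev[k:] + rev[:k] == o1 for k in range(len(rev))):
                return True
    return False

both_ways = {'A': ['B', 'C'], 'B': ['C', 'A'], 'C': ['A', 'B']}  # two opposed triangles
one_way   = {'A': ['B'], 'B': ['C'], 'C': ['A']}                 # a single circuit
print(has_conflicting_circuits(both_ways), has_conflicting_circuits(one_way))
# -> True False
```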

###### Proposition 5.

If a strong graph has no conflicting simple directed circuits, then it admits only a single live and safe family of markings.

###### Proof.

By contradiction.

- 1.
Any strong graph admits at least one live and safe marking family. Suppose that a strong graph without conflicting simple directed circuits admits two or more distinct live and safe marking families, say $\mathcal{M}$ and $\mathcal{M}^{\prime}$. By Theorem 12 of [22], one marking family, say $\mathcal{M}$, must place more tokens on some simple directed circuit $C$ than does the other marking family $\mathcal{M}^{\prime}$. By Theorem 1 of [22], $\mathcal{M}^{\prime}$ must place at least one token on $C$, and so $\mathcal{M}$ must place at least two tokens on $C$.

- 2.
Each marking of $\mathcal{M}$ must place exactly one token on each of at least two distinct arrows of $C$. For a marking $M$ of $\mathcal{M}$ to be safe, each of those arrows must lie on a simple directed circuit (other than $C$) that holds only a single token. Consider two such single tokens on arrows $a$ and $b$, and suppose that arrow $a$ lies not only on $C$ but also on a simple directed circuit $C^{\prime}$, while arrow $b$ lies on $C^{\prime\prime}$. The two arrows cannot both lie on either $C^{\prime}$ or $C^{\prime\prime}$; otherwise those circuits would not be marked by a single token.

- 3.
The markings of a family of markings can be viewed as vertices of what is called a state-transition graph (we say vertices rather than nodes, to distinguish this state-transition graph from graphs for computational networks). Exactly when a marking $M_{2}$ can be reached from a marking $M_{1}$ by firing a node of the graph, the state-transition graph has an arrow from $M_{1}$ to $M_{2}$. Examples are shown in figs. 11 and 12.

For both families of markings, [22, Theorem 7] implies that the state-transition graphs have simple directed circuits through all their markings.

- 4.
Therefore, the state-transition graphs for each of the marking families $\mathcal{M}$ and $\mathcal{M}^{\prime}$ contain at least one simple directed circuit through all their markings. If $C$ is a circuit with only two nodes on a graph belonging to $\mathcal{M}$, marked by two tokens, the firing of either node of $C$ puts two tokens on a single arrow, in violation of the hypothesis that $\mathcal{M}$ is safe. Thus $C$ must contain three or more nodes. For $\mathcal{M}$ to be safe, every arrow of $C$ must lie not only on $C$ but also on a simple directed circuit marked by a single token. Deleting the arrow of $C$ from each of those circuits results in a set of paths that concatenate to form a simple directed circuit containing all the nodes of $C$, but in the opposite total cyclic order, thereby violating the condition of no conflicting simple directed circuits.

∎

Question: Does the converse of Proposition 5 hold; i.e., does every strong graph with conflicting simple directed circuits admit more than one family of markings? It does in a dozen or so cases that we have explored, but we do not know whether this is so in all cases.
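The question can be probed by brute force on small graphs. The sketch below (the graph, the encoding, and the pragmatic liveness test are all ours) enumerates every marking of a bidirectional four-node ring, a graph with conflicting simple directed circuits, keeps the live and safe markings, and groups them into families by mutual reachability. It finds three families, one for each possible token count on the outer circuit.

```python
from itertools import product

N = 4
# Outer arrows v -> v+1 and inner arrows v+1 -> v around a 4-ring.
IN  = {v: (((v - 1) % N, v, 'o'), ((v + 1) % N, v, 'i')) for v in range(N)}
OUT = {v: ((v, (v + 1) % N, 'o'), (v, (v - 1) % N, 'i')) for v in range(N)}
ARROWS = sorted({a for v in range(N) for a in IN[v] + OUT[v]})

def explore(m0):
    # Reachability search: a marking passes this pragmatic test if no
    # reachable marking deadlocks, no firing would double a token, and
    # every node fires somewhere in the reachable set.
    seen, stack, fired, ok = {m0}, [m0], set(), True
    while stack:
        m = stack.pop()
        fireable = [v for v in range(N) if set(IN[v]) <= m]
        ok = ok and bool(fireable)
        for v in fireable:
            if set(OUT[v]) & m:      # would put 2 tokens on one arrow
                ok = False
                continue
            fired.add(v)
            m2 = (m - set(IN[v])) | set(OUT[v])
            if m2 not in seen:
                seen.add(m2)
                stack.append(m2)
    return frozenset(seen), ok and fired == set(range(N))

families = {}                        # reachability closure -> markings
for bits in product((0, 1), repeat=len(ARROWS)):
    m0 = frozenset(a for a, b in zip(ARROWS, bits) if b)
    closure, good = explore(m0)
    if good:
        families.setdefault(closure, set()).add(m0)

outer = {(i, (i + 1) % N, 'o') for i in range(N)}
print(len(families))                                                  # -> 3
print(sorted(len(next(iter(f)) & outer) for f in families.values()))  # -> [1, 2, 3]
```

Here the closures computed by `explore` serve as family identifiers, since markings of one family are mutually reachable (Proposition 9 of the appendix).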

### 3.3 Logical distance

In special relativity, distance is defined as a count of clock ticks between sending a signal and receiving an echo from a distant point [23]. An analogous and purely mathematical, logical distance between nodes of a marked graph with a family of markings is the following (which we have not seen elsewhere).

###### Definition 2.

For any two nodes $v$ and $w$ of a graph with a live and safe marking family, the logical distance $D(v,w)$ is the number of times $v$ must fire in order for a label carried by a token on an input arrow of $v$ to reach $w$ and be returned as a label on a token on an input arrow to $v$. By convention, if $v$ and $w$ are identical, then $D(v,v)=0$. (Logical distance is distinct from path length in graphs.)

###### Proposition 6.

Suppose that $v$ and $w$ are any two nodes of a connected graph with a live and safe marking family. Consider a directed path from $v$ to $w$ and a directed path from $w$ to $v$, and count the tokens on each path; the sum of those two counts is invariant under node firings. Let $n$ be the least such sum over all such pairs of paths; then the logical distance $D(v,w)$ is $n$.

###### Proof.

For any connected graph with a live and safe marking, there are directed paths from $v$ to $w$ and from $w$ to $v$. First, consider the special case of a graph that is a simple directed circuit marked by a single token. (“Simple directed circuit” is defined in Section A.1.) The firing of a node of the circuit makes the next node fireable, so that a sequence of firings generates a sequence of markings in which the token circulates repeatedly around the nodes. By inspection, for any two nodes $v$ and $w$ of the circuit, the sum of tokens on the two paths is 1, which is $D(v,w)$.

The general case splits into cases (a) and (b). In case (a) the path from $v$ to $w$ has no arrows in common with the path from $w$ to $v$, so that the union of the two paths is a simple directed circuit, and the number of tokens on this circuit is independent of node firings. When $v$ fires, the return is carried by a label on the fired token that circulates around the simple directed circuit, so that for a marking with $n$ tokens on the circuit, starting from a token on an in-arrow of $v$, $n$ firings of $v$ produce the return, and the logical distance is $n$.

In case (b) the path from $v$ to $w$ has one or more arrows in common with the path from $w$ to $v$, so that the union of the two paths is not a directed circuit, as illustrated in the paths from $v_{0}$ to $v_{5}$ and $v_{5}$ to $v_{0}$ in fig. 13. However, the number of tokens on a directed path in a marked graph changes by $+1$ when the initial node fires and by $-1$ when the terminal node fires, and is invariant under firing of its other nodes. The terminal node of the directed path from $v$ to $w$ is the initial node of the path from $w$ to $v$ and vice versa, so that the sum of tokens on the two paths is invariant under all node firings, and the same logic as for case (a) shows that this sum is $D(v,w)$. Note that a token on an arrow that belongs to both paths will be counted twice.∎

In fig. 13, the token count of the path from $v_{0}$ to $v_{5}$ is 1, while the token count of the path from $v_{5}$ to $v_{0}$ is 2, so the logical distance is 3; the token on arrow $a_{2}$ is on both paths and therefore is counted twice. Figure 14 shows an added path that eliminates the need for the overlapping paths of fig. 13 and reduces the logical distance to 1.
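Proposition 6 turns logical distance into a computation over token counts on pairs of paths. A sketch (the graph and encoding are ours; the figures' graphs are not reproduced here) on a bidirectional four-node ring:

```python
N = 4
OUTER = [(i, (i + 1) % N) for i in range(N)]   # outer circuit arrows
INNER = [((i + 1) % N, i) for i in range(N)]   # inner circuit arrows
ARROWS = OUTER + INNER

def simple_paths(src, dst):
    # All simple directed paths from src to dst, as lists of arrows.
    paths = []
    def dfs(v, path, seen):
        if v == dst and path:
            paths.append(list(path))
            return
        for (a, b) in ARROWS:
            if a == v and b not in seen:
                dfs(b, path + [(a, b)], seen | {b})
    dfs(src, [], {src})
    return paths

def logical_distance(m, v, w):
    # Proposition 6: least, over pairs of paths v -> w and w -> v, of
    # the token sum on the two paths (shared arrows counted twice).
    return min(sum(m[a] for a in p) + sum(m[a] for a in q)
               for p in simple_paths(v, w) for q in simple_paths(w, v))

def marking(outer_edges):
    # One token per arrow pair: on the outer arrow for edges listed in
    # outer_edges, else on the inner arrow.
    m = {a: 0 for a in ARROWS}
    for i in range(N):
        m[OUTER[i] if i in outer_edges else INNER[i]] = 1
    return m

print(logical_distance(marking({0}), 0, 2))     # family 1-3 -> 1
print(logical_distance(marking({0, 2}), 0, 2))  # family 2-2 -> 2
```

The two calls also illustrate section 3.3.1: the same pair of nodes can sit at different logical distances under different marking families of the same graph.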

###### Proposition 7.

The logical distance $D$ is a metric on nodes of a graph with a given family of live and safe markings.

###### Proof.

For any two distinct nodes of a graph with a live and safe marking family, the definition of logical distance (Definition 2) guarantees a positive logical distance. By the argument in the proof of Proposition 6, the logical distance is invariant under swapping the two nodes, hence symmetric. The triangle inequality follows from concatenating paths, fulfilling the conditions of a metric.∎

#### 3.3.1 Logical distance depends on the family of markings

###### Proposition 8.

For a given graph, the logical distance between two nodes can depend on the family of markings.

###### Proof.

By example: By Proposition 4, the markings shown in fig. 8 belong to distinct families. The family that puts two tokens on the outer circuit leads to $D(v_{0},v_{3})=2$, while the family that puts three tokens on the outer circuit leads to $D(v_{0},v_{3})=3$.∎

### 3.4 Coarse representations with generalized nodes and labels

At the finest level of representation, the only nodes are NAND, FORK, and ENV, with token labels 0 or 1, and such graphs can be huge. To modularize them, fragments of a graph can be contracted to make a graph that expresses a coarser level of detail. (Contractions of graphs are a special case of graph morphisms.) In contrast to an unmarked graph, contracting a marked graph requires attention to the effect of the contraction on markings and their labels. After contracting a fragment to a node of a coarser graph, the labeling rule for the new node can define a complex logical operation on strings of bits rather than just single bits.
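Contraction of an unmarked fragment can be sketched as follows (the construction is ours; markings and labels are ignored here, and arrows internal to the fragment are dropped rather than kept as loops):

```python
def contract(arrows, fragment, new_node):
    # Rename every endpoint inside the fragment to new_node and drop
    # arrows that were wholly internal to the fragment.
    out = []
    for (s, t) in arrows:
        s2 = new_node if s in fragment else s
        t2 = new_node if t in fragment else t
        if not (s2 == new_node and t2 == new_node):
            out.append((s2, t2))
    return out

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(contract(square, {1, 2}, 'F'))  # -> [(0, 'F'), ('F', 3), (3, 0)]
```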

Sometimes we omit token labels, and fig.15 shows contractionswith token labels omitted.

Labeling rules for contractions can involve memory in addition to inputs and outputs. The allowance of multigraphs with loops, rather than the simple graphs usual for Petri nets, simplifies contractions. Section A.3 uses contractions in showing how marked graphs can express the “if-then-else” construct used in computer programs.

### 3.5 Representing the unpredictable evolution of computational networks

Computational networks can encounter unpredictable needs for revisions. For example, two computational networks can come into communication, turning into a single network; going the other way, a computational network can fission into two networks. Some revisions involve sequences of fusion and fission, both of which can be represented by before-and-after snapshots expressed by pairs of marked graphs. The simplest cases of (1) fusing nodes of different computational networks into a single node and (2) fission of a single node are shown in fig. 16.

Other changes are the injection of a node that splits an arrow, or the deletion of a node that separates two arrows along a path. Left for future exploration are morphisms of labeled marked graphs, which are complicated by multiple marking families.

## 4 Marked graphs and the local physics of computation

Taking computation as physical behavior that can be measured, we ask the following question. What evidence from observations of a computational network can determine the graph and the marking family that represent the computational network? A first thought regarding obtaining evidence could be to take snapshots of the computational network and relate those snapshots to markings of the graph, as in fig. 5. However, there is an obstacle, alluded to in section 3. Although a sequence of unambiguous snapshots can record a chess game, the absence of ambiguity depends on moments between moves when all the pieces sit still on the game board. In computational networks, while every individual gate performs sequentially, there need be no moment when all gates are between moves, as illustrated in fig. 17.

The center of fig. 17 shows a possible marking of the “Hex” graph, but the surrounding snapshots illustrate a play of a token game in which some node is always in the midst of firing, so that token positions never correspond to any marking, even though the node firings conform to the firing rule. The marked graph at the center of fig. 17 has three nodes that are concurrently fireable, which obstructs any correspondence between markings and snapshots of the physical network. One way to obtain a correspondence between such a computational network and the marked graph employs a suitably coordinated set of local snapshots, as we now discuss.

### 4.1 When global snapshots fail, local records work

When global snapshots fail, local records can be organized to determine the graph and the marking family for a computational network. For example, say each node makes a record each time it fires. The record includes a count of the node’s own firings and its own name. It writes these on the labels of the tokens that it produces, so that its own record of a firing can include the names and counts that come in the labels on its input tokens. Without the counts, local records (once collected) determine the graph but not the family of markings. With the counts, the marking family can be determined. For the example in fig. 17 the following display of records suffices to distinguish among the three families of markings shown in fig. 8. (When more than one arrow connects two nodes, the arrow names must also be included in the local records.)

| Marking family 2-4 | Marking family 3-3 |
| --- | --- |
| $V_{1}$#1; source $V_{2}$#0, $V_{0}$#0; target $V_{2}$, $V_{0}$ | $V_{1}$#1; source $V_{2}$#0, $V_{0}$#0; target $V_{2}$, $V_{0}$ |
| $V_{3}$#1; source $V_{2}$#0, $V_{4}$#0; target $V_{2}$, $V_{4}$ | $V_{3}$#1; source $V_{2}$#0, $V_{4}$#0; target $V_{2}$, $V_{4}$ |
| $V_{2}$#1; source $V_{1}$#1, $V_{3}$#1; target $V_{1}$, $V_{3}$ | $V_{2}$#1; source $V_{1}$#1, $V_{3}$#1; target $V_{1}$, $V_{3}$ |
| $V_{4}$#1; source $V_{3}$#1, $V_{5}$#0; target $V_{3}$, $V_{5}$ | $V_{4}$#1; source $V_{3}$#1, $V_{5}$#1; target $V_{3}$, $V_{5}$ |
| $V_{5}$#1; source $V_{4}$#1, $V_{0}$#0; target $V_{2}$, $V_{0}$ | $V_{5}$#1; source $V_{4}$#0, $V_{0}$#0; target $V_{2}$, $V_{0}$ |
| $V_{0}$#1; source $V_{5}$#1, $V_{1}$#1; target $V_{5}$, $V_{1}$ | $V_{0}$#1; source $V_{5}$#1, $V_{1}$#1; target $V_{5}$, $V_{1}$ |

The local records link markings only logically and not temporally. The logical links determine the families of markings, shown in the state-transition graphs of figs. 11 and 12.
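The record-keeping scheme of this subsection can be sketched in code (the graph, record layout, and the deterministic schedule are ours). Each node stamps its name and firing count on the tokens it emits and logs the stamps it consumes; the collected logs differ between marking families, so the counts reveal the family:

```python
N = 4
# Bidirectional 4-ring: outer arrows v -> v+1, inner arrows v+1 -> v.
IN  = {v: [((v - 1) % N, v, 'o'), ((v + 1) % N, v, 'i')] for v in range(N)}
OUT = {v: [(v, (v + 1) % N, 'o'), (v, (v - 1) % N, 'i')] for v in range(N)}

def run(outer_edges, rounds=8):
    label = {}          # arrow -> (producer name, producer firing count)
    for i in range(N):  # one token per edge pair, stamped by 'env'
        arrow = (i, (i + 1) % N, 'o') if i in outer_edges else ((i + 1) % N, i, 'i')
        label[arrow] = ('env', 0)
    count = {v: 0 for v in range(N)}
    records = []
    for _ in range(rounds):
        for v in range(N):  # a deterministic schedule, one of many fair ones
            if all(a in label for a in IN[v]) and not any(a in label for a in OUT[v]):
                consumed = [label.pop(a) for a in IN[v]]
                count[v] += 1
                records.append((v, count[v], consumed))  # the local record
                for a in OUT[v]:
                    label[a] = (v, count[v])
    return records

r13 = run({0})      # family with outer-circuit token count 1
r22 = run({0, 2})   # family with outer-circuit token count 2
print(r13 != r22)   # -> True: the stamped counts reveal the family
```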

### 4.2 Physical undecidability of temporal order of concurrent events

Deciding physically on the temporal order of concurrent events has known troubles that challenge the very concept of a time coordinate valid over a computational network. For example, in the computational network illustrated in fig. 17, there can be three nodes firing more or less at once, corresponding to a marking with these nodes concurrently fireable. Determining temporal order is then not only a race situation of “who comes in first” but a three-way race. The following points are shown in [17]:

- 1.
Physically measuring the order of occurrence of three events $A$, $B$, and $C$—such as three node firings—requires measuring pairwise orders, i.e., between $A$ and $B$, between $A$ and $C$, and between $B$ and $C$.

- 2.
Because of uncertainty in pairwise decisions, attempts to assign temporal order to three concurrent firings can violate the transitivity of an order relation by outcomes of $A$ before $B$, $B$ before $C$, but $C$ before $A$ rather than $A$ before $C$.

Something else physical impinges on deciding temporal order, namely the “time required to decide.” The mechanical and electronic devices used to decide “which came first” depend on balancing one effect against another. These devices can be built so that eventually a decision emerges, but balancing can linger indefinitely, like a tossed coin that lands on edge, lingering, before it eventually falls one way or the other [19]. In the statistics of trials of these devices one sees a half-life of indecision, meaning that the probability of lingering indecision decreases by half after every successive elapse of the half-life. An example of a measured half-life is described in [2]. This is not a new result, but a retrieval. In a footnote to his 1905 paper introducing special relativity, Einstein wrote of the thin ice to be crossed to connect his concept of time with experience:

The inexactness that lurks in the concept of thesimultaneity of two events at (approximately) the same placemust be bridged by an abstraction that will not be discussed here (our translation) [23].
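The half-life of indecision can be illustrated with a toy statistical model (ours, not the measurement model of [2] or [19]): if an arbiter resolves with a fixed probability per tick, the fraction of trials still undecided halves after each elapse of the half-life.

```python
import math
import random

random.seed(0)
q = 0.1                                      # per-tick chance the balance tips
half_life = math.log(0.5) / math.log(1 - q)  # ~6.58 ticks for this q

def decision_time():
    # Ticks of lingering before the device settles one way or the other.
    t = 0
    while random.random() >= q:
        t += 1
    return t

times = [decision_time() for _ in range(20000)]

def undecided_fraction(t):
    return sum(1 for x in times if x >= t) / len(times)

# After about one half-life, roughly half the trials remain undecided.
print(undecided_fraction(7))
```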

## 5 Cognition in living organisms

We propose to view cognition in all life, from the basal cognition of bacteria to human thinking, as involving both computation and something unpredictable, beyond computation. Where we say computation, many authors say “information processing.” The term ‘information’ acquires its scientific precision from Shannon’s theory [24], according to which information is conveyed over channels, which we represent by the arrows of marked graphs. Information theory is silent about the construction, maintenance, and dissolution of channels, confining itself to their capacity to convey information in the face of noise. From our point of view, a computational network entails a network of channels. The part of cognition that is beyond computation corresponds to the unpredictable construction, maintenance, and dissolution of channels. As an example of basal cognition in the slime mold Physarum polycephalum, we point to the information conveyed in cytoplasm flow through channels formed of actin and myosin in oscillatory interaction [25]. Actin and myosin form tubular networks that are in a continual flux of assembly and disassembly, represented in the language of marked graphs by unpredictable changes in these graphs.

We contrast computational networks with the modeling of brains, which is a long-standing topic going back to McCulloch and Pitts [26] and continuing to the present [27]. Rather than modeling brains, we primarily model computation itself, whether done by electronics or by a living organism, as in “lumpy” biochemistry, suggested long ago by Schrödinger for the transmission of genetic information [20]. As meant herein, the elements of computational networks are not anatomical elements but logical elements. We avoid discussing how logical elements relate to anatomy.

Computational networks are free of any assumption of central clocking, found in most computers but absent in living organisms. Instead, as discussed in sections 2.4 and 2.5, computational networks depend on logical synchronization, maintained by something beyond their logic, calling for attention in neuroscience to logical synchronization. The modeling of computational networks benefits from their mathematical representation by marked graphs subject to extra-logical revisions. The marked graphs enable two technical results: (1) the existence of multiple families of markings, implying multiple modes of operation for computational networks, as discussed in sections 3.1 and 3.2, and (2) the concept of logical distance, defined in section 3.3.

### 5.1 Modes of operation in computational networks

A description of logic gates and their interconnections covers the static structure of a computational network, but more is needed to describe its function. Most computational networks can behave in any of various modes when they are first established, even when only a single mode of behavior is desired. An analogy is waking up and being momentarily disoriented: seeing things “wrongly.” While graphs without markings are used widely in biology, it is only marked graphs that can distinguish among multiple modes of operation.

For example, multiple modes of operation are possible for a computational network corresponding to figure 11.3 of Rhythms of the Brain [28, p. 286], captioned “Multiple excitatory glutamatergic loops in the hippocampal formation and associated structures.” That figure represents a part of a computational network that contains conflicting simple directed circuits (defined in Section A.1). Inspection shows that that computational network admits more than one mode of operation, which implies multiple possible rhythms of node firings.

### 5.2 Logical distance in biological computing

The marked graphs suggest a biological evolution that allows people to develop a concept of distance. In physics, distance is defined by the time required for to-and-fro communication. For example, radar measures distance by the time from sending a signal to receiving an echo [29], and this is the electric version of what bats and porpoises do with sound. As described in section 3.3, an analogous logical distance is the number of firings of a node that must occur for a label produced by the node of a computational network to circulate to a distant node and come back to the sender. We conjecture that in animal physiology, logical distance is essential to the capacity to navigate and to the evolution of the concepts of time and distance in people.

### 5.3 Logical synchronization in human conversation

I (JMM) like logical synchronization as a metaphor for “connecting” in human conversation. Sometimes a friend and I are “synched”: we hear each other. At other times, in some conversations on Zoom, no one hears what anyone else is saying. In both digital technology and in human conversation, the “clicking into synchrony” that makes communication possible comes when it comes; it is unpredictable and can’t be hurried (although the digital time scale is nanoseconds and the human time scale is something else).

I picture my own thoughts as comprising fragments of computational networks in a flux of unpredictable modifications and interactions, coming into and out of logical synchronization. As emphasized in section 4.2, there is no universal time step to the rhythms of logical operations and their modifications. At unpredictable moments, fragments of networks of thought can meet, fuse with one another, possibly to fission later, responding sometimes to spontaneous inner life and other times to external stimuli.

## 6 Discussion

The proof of guesswork brings the workbench of experiment out from the shadow of theory. This renewed respect for experimental practice draws theoretical attention to the physical clocks that make time: to tick closely together, they need adjustments guided by guesswork that anticipates unpredictable deviations. Attention to the hardware of digital computation and communication merges these into computational networks, which depend on logical synchronization, maintained with the help of adjustments, again guided by guesswork that anticipates unpredictable deviations.

A salient feature of electronic digital hardware is its dependence on physically distinct conditions, interspersed with transitions among these conditions. It is the handling of distinctions that makes necessary the logical synchronization of computational networks. The recognition of a physics of distinctions leads to the representation of computational networks in the mathematics of graphs with live and safe marking families. This representation reminds one that contrary to a linear “begin–end” view of computation, computing is circulatory in a cycle of change, test, and debug, and the proof elevates debug to a necessity. By recognizing the unpredictability of changes, this cycle is a cartoon of the scientific approach, and the same cartoon speaks for biological evolution.

Our formulation of computational networks is deliberately restricted to their logical aspect, and this makes computational networks and their unpredictable revisions suitable for characterizing biological intelligence.

Computational networks are physical mechanisms designed for logical operations represented mathematically by marked graphs. The physical mechanisms must move, but the mathematically defined marked graphs, like any formulas, sit still on the page. A gap between movement and stillness has to be bridged by interpretation beyond the expressive power of mathematics. That interpretation allows a succession of snapshots to evoke the seeing of motion. A point worth emphasizing is the demonstration in section 4 of a computational network that admits of no sequence of unambiguous global snapshots but can be related to a marked graph by local snapshots.

Guesses are sometimes private to a person, but they also occur in social contexts, which is another topic entirely. Of the guesses that shape my life, I (JMM) borrow many from other people, living or dead. I associate with guesses a role assigned by D. W. Winnicott to “illusory experience” as a “natural root of grouping among human beings” [30, Chapter 1]. From our proof of the necessity of guesswork, what is fairly called illusory is not the difference among guesses that shape different groups, but the assumption that any one guess can have an absolute claim to being right.

## Acknowledgments

We learned about marked graphs from Anatol W. Holt and Frederick Commoner, and draw extensively on [22]. We are indebted to the late C. A. Petri for spurring us to think hard about clocks in relation to logic. We are indebted to Samuel J. Lomonaco, Jr. for helpful discussions and especially for pointing us to multi-graphs. We thank Kenneth A. Augustyn for substantial help in organizing the paper. Thanks to David Mumford, Marcus Appleby, and Johann Summhammer for comments on an earlier draft. We are grateful to György Buzsáki for explaining one of his graphs of neural connections. We thank Stefan Haar for introducing us to partial cyclic orders and for insightful comments.

## Appendix A Appendix: Mathematics of live and safe marked graphs

Most of what we have to say about marked-graph mathematics is borrowed from [22], in which a marked graph is defined as a finite multi-graph (without loops) along with markings related by a firing rule. We augment those marked graphs by labeling tokens according to the labeling rules for NAND and FORK, as in figs. 1 and 2 (labels on out-arrows of ENV are unpredictable). (Marked graphs with labels are a specialization of colored Petri nets [15], except that we allow multi-graphs with loops.) In addition we make the following two changes.

- 1.
We allow loops in multi-graphs, and we claim that the proofs in [22] carry over to multi-graphs with loops.

- 2.
In emphasizing “snapshots,” we explicitly flag the issue of associating timeless mathematical propositions with the physical motion of logical operations.

### A.1 Glossary and properties of marked graphs

The definitions of terms for graphs vary among authors. Herein we use terms adapted mostly from [31].

###### Definition 3.

A graph consists of a finite set of elements that we call nodes and a finite set of elements that we call arrows, along with a function that assigns to each arrow an ordered pair of nodes (as illustrated in preceding figures). (The words node and arrow evoke experiential associations. Formally, a graph consists of a pair of finite sets and a function that assigns to each element of the first set an ordered pair of elements of the second set. However, that is more formal than we are prepared to be.)

We say that an arrow connects the first node of its pair to the second node. The first node is the source of the arrow, and the second node is the target of the arrow. An arrow is an out-arrow of its source node and an in-arrow of its target node. The source and target nodes of an arrow need not be distinct, in which case the arrow is a loop. More than one arrow can connect a given pair of nodes. In the jargon, our graphs are “finite, directed multi-graphs with loops.”

A node that is a source or a target (or both) of an arrow is incident with the arrow.

A directed path in a graph is a sequence of arrows joined head to tail at nodes, with no repeated arrows.

A simple directed path in a graph is a directed path in which all nodes are distinct.

A circuit is a closed directed path (which can be a loop).

A simple directed circuit is a closed simple directed path.

A graph is strong if it contains a directed path from every node to every other node.

An induced subgraph of a graph $G$ is any graph consisting of a subset of nodes of $G$ and all the arrows of $G$ that connect pairs of those nodes.

We call a simple path from a node $v_{1}$ to a node $v_{2}$ a 1-1 path if (a) it consists of a single arrow or (b) its nodes other than its end points each have one in-arrow and one out-arrow.

We call a node with one in-arrow and one out-arrow a 1-1 node.

###### Definition 4.

An unlabeled marking of a graph is an assignment of a (non-negative) whole number, called the token number, to each arrow. In this report, that number is either 0 or 1. A labeled marking accompanies an assignment of 1 (for token presence) by a label consisting of a string of 0s and 1s, possibly just a single 0 or 1.

A node is fireable in a marking if the token number on each in-arrow of the node is at least 1.

For an unlabeled marked graph, a firing of a node is a relation between two markings: markings $M$ and $M^{\prime}$ are related by the firing of node $v$ if the token number of $M^{\prime}$ for each in-arrow of node $v$ is 1 lower than the corresponding token number for $M$, and if the token number of $M^{\prime}$ for each out-arrow of node $v$ is 1 higher than the corresponding token number for $M$.

A marking is called live if every node is either fireable, or can be made so through some sequence of firings.

###### Definition 5.

A graph is strong if it has a directed path from every node to everyother node.

From [22, Theorem 12] we have the following proposition.

###### Proposition 9.

If a live marking $M^{\prime}$ of a strong graph can produce $M$ (through a sequence of firings), then $M$ can produce $M^{\prime}$.

Hence, “live markings of a strongly connected graph partition into equivalence classes. … Let us refer to each equivalence class as a family” [22, Thm. 12]. It follows that any marking of a family can be produced from any other by some sequence of firings.

We call the sum of token numbers over the arrows of a circuit the token count of the circuit. Because the firing of a node belonging to a circuit balances the decrease in the token number on its in-arrow on the circuit with the increase in the token number on its out-arrow on the circuit, we have the following proposition.

###### Proposition 10.

The token count of a circuit is the same for markings related by firings, and hence the same for all markings of any family [22].

To use marked graphs to represent interconnected logical operations, one must restrict the graphs and the markings to avoid piling up more than one token on any arrow; that means requiring safe markings.

###### Definition 6.

A marking is called safe if no arrow has a token number greater than 1, and if no sequence of firings can lead to a token number greater than 1 on any arrow.

To represent computing mathematically, we invoke strong graphs with live and safe families of markings. Only for strong graphs are live and safe markings possible. (Among graphs that are connected in the weak sense that ignores arrow directions, a graph can have a live and safe marking if and only if it is strongly connected.)
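The restriction to strong graphs can be checked by brute force on tiny examples (the encoding and the pragmatic test are ours): a two-node circuit admits a live and safe marking, while a one-way chain admits none, because its source node, having no in-arrows, can fire unboundedly and floods its out-arrow.

```python
def live_and_safe(nodes, arrows, m0):
    # Explore every reachable marking; reject on a deadlock or on a
    # doubled token.  (A pragmatic test that matches the definitions
    # for these tiny graphs.)
    start = frozenset(m0.items())
    seen, stack, fired = {start}, [start], set()
    while stack:
        m = dict(stack.pop())
        fireable = [v for v in nodes
                    if all(m[a] >= 1 for a in arrows if a[1] == v)]
        if not fireable:
            return False                 # deadlock: the marking is not live
        for v in fireable:
            m2 = dict(m)
            for a in arrows:
                if a[1] == v: m2[a] -= 1
                if a[0] == v: m2[a] += 1
            if any(t > 1 for t in m2.values()):
                return False             # a doubled token: not safe
            fired.add(v)
            k = frozenset(m2.items())
            if k not in seen:
                seen.add(k)
                stack.append(k)
    return fired == set(nodes)           # every node must get to fire

circuit = [('v', 'w'), ('w', 'v')]
chain   = [('v', 'w')]
print(live_and_safe(['v', 'w'], circuit, {('v', 'w'): 1, ('w', 'v'): 0}))  # -> True
print(live_and_safe(['v', 'w'], chain, {('v', 'w'): 1}))                   # -> False
```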

###### Definition 7.

If, for a marking $M$, two or more nodes are fireable, then we say those nodes are concurrently fireable in $M$. (Defining concurrency for marked graphs is simpler than for Petri nets.)

Remark: The token game corresponding to a marked graph with a family of live and safe markings can be played solitaire to generate a repeating cycle of firings. In such a cycle each node of the graph fires once. For graphs on which concurrent firings are possible, there are multiple such cycles of firings. This corresponds to a partial cyclic order as defined by Stehr [32], but Stehr’s definition does not always distinguish the cyclic orders corresponding to distinct families of markings, nor does the somewhat different definition by Haar [33].
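The solitaire token game can be sketched as follows (the graph, marking, and greedy schedule are ours): on a live and safe marking, repeatedly firing any enabled node yields a cycle in which each node fires exactly once and the marking recurs.

```python
N = 4
# Bidirectional 4-ring: outer arrows v -> v+1, inner arrows v+1 -> v.
IN  = {v: [((v - 1) % N, v, 'o'), ((v + 1) % N, v, 'i')] for v in range(N)}
OUT = {v: [(v, (v + 1) % N, 'o'), (v, (v - 1) % N, 'i')] for v in range(N)}

# A live and safe marking: one token per arrow pair.
start = {(0, 1, 'o'), (2, 1, 'i'), (3, 2, 'i'), (0, 3, 'i')}
m, order = set(start), []
while len(order) < N:
    # Greedily pick a node whose in-arrows are full and out-arrows empty.
    v = next(v for v in range(N)
             if set(IN[v]) <= m and not (set(OUT[v]) & m))
    m = (m - set(IN[v])) | set(OUT[v])
    order.append(v)
print(order, m == start)   # -> [1, 2, 3, 0] True
```

After each node has fired once the marking recurs, so the schedule repeats as a cycle, as the remark above describes.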

### A.2 Fragments of graphs for addition, illustrating graph contractions

Numerical comparisons (as in the thermostat of fig. 6) involve addition. Figure 18 shows addition performed by gates [34], and it illustrates coarse descriptions using contracted fragments of graphs.

### A.3 Program control in marked graphs

While our purpose is not to express general-purpose computers, this can be done by marked graphs that have paths of token propagation under program control. For instance, fig. 19(a) pictures a contracted fragment of a graph representing a switching element controlled by input c. If c is 0, then the token label of $X$ goes to $A$ and the token label of $Y$ goes to $B$, while if c is 1, then the token label of $X$ goes to $B$ and the token label of $Y$ goes to $A$. Figure 19(b) shows the switching element as an uncontracted fragment of a graph.
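Abstracting away the graph fragments, the routing behavior of the switching element can be sketched as a function of the control bit (the function `switch` and its signature are our illustration, not the paper's notation).

```python
# Sketch of the switching element of fig. 19 as a routing of token labels.

def switch(c, x_label, y_label):
    """Return the labels delivered to outputs (A, B) under control bit c."""
    if c == 0:
        return x_label, y_label   # A receives X's label, B receives Y's
    else:
        return y_label, x_label   # A receives Y's label, B receives X's

assert switch(0, "x", "y") == ("x", "y")
assert switch(1, "x", "y") == ("y", "x")
```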

Figure 20 shows how the input ${\bf c}$ from the environment controls the routing of other inputs x and y through a computation.

### A.4 Counting by variation in token labels

Asymptotically, all nodes fire equally frequently, but token labels allow the expression of rhythms of bit values. For example, fig. 21 shows a marked graph in which node $V_{7}$ produces a 1 bit every third cycle and a 0 bit on the two cycles in between.
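One way to sketch the rhythm of fig. 21 (the implementation is ours; only the node name $V_{7}$ and the 1-0-0 pattern come from the paper) is a ring of three token labels that the node reads and recirculates on each cycle.

```python
# Sketch of fig. 21's rhythm: three labels circulating on a 3-arrow circuit.
from collections import deque

ring = deque([1, 0, 0])        # token labels on the circuit feeding V7

def v7_output():
    """Each cycle, V7 reads the label at the head of the ring and recirculates it."""
    bit = ring[0]
    ring.rotate(-1)
    return bit

assert [v7_output() for _ in range(6)] == [1, 0, 0, 1, 0, 0]
```

Every node still fires once per cycle; only the labels vary, which is what lets an equal-frequency firing schedule carry an unequal rhythm of bit values.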

## References

- [1] Phillip M. Morse and Herman Feshbach, Methods of Theoretical Physics (McGraw-Hill, New York, 1953).
- [2] F. Hadi Madjid and John M. Myers, “Matched detectors as definers of force,” Ann. Physics 319, 251–273 (2005); https://arxiv.org/abs/quant-ph/0404113; preceded by J. M. Myers and F. H. Madjid, “A proof that measured data and equations of quantum mechanics can be linked only by guesswork,” in S. J. Lomonaco Jr. and H. E. Brandt (Eds.), Quantum Computation and Information, Contemporary Mathematics Series, vol. 305, American Mathematical Society, Providence (2002), pp. 221–244; https://arxiv.org/abs/quant-ph/0003144.
- [3] Walther Gerlach and Otto Stern, “Der experimentelle Nachweis der Richtungsquantelung im Magnetfeld,” Zeitschrift für Physik 9 (1), 349–352 (1922). https://link.springer.com/article/10.1007/BF01326983
- [4] Carl W. Helstrom, Quantum Detection and Estimation Theory (Academic Press, New York, 1976).
- [5] Alexander Holevo, Probabilistic and Statistical Aspects of Quantum Theory (North-Holland Publishing Co., Amsterdam, 1982).
- [6] Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, UK, 2000).
- [7] J. M. Myers and F. H. Madjid, “Ambiguity in quantum-theoretical descriptions of experiments,” in K. Mahdavi and D. Koslover, eds., Advances in Quantum Computation, Contemporary Mathematics 482, 107–123 (American Mathematical Society, Providence, RI, 2009); https://arxiv.org/abs/1409.5678
- [8] John M. Myers, “Conditional probabilities and density operators in quantum modeling,” Found. Phys. 36, 1012–1035 (2006). https://doi.org/10.1007/s10701-006-9053-0
- [9] André Malraux, Picasso’s Mask (Holt, Rinehart and Winston, New York, 1976).
- [10] NIST, Time and Frequency Division, “Time realization” 2023. https://www.nist.gov/pml/time-and-frequency-division/time-realization
- [11] D. W. Allan, “Time and frequency (time-domain) characterization, estimation, and prediction of precision clocks and oscillators,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, UFFC-34, 647–654 (1987).
- [12]Wikipedia. 2024. “Logic level,” https://en.wikipedia.org/wiki/Logic_level
- [13] Bruce Alberts et al., The Molecular Biology of the Cell 6th ed. (Garland Science, New York 2015).
- [14] Wikipedia. 2023. “Citric acid cycle.” https://en.wikipedia.org/wiki/Citric_acid_cycle
- [15] Carl Adam Petri, “Nets, time and space,” Theor. Comput. Sci. 153, 3–48 (1996).
- [16] John M. Myers and F. Hadi Madjid, “Distinguishing between evidence and its explanations in the steering of atomic clocks,” Annals of Physics 350, 29–49 (2014); https://arxiv.org/abs/1407.8020.
- [17] John M. Myers and F. Hadi Madjid, “Synchronization of symbols as the construction of times and places,” Meas. Sci. Technol. 31, 025106 (2020) (open access). https://doi.org/10.1088/1361-6501/ab50dc
- [18] Heinrich Meyr and Gerd Ascheid, Synchronization inDigital Communications (Wiley, New York, 1990).
- [19] T. J. Chaney and C. E. Molnar, “Anomalous Behavior of Synchronizer and Arbiter Circuits,” IEEE Trans. Comput. C-22, 421–422 (1973). https://doi.org/10.1109/T-C.1973.223730; B. Cheney and R. Savara, “Metastability in SCFL,” IEEE Gallium Arsenide Integrated Circuit Symposium, 17th Annual Technical Digest (1995); https://ieeexplore.ieee.org/iel3/4051/11605/00529020.pdf
- [20] Erwin Schrödinger, What is Life? (Cambridge University Press, Cambridge, UK, 1944).
- [21] Kenneth A. Augustyn, personal communication, 2024.
- [22] Frederick Commoner, Anatol W. Holt, S. Even, and A. Pnueli, “Marked Directed Graphs,” Journal of Computer and System Sciences 5, 511–523 (1971).
- [23] Albert Einstein, “Zur Elektrodynamik bewegter Körper,” Annalen der Physik 17, 891–921 (1905).
- [24] Claude E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal 27, 379–423, 623–656 (July, October 1948).
- [25] Chris R. Reid, “Thoughts from the forest floor: a review of cognition in the slime mould Physarum polycephalum,” Animal Cognition 26, 1783–1797 (2023). https://doi.org/10.1007/s10071-023-01782-1
- [26]Warren S. McCulloch and Walter Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bull. Math. Biophys., 5, 115–133 (1943).
- [27]Nikolaus Kriegeskorte, Pamela K. Douglas, “Cognitive computational neuroscience.” Nat Neurosci 21, 1148–1160 (2018). https://doi.org/10.1038/s41593-018-0210-5
- [28] György Buzsáki, Rhythms of the Brain, (Oxford University Press, New York, 2006).
- [29] Merrill L. Skolnik, Introduction to Radar Systems (McGraw-Hill, New York, 1980).
- [30] D. W. Winnicott, Playing and Reality (Routledge Classics, London and New York, 2005 [1971]).
- [31] Robert Sedgewick and Kevin Wayne, Algorithms, 4th ed. (Addison-Wesley, Reading, MA, 2011), Chap. 4.2.
- [32] Mark-Oliver Stehr, “Thinking in cycles,” in: J. Desel and M. Silva (eds), Proc. 19th ICATPN, LNCS 1420:205–225, Springer 1998.
- [33] Stefan Haar, “Cyclic ordering through partial orders,” Journal of Multiple-Valued Logic and Soft Computing 27 (2–3), 209–228 (2016). http://www.oldcitypublishing.com/journals/mvlsc-home/mvlsc-issue-contents/mvlsc-volume-27-number-2-3-2016/.
- [34] Wikipedia. 2020. “Adder (electronics).” Retrieved from https://en.wikipedia.org/w/index.php?title=Adder_(electronics)&oldid=975743905