
Proof-of-Time Consensus Protocol

Instead of selecting validators based on computational effort (proof-of-work) or the number of coins staked in the network (proof-of-stake), the PoT consensus algorithm allows anyone to become a validator.
The PoT protocol is simply a chain of computations that allows the network to create immutable and verifiable time data. It uses a cryptographically secure function that ensures that the output cannot be predicted from the input, as shown in the figure below:

Figure 2: Overall architecture of the Analog network

When broadcasters submit time data to the platform, the selected time node (validator) sequences the time data and orders them such that consensus nodes can efficiently process them. It executes the time data transactions on the current status of the ledger, signs the time data transaction, and publishes it as the final state for further processing by the consensus nodes.
Consensus nodes execute the same time data transaction on their copies of the network’s state and publish their computed signatures as confirmations. The published confirmations serve as votes for the PoT consensus algorithm. If more than two-thirds of all the consensus nodes vote to accept the validated time data, the time data gets appended to the Timechain.
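The two-thirds acceptance rule described above can be sketched as a simple vote tally. This is a minimal illustration; the vote representation and helper function are assumptions, not Analog's actual implementation:

```python
# Minimal sketch of the "more than two-thirds" acceptance rule.
# The committee size and vote representation are illustrative assumptions.

def is_accepted(votes: list[bool], committee_size: int) -> bool:
    """Return True if strictly more than two-thirds of the committee accepted."""
    accepting = sum(votes)
    return accepting * 3 > committee_size * 2

# Example: 700 of 1,000 consensus nodes vote to accept the validated time data.
votes = [True] * 700 + [False] * 300
print(is_accepted(votes, 1000))  # 700*3 = 2100 > 2000 -> True
```

Using integer arithmetic (`accepting * 3 > committee_size * 2`) avoids floating-point edge cases at the threshold.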

Causality Principle

The Analog platform uses one of the key notions concerning time data to build a tamper-proof Timechain: causality. The expression X→Y means that “X happens before Y,” i.e., event X occurs first, then all the processes concur that event Y will take place. Consider these two scenarios:
  • If X and Y are some events within the same process and X happens before Y, then the assertion X→Y is valid.
  • If X represents an event for a message transmitted by one process and Y is the event of that message being received by another process, then the assertion X→Y is also correct. Essentially, this means that data cannot be received before it is sent; even if the two events appear simultaneous, transmission takes a finite, non-zero time.
The happens-before relation is also transitive: if X→Y and Y→Z, then X→Z can be proved. If events X and Y happen in different processes and neither X→Y nor Y→X is valid, then they are independent or concurrent. It is these causality principles that the Analog network uses to create an indisputable history of events.
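The happens-before relation can be made concrete with Lamport logical clocks, a standard construction that assigns counters so that if X→Y then clock(X) < clock(Y). This is a hedged sketch of that classic technique, not Analog's actual mechanism:

```python
# Sketch of Lamport logical clocks, which encode the happens-before
# relation described above: if X -> Y, then clock(X) < clock(Y).

class Process:
    def __init__(self):
        self.clock = 0

    def local_event(self) -> int:
        self.clock += 1
        return self.clock

    def send(self) -> int:
        # Sending is a local event; the timestamp travels with the message.
        self.clock += 1
        return self.clock

    def receive(self, msg_clock: int) -> int:
        # Receiving cannot precede sending: take the max, then tick.
        self.clock = max(self.clock, msg_clock) + 1
        return self.clock

p, q = Process(), Process()
x = p.local_event()      # event X in process p
t = p.send()             # p sends a message carrying timestamp t
y = q.receive(t)         # event Y: q receives it, so X -> Y
print(x < y)             # True: the clocks respect causality
```

Note that the converse does not hold: clock(X) < clock(Y) alone does not prove X→Y, which is why concurrent events can share no ordering.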


A node is one of the multiple devices that connect in a peer-to-peer (P2P) fashion in the Analog network. Nodes run the Analog protocol software and can store partial or complete copies of the Timechain. There are three categories of nodes:


Broadcasters

A broadcaster is any node that submits time data to Analog's marketplace. Once the time data is validated and appended to the Timechain, subscribers (consumers) can use it to power Analog's native DApps, DApps implemented on other chains, microservices, and other intelligent data pipelines.
The Analog network has an inbuilt tokenization model for time data monetization that incentivizes broadcasters' participation on the platform. Subscribers and other companies can buy valuable time data from broadcasters, creating a global time data marketplace pipeline with built-in provenance, access control, and privacy.

Time nodes

A time node is a light node in the Analog network that can submit time data and validate time data transactions (checking the validity of the submitted time data). Time nodes can also participate in the consensus process, i.e., agreeing on the present status of the Timechain. When it comes to validating time data, time nodes are selected in proportion to their ranking scores (RSs).
A time node collects submitted time data, authenticates it, packages the validated records into a new block, and then broadcasts the output to the network. The platform uses variables such as time relevance, reputability, and the average weighted RS of nodes in its vicinity to select time nodes.
Therefore, any node can become a time node, provided it has high time relevance and reputability, and other nodes within its vicinity can attest to its viability.

Consensus nodes

These are light nodes that participate in the consensus process. To achieve consensus, the consensus nodes must have downloaded the previous 500 blocks of the Timechain. Because consensus nodes are lightweight, less computationally capable devices such as IoT sensors, smartphones, and embedded devices can also reach agreement on the status of the Timechain.
The Analog network uses a random validator sub-sampling technique to select 1,000 nodes that verify submitted time data from time nodes. The network can only append the validated time data to the Timechain if more than two-thirds of the nodes in the consensus committee vote to accept it.

Archive nodes

These are full nodes that store the entire Timechain. The archive nodes can also participate in the consensus process. They can help users to examine past states of the Timechain and use the history to develop other applications. These nodes require a special server that meets storage requirements.

Trust Index

The Analog network uses the trust index algorithm as a natural defense against broadcasters that may attempt to submit fake data or act in a Byzantine manner. The trust index measures the consumers' trust in the time data that a given broadcaster submits to the network.
The entire Analog ecosystem is essentially a trust-based graph where nodes represent consumers and broadcasters while edges denote trust relations between them, as shown below:

Figure 4: Structure graph notation for the Analog network

A trust relation from node A to node B shows how much trust B places in A. Thus, an edge in the network has an associated trust value as its weight. When a broadcaster onboards onto the Analog network, the protocol assigns it a trust index of 0.
There are two primary ways the network can derive the trust value:

Direct interactions between broadcasters and consumers. For example, Marriott Hotels can use Uber’s submitted and validated time data to check its guests in. This creates a trust relation between Uber and Marriott. Any time Marriott Hotel successfully uses Uber’s submitted and validated time data, the network increments Uber’s trust index by 1.

Referrals (trust propagation). Intuitively, a recommendation carries weight for a consumer when it comes from many of that consumer's neighboring nodes. For example, in the figure above, E has B and D as its neighbors. As such, B and D can recommend that E use A's submitted and validated time data. When this happens, the network establishes a trust relation between A and E. As with direct interactions, the network increments A's trust index by 1 any time E successfully uses A's submitted and validated time data.

Trust Index Algorithm

The protocol relies on the notion of transitive trust. If consumer (a) trusts any broadcaster, say (b), then the consumer would also trust other broadcasters trusted by (b). Each consumer computes the local trust index (LTIab) for all the broadcasters that have provided it with accurate or fake time data based on satisfactory or unsatisfactory time data transactions that it has had.
Mathematically, we can represent this as follows:

LTIab = satisfied(a, b) − unsatisfied(a, b)

Where LTIab is the local trust index that consumer a attaches to broadcaster b;
satisfied(a, b) refers to the number of accurate time data records that consumer a has received from broadcaster b; and
unsatisfied(a, b) denotes the number of erroneous or fake time data records that consumer a has received from broadcaster b.
To prevent malicious consumers from assigning arbitrarily high trust indices to colluding broadcasters or low trust indices to honest broadcasters, the local trust index value gets normalized as follows:
NTIab = max(LTIab, 0) / Σb max(LTIab, 0)

Where NTIab is the normalized local trust index; and
Σb max(LTIab, 0) is the aggregate of the local trust indices over all broadcasters b.
The protocol aggregates the local trust indices in a decentralized manner to create a trust vector for the entire network. Using the notion of transitive trust, consumer a would ask other broadcasters or consumers it knows to provide the trust value of any broadcaster, say c, and weigh responses that these nodes offer.
TIac = Σb NTIab · NTIbc

Where TIac is the overall trust index that consumer a, based on the nodes it knows, places in broadcaster c.
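Assuming the normalization and aggregation follow EigenTrust-style formulas (local trust as satisfied minus unsatisfied interactions, clipped at zero and normalized, then weighted by the trust placed in each recommender), the computation can be sketched as follows. All broadcaster names and values are illustrative:

```python
# Sketch of local trust computation and aggregation (EigenTrust-style),
# following the formulas described above. All values are illustrative.

def local_trust(satisfied: int, unsatisfied: int) -> int:
    """LTI(a,b) = satisfied(a,b) - unsatisfied(a,b)."""
    return satisfied - unsatisfied

def normalize(lti: dict) -> dict:
    """NTI(a,b) = max(LTI(a,b), 0) / sum over b of max(LTI(a,b), 0)."""
    clipped = {b: max(v, 0) for b, v in lti.items()}
    total = sum(clipped.values())
    return {b: v / total for b, v in clipped.items()} if total else clipped

# Consumer a's local view of three broadcasters.
lti_a = {"b1": local_trust(8, 2), "b2": local_trust(1, 4), "b3": local_trust(4, 0)}
nti_a = normalize(lti_a)            # {'b1': 0.6, 'b2': 0.0, 'b3': 0.4}

# Aggregation: TI(a,c) = sum over b of NTI(a,b) * NTI(b,c).
nti = {"b1": {"c": 0.5}, "b2": {"c": 0.9}, "b3": {"c": 0.25}}
ti_ac = sum(w * nti[b]["c"] for b, w in nti_a.items())
print(round(ti_ac, 2))              # 0.6*0.5 + 0.0*0.9 + 0.4*0.25 = 0.4
```

Clipping negative local scores at zero before normalizing is what prevents a malicious consumer from pushing a broadcaster's aggregate trust below zero.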

Ranking Score

A ranking score is a numerical weighting measure that the algorithm assigns to each time node in the network. Unlike the trust index, which applies to broadcasters (showing the trust that consumers have in them), the RS determines a time node's relative importance within the validators' set.
The trust index allows the network to deter malicious broadcasters from submitting fake time data to the marketplace. On the other hand, RS is an essential feature of PoT that determines which time nodes get the chance to validate broadcasters’ submitted time data. PoT uses three metrics to assign an RS to each validator node:

1. Node’s tenure as a participant on the Timechain

This is an overall contribution of each time node on the network. It indicates how other nodes in the network perceive the time node in terms of positive feedback on the Timechain. The algorithm uses cryptography to ascertain a score for each time node based on its past events on the Timechain.
The network uses crypto-economic concepts, including the tokenization mechanism, rewards, and penalties, to assign a score to each time node. As in most blockchains, the first (genesis) transaction in a block is a unique time data record that starts the Timechain.
This creates economic incentives for other nodes to validate the network and distribute ANLOG tokens into circulation because there is no centralized entity to verify and issue time data.
This is analogous to gold miners that expend time and resources to add gold into circulation. In Analog’s case, it is verifiable time data that time nodes submit to the Timechain. The platform, in turn, incentivizes validators with transaction fees and rewards. For each validator, the network assigns a value based on the fees/rewards that the node has received over time.
In this case, the more fees the time node has received, the higher the value for the node’s tenure on the Timechain. Besides fees and rewards, the algorithm also uses the number of ANLOG tokens the node has staked in the network to determine each node’s perceived value. This means the more coins the validator has staked in the network, the higher the value for the node’s tenure.
The network can also penalize bad behavior. If a particular time node attempts to be dishonest and the Analog network ascertains this, the platform simply discards the block and reverts to the longest Timechain. This reduces the overall score for the node’s tenure on the Timechain, which means a lower RS.

2. Node’s historical time data validation accuracy

This metric uses the number of times the time node has accurately validated submitted time data. Like a node’s tenure, the historical time data validation accuracy will rely on cryptographic primitives inherent in the Timechain. The Timechain consists of a sequence of blocks linked together via a hash value.
The hashing algorithm runs sequentially, with each output becoming the next input, periodically producing the current output. The protocol then time-stamps each block in the sequence by appending time data (or the hash of the previous block) to the state of the function.
This generates verifiable time-stamps that guarantee that time data was created before the next hash in the sequence. Each time a node submits time data, a new output is computed based on the old- and current-time data to generate the next block.
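The sequential hashing described above can be sketched as a minimal hash chain. SHA-256 and the block layout here are illustrative choices, not the protocol's specified primitives:

```python
import hashlib

# Sketch of a sequential hash chain: each output becomes the next input,
# and submitted time data is folded into the state, time-stamping it
# relative to the surrounding hashes. SHA-256 is an illustrative choice.

def next_state(prev_hash: bytes, time_data: bytes) -> bytes:
    """Fold new time data into the running chain state."""
    return hashlib.sha256(prev_hash + time_data).digest()

state = hashlib.sha256(b"genesis time data").digest()  # genesis entry
for record in [b"record-1", b"record-2", b"record-3"]:
    state = next_state(state, record)

# Recomputing the chain from the same inputs yields the same final state,
# so any node can verify that each record preceded the hashes after it.
print(state.hex()[:16])
```

Because each state depends on every prior record, altering any record changes all subsequent hashes, which is what makes the time-stamps verifiable.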
The submission of time data triggers a validation smart contract in which the selected time nodes verify the time data. Suppose a time node verifies the time data and more than two-thirds of the consensus nodes vote to confirm it. In that case, the algorithm automatically increments the accuracy count for the time node that validated the given time data.
On the other hand, if a time node verifies the time data and consensus nodes vote to reject the validity of the submitted data, the protocol automatically decrements the accuracy count for the participating time node. The protocol uses the accuracy counts to determine the RS, i.e., the higher the accuracy count, the higher the relevancy score for the time node and vice versa.
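The accuracy-count update described above can be sketched as follows. The function shape and committee size are illustrative assumptions:

```python
# Sketch of the accuracy-count update described above: the count is
# incremented when consensus confirms a validation and decremented when
# it is rejected. The threshold check and structure are illustrative.

def update_accuracy(count: int, accept_votes: int, committee: int) -> int:
    confirmed = accept_votes * 3 > committee * 2   # more than two-thirds
    return count + 1 if confirmed else count - 1

count = 10
count = update_accuracy(count, 800, 1000)  # confirmed -> 11
count = update_accuracy(count, 500, 1000)  # rejected  -> 10
print(count)  # 10
```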

3. Average weighted value of neighboring nodes

Many distributed platforms such as the internet of things (IoT), supply chains, and communication systems can be best described as networks, where nodes represent individuals and links denote interactions.
Faithful representations of such systems require that the links not only denote the existence of interactions but also associated weights that express interaction strengths. For example, in IoT-based networks, the weight of the links in a supply chain may represent resources or activities. Similarly, link weights may denote the number of flights or available seats in an airline network.
Because of this challenge, the Analog network introduces a third factor into the computation of the RS: the average weighted RS values for the neighboring nodes. Leveraging the average weighted value of the adjacent nodes helps the protocol achieve better prediction performance than relying on only the node’s tenure and historical data validation accuracy counts.
Time nodes do not carry the same weights when it comes to the RS. For example, if time node A validates time data and other nodes, say B, C, and D, find this data useful, then a special relationship forms between A, B, C, and D. In other words, the RS associated with A increases on the network.
Suppose another time node, say X, validates time data and only one node, say Y, finds that time data valuable. The RS associated with X will also increase. However, A will carry a higher weight than X because it has more relationships. Therefore, when computing the final RS for each time node, the network must factor in the connections between the given node and its vicinity nodes.

Ranking Score Algorithm

The RS algorithm works as follows:
Suppose time node A has neighboring nodes T1, T2, T3, …, Tn (i.e., nodes that trust A as a time node), and C(Ti) denotes the tenure of node Ti as a participant on the Timechain. Then RS(A), i.e., the ranking score for time node A in the network, can be computed as follows:

RS(A) = (1 − d)/N + d × [RS(T1)/C(T1) + RS(T2)/C(T2) + … + RS(Tn)/C(Tn)]

RS(A) is the ranking score for time node A, and N is the total number of time nodes in the validators' set. This probability distribution represents the likelihood that node A will be selected as a time validator, so the sum of all the time validators' RS will be 1. The algorithm iterates repeatedly through the validators' set to arrive at an accurate value for RS(A).
RS(T1), RS(T2), …, RS(Tn) are the ranking scores of the neighboring nodes.
C(T1), C(T2), …, C(Tn) are the tenures of the vicinity nodes.
d is the damping factor. Because RS(A) is a probability distribution, a random time node selected to validate time data may be offline or not ready to validate transactions. The damping factor denotes the probability that node A does not undertake validation, even after being selected. The value of d is largely empirical; in our case, we set it at 0.85.
The protocol recomputes RS values each time new time data is submitted to the network. This means that as the amount of time data on the platform increases, the initial approximation of RS decreases for all the time nodes. The figure below illustrates a high-level overview of this process:

Figure 3: High-level overview of time node selection process

Submitting Time Data

Broadcasters are the nodes that handle the submission of time data. Any node, whether a time node, consensus node, or archive node, can become a broadcaster on the Analog network and submit time data to the marketplace. A broadcaster can submit its time data to the platform directly via a continuum smart contract-based DApp. In this case, the broadcaster hashes the time data and signs it with its private key before submitting it to the platform.
However, as a layer-0, Omnichain network, the Analog network can also allow DApps implemented in different Blockchains to seamlessly submit their time data, allowing such data to be used by other chains. For example, a Decentraland-based DApp can submit time data to the marketplace pipeline to be used by the Sandbox-powered DApp.
To achieve this, a DApp implemented in a different chain needs to leverage the Analog’s Timegraph API, which can privately read the block headers from one chain and transmit them to other chains. The steps below outline how a DApp on a different network, say Decentraland, can use the Analog marketplace’s pipeline to submit its time data to another chain such as the Sandbox:
  • The Decentraland-based DApp initiates the process of submitting time data to the marketplace’s pipeline by including the transaction’s unique ID, global ID, and payload (time data). It transmits this packet to the Analog Communicator.
  • The Analog Communicator transmits the packet to the Validator.
  • The Timegraph API reads the block header from the Decentraland-based DApp and the proof associated with time data on the Decentraland platform.
  • The Analog network uses the PoT consensus protocol to validate the submitted time data.
  • The Analog network sends the block hash specified by the global ID to the Validator on the Sandbox end.
  • The Validator then forwards the packet to the Communicator on the Sandbox end, which delivers the time data.

Validating Time Data

Time nodes are a category of nodes that validate time data transactions. In the Analog ecosystem, these transactions are cryptographically-signed instructions that change the status of the Timechain and trigger other events. Each transaction comprises the following fields:

  • Nonce, a pseudo-random number that the network uses as a counter during the consensus process.
  • To, a 160-bit address for the recipient or the continuum smart contract creation address.
  • Signature, which identifies the transaction sender (broadcaster) and validator (time node).
  • Time data, an unlimited-size input byte array containing the time data.
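The transaction fields listed above can be sketched as a data structure. The types and the address-length check are illustrative assumptions, not a specified wire format:

```python
from dataclasses import dataclass

# Sketch of the time data transaction fields described above.
# Types and sizes are illustrative assumptions, not a wire format.

@dataclass
class TimeDataTransaction:
    nonce: int          # pseudo-random counter used during consensus
    to: bytes           # 160-bit (20-byte) recipient or contract address
    signature: bytes    # identifies the broadcaster and the time node
    time_data: bytes    # unlimited-size byte array with the time data

    def __post_init__(self) -> None:
        if len(self.to) != 20:
            raise ValueError("'to' must be a 160-bit (20-byte) address")

tx = TimeDataTransaction(nonce=7, to=bytes(20), signature=b"\x00" * 64,
                         time_data=b"example time data")
print(tx.nonce)  # 7
```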

A broadcaster initiates the process of submitting time data to the platform by signing the data with a private key. This ensures accountability of the broadcaster. It also allows the network to improve or degrade the broadcaster’s trust index based on how consumers compute their local trust indices.
Before submitting the time data to the network, the platform generates a transaction ID that any node can use to track its status. The time data is then broadcast to all the nodes in the Analog network, which in turn selects the node with the highest RS to validate it.
The selected time node sequences time data transactions and orders them so that consensus nodes can efficiently process them. It then executes the time data transactions, publishes its signature, and generates a proposed block to be included on the Timechain. After validating the time data, the time node finally broadcasts the proposed block to the entire network as shown in figure 5.

PoT Consensus Workflow

Consensus nodes are a category of nodes that handle the Analog network's consensus process. These nodes pick up the validated blocks of time data and vote to either accept them as part of the Timechain or reject them. The network uses a random validator sub-sampling technique to select 1,000 consensus nodes that verify time data transactions.

Random Validator Sub-sampling

The network uses random sub-sampling as a repeated evaluation set or multiple holdouts to form a consensus committee. In the Analog network, a consensus committee is a group of 1,000 nodes that can either vote to accept or reject submitted validated time data.
To arrive at this figure, the network randomly divides the consensus set (the group of nodes that wish to participate in the consensus process) into fixed-size subsets or sub-samples. The protocol then selects one consensus node from each chosen subset to serve on the consensus committee.
This process continues until the network has generated 1,000 consensus nodes to verify time data transactions. With the random validator sub-sampling technique, any node in the network can be selected as a consensus node. This is because the consensus committee is never the same.
The random validator sub-sampling technique helps the network to prevent centralization problems such as those found in PoW or PoS protocols. This process takes approximately 1 microsecond (µs).
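The sub-sampling described above can be sketched as follows, with scaled-down sizes for illustration; the partitioning details and the `seed` parameter are assumptions for reproducibility, not part of the protocol:

```python
import random

# Sketch of random validator sub-sampling: partition the consensus set
# into fixed-size subsets and pick one node from each until the committee
# reaches the target size. Sizes here are scaled-down examples.

def select_committee(consensus_set: list, subset_size: int,
                     committee_size: int, seed=None) -> list:
    rng = random.Random(seed)
    pool = consensus_set[:]
    rng.shuffle(pool)                      # random partition of the set
    subsets = [pool[i:i + subset_size] for i in range(0, len(pool), subset_size)]
    committee = [rng.choice(s) for s in subsets]   # one node per subset
    return committee[:committee_size]

nodes = [f"node-{i}" for i in range(100)]
committee = select_committee(nodes, subset_size=10, committee_size=10, seed=42)
print(len(committee))  # 10
```

Because each committee member comes from a disjoint subset of a freshly shuffled pool, the committee is never the same across rounds, which is the property the text relies on.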

Consensus Process

To enhance speed, each consensus node has a Mempool that temporarily stores blocks of time data before they are picked for evaluation.
The following steps describe the consensus process when a time node submits a validated block of time data:
  • The network forms a consensus committee of 1,000 nodes.
  • Each consensus node in the consensus committee can either accept or reject the validated time data.
  • If at least two-thirds of the consensus nodes accept the time data as valid, the data is confirmed and appended to the Timechain. However, if more than a third of the consensus nodes reject it, the time data goes through consensus again for re-confirmation before it is removed.
  • The validated time data is passed through a hash function whose output cannot be predicted without executing the function. Next, a new salt is randomly generated and concatenated with the hash function's output as part of a one-way hashing scheme. In this case, the salt adds security to the validated time data.
  • The hashing-and-salting outcome is then fed into a zk-SNARK protocol to conceal any crucial data, such as the addresses or values involved with the time data. This is then stored in the Timechain for future interactions with smart contracts. Figure 6 illustrates these processes:
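The hashing-and-salting step can be sketched as follows. SHA-256 and the 16-byte salt length are illustrative choices, and the zk-SNARK stage that follows in the text is out of scope here:

```python
import hashlib
import secrets

# Sketch of the post-consensus hashing-and-salting step described above.
# SHA-256 and a 16-byte salt are illustrative assumptions; the zk-SNARK
# stage is omitted from this sketch.

def seal_time_data(validated_data: bytes):
    digest = hashlib.sha256(validated_data).digest()  # unpredictable output
    salt = secrets.token_bytes(16)                    # fresh random salt
    sealed = hashlib.sha256(digest + salt).digest()   # one-way, salted
    return sealed, salt

sealed, salt = seal_time_data(b"validated time data block")
print(len(sealed), len(salt))  # 32 16
```

A fresh salt per block means identical time data never produces the same sealed output twice, which limits precomputation attacks against the stored hashes.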

Malicious Broadcasters

A malicious broadcaster can act in a Byzantine manner and submit fake time data to the marketplace's pipeline. To guard against this, the Analog network subjects the submitted time data to a thorough validation mechanism, followed by a consensus process in which at least two-thirds of the consensus committee must vote to accept it.
Even if such time data were ultimately confirmed, consumers of the data would automatically devalue the broadcaster's trust index, negatively impacting its trustworthiness and overall earnings on the network.

Byzantine Time Nodes

If the time node validates fake time data and consensus nodes vote to reject it, the network automatically degrades the node’s RS. This minimizes the chances of such a time node being selected in the future to validate time data, which means losing out on the ANLOG token earnings.
There is the possibility that some nodes will act in a Byzantine manner, either due to downtime or by validating fake time data. Suppose the PoT protocol designates a particular node as a time node, and that node is offline or experiencing downtime issues. In that case, the network automatically selects the next node with the highest RS.

