
Whitepaper

Endless Whitepaper

  • Enhancement of the Account model with native support for multi-signature

  • Enhancement of the account address

  • Consensus Model Based on Traffic Resources

  • Consensus Model Based on Storage Resources

  • Sponsored Transaction

  • Safety Transaction

  • Unified Transformation of FT Asset Standard

  • Unified Transformation of DA Asset Standard

  • Endless Indexer

  • Token Lock & Release Standard

Enhancement of the Account model with native support for multi-signature

On "Aptos" or "Sui", any account contains a 32-bytes authentication key(abbreviated as "auth_key") during account creation. The auth_key is derived from public key and authentication scheme.

For example, a "single-signed" account with the Ed25519 scheme:

  • auth_key = sha3_256(pub_key | 0x00)

  • the last entry, 0x00, represents the Ed25519 authentication scheme

Aptos and Sui also support "multi-signed" accounts.

E.g., a "multi-signed" account with a 1-of-2 scheme on Aptos:

auth_key = sha3_256(0x2 | 0x01 | pub_key_0 | 0x01 | pub_key_1 | 0x00)

  • 0x2 represents the total number of keys

  • the first and second 0x01 represent the authentication scheme of each key, i.e. Ed25519

  • the last entry, 0x00, represents the multi-signed account scheme

The multi-signed account is specifically designed to work only with the On-Chain multisig Move module, i.e. "0x1::multisig_account" on Aptos.

One disadvantage of On-Chain multisig is its higher gas consumption and the need for more transaction rounds compared to Off-Chain solutions. Additionally, it is impossible to convert between a "single-signed" account and a "multi-signed" account.


On Endless, the authentication key (auth_key) of any account is a set of addresses containing one or more account addresses. This means an account is a "single-signed" account when its auth_key contains only one address, and a "multi-signed" account when its auth_key contains more than one.

An Endless Account also carries a K-of-N multisig configuration.

The account structure is illustrated below:

/// A simplified version of the Endless Account
pub struct AccountData {
    /// Transaction sequence number, incremented on every committed transaction
    pub sequence_number: u64,
    /// The auth_key: the set of N addresses allowed to sign for this account
    pub authentication_key: Vec<AccountAddress>,
    /// The threshold K in the K-of-N multisig configuration
    pub num_signatures_required: u64,
}

Endless natively supports the corresponding authentication implementation, which enables "Off-Chain" multisig.

Endless offers both CLI commands and a Dapp (the Endless Multisig Dapp) to manage multisig settings, such as adding one or more accounts to, or removing them from, the auth_key set.
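As a sketch of how the K-of-N rule in AccountData can be applied, the check below counts how many distinct transaction signers appear in the auth_key set and compares the count against num_signatures_required. The helper is illustrative only (`is_authorized` is hypothetical, and `AccountAddress` is simplified to a 32-byte array); it is not the actual node implementation.

```rust
// Hedged sketch of the K-of-N authorization rule implied by AccountData.
// `AccountAddress` is simplified to a 32-byte array and `is_authorized` is a
// hypothetical helper; the real node implementation is not shown in this paper.
type AccountAddress = [u8; 32];

pub struct AccountData {
    pub sequence_number: u64,
    pub authentication_key: Vec<AccountAddress>, // the N authorized addresses
    pub num_signatures_required: u64,            // the threshold K
}

impl AccountData {
    /// True when the distinct signers appearing in the auth_key set
    /// meet the K-of-N threshold.
    fn is_authorized(&self, signers: &[AccountAddress]) -> bool {
        let mut matched: Vec<&AccountAddress> = Vec::new();
        for s in signers {
            if self.authentication_key.contains(s) && !matched.contains(&s) {
                matched.push(s);
            }
        }
        matched.len() as u64 >= self.num_signatures_required
    }
}

fn main() {
    let (a, b, c) = ([1u8; 32], [2u8; 32], [3u8; 32]);
    let account = AccountData {
        sequence_number: 0,
        authentication_key: vec![a, b, c], // N = 3
        num_signatures_required: 2,        // K = 2
    };
    assert!(account.is_authorized(&[a, c]));       // 2 distinct signers pass
    assert!(!account.is_authorized(&[a, a]));      // duplicates count once
    assert!(!account.is_authorized(&[[9u8; 32]])); // unknown signer fails
    println!("K-of-N checks passed");
}
```

Because the threshold and address set live in account state rather than being hashed into the address, they can be changed without rotating the account address, which is what the CLI and Dapp tooling above manages.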

Enhancement of the account address

Major Move-based chains like "Aptos" and "Sui" use 32-byte account addresses. These are typically represented in hexadecimal format, such as 0x02a212de6a9dfa3a69e22387acfbafbb1a9e591bd9d636e7895dcfc8de05f331.

While this format is consistent, the content is not easily recognizable.

Endless uses Base58 encoding for its account addresses, resulting in shorter strings like 5SHvmLEaSr76dsKy4XLR5vMht14PRuLzJFx6svJzqorP.

This improves the readability of addresses and enables the use of "Vanity Addresses." The format is widely adopted across "Endless Explorer," "Endless CLI," and all Endless Dapps.
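As an illustration of the encoding step, the following minimal Base58 encoder (using the common 58-character alphabet that omits 0, O, I, and l) shows how raw address bytes map to short strings like the one above. It is a from-scratch sketch for clarity; production code should use a vetted Base58 library.

```rust
// Minimal Base58 encoder, sketching how 32 raw address bytes become a short
// human-readable string. For illustration only; use a vetted library in
// production.
const ALPHABET: &[u8; 58] = b"123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

fn base58_encode(input: &[u8]) -> String {
    // Each leading zero byte encodes as the character '1'.
    let zeros = input.iter().take_while(|&&b| b == 0).count();

    // Treat the rest as a big-endian number and repeatedly divide by 58.
    let mut digits: Vec<u8> = Vec::new(); // base-58 digits, least significant first
    for &byte in &input[zeros..] {
        let mut carry = byte as u32;
        for d in digits.iter_mut() {
            carry += (*d as u32) << 8; // multiply existing digits by 256
            *d = (carry % 58) as u8;
            carry /= 58;
        }
        while carry > 0 {
            digits.push((carry % 58) as u8);
            carry /= 58;
        }
    }

    let mut out = String::with_capacity(zeros + digits.len());
    out.extend(std::iter::repeat('1').take(zeros));
    out.extend(digits.iter().rev().map(|&d| ALPHABET[d as usize] as char));
    out
}

fn main() {
    println!("{}", base58_encode(&[0x00])); // a single zero byte -> "1"
    println!("{}", base58_encode(&[0x3A])); // the value 58 -> "21"
}
```

A 32-byte address encodes to roughly 43-44 such characters, which is what makes the shorter, case-sensitive strings shown above (and "Vanity Addresses") practical.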

Consensus Model Based on Traffic Resources

Frontier Insights

In the current era of the surging digital wave, cloud computing has become the backbone supporting the stable operation of applications and businesses of every kind. From daily mobile applications to the core business systems of large enterprises, cloud computing is ubiquitous, injecting continuous impetus into global innovation and development. Meanwhile, with the emergence of blockchain technology, Web3 applications have proliferated rapidly. With their characteristics of decentralization, transparency, and security, they have attracted the attention of global developers and users, and also posed new challenges and demands on cloud computing services.

However, when the traditional cloud service model opens its doors to Web3 developers, it reveals many shortcomings. The access process is complex, requiring developers to have profound knowledge of cloud computing and rich practical experience, which undoubtedly raises the entry threshold for Web3 developers. In addition, the processes of purchasing cloud services, configuring the server environment, and deploying applications are cumbersome and time-consuming, seriously hindering the pace of Web3 innovation. More critically, the traffic billing model of traditional cloud services is highly centralized, lacking transparency and credibility, and it is difficult to meet the requirements of Web3 applications for a fair and trustworthy environment.

The Endless Traffic Billing Project has emerged to build a bridge connecting traditional cloud computing and the emerging Web3 world. Through an innovative traffic statistics and billing scheme, Endless Traffic Billing is committed to breaking through the numerous barriers of traditional cloud services, enabling Web3 developers to easily access cloud services while building a fair, transparent, and efficient traffic billing system. This white paper will deeply analyze the design concept, technical implementation details, and diverse application scenarios of the Endless blockchain traffic statistics function, and comprehensively showcase its great potential in promoting the deep integration of blockchain and cloud computing and leading the innovative development of the industry.

Era Background and Project Origins

The Current Landscape of Cloud Computing Services

Currently, industry giants such as Amazon AWS, Microsoft Azure, and Google Cloud dominate the global cloud computing market, providing comprehensive cloud services covering computing, storage, networking, etc. for countless enterprises and developers. These services have greatly promoted the digitalization process, reduced the IT costs of enterprises, and accelerated the pace of innovation.

However, when it comes to Web3 developers, the limitations of traditional cloud services are exposed. Web3 developers often focus on the innovation and application of blockchain technology and feel overwhelmed by the complex cloud service configuration and deployment processes. For example, configuring a server environment suitable for blockchain application operation requires considering multiple aspects such as network security, node configuration, and data storage. A slight mistake may lead to unstable application operation or security vulnerabilities. In addition, the process of purchasing cloud services is cumbersome, requiring multiple rounds of communication and negotiation with cloud service providers. This is undoubtedly a huge consumption of time and energy for Web3 developers who pursue rapid iteration and innovation.

The Urgent Needs of Web3 Development

Web3 applications, with their decentralized architecture, highly transparent mechanism, and strong security performance, are gradually changing the pattern of the Internet. From decentralized finance (DeFi) to non-fungible tokens (NFTs), Web3 applications cover multiple fields such as finance, art, and games, attracting a large number of users and funds.

However, the development of these innovative applications cannot be separated from reliable cloud service support. The centralized characteristics of traditional cloud services run counter to the decentralized concept of Web3 and are difficult to meet the strict requirements of Web3 applications for data security, privacy protection, and fair billing. In addition, the rapid development of Web3 applications also requires a more flexible and efficient traffic billing method to adapt to their diverse business scenarios and rapidly changing user needs.

The Birth of the Endless Traffic Billing Project

The Endless blockchain project was born to solve the above pain points. Our goal is to break the shackles of traditional cloud services and build an open, efficient, and secure ecosystem, enabling Web3 developers to easily access cloud services and focus on the development of innovative applications.

By implementing accurate traffic statistics on the client side and combining blockchain smart contracts for automated settlement, Endless achieves the efficiency, security, and fairness of cloud service usage. This innovative model not only simplifies the workflow of Web3 developers but also creates a more fair and transparent cooperation environment for cloud service providers and users.

Core Technology Analysis

Exploring the BLS12-381 Algorithm

The BLS12-381 algorithm is one of the core technologies of the Endless blockchain traffic statistics function. It is based on a pairing-friendly curve from the Barreto-Lynn-Scott (BLS) family and has excellent signing and aggregation capabilities, occupying an important position in the field of cryptography.

In the mathematical system of BLS12-381, the definition of the elliptic curve is the cornerstone. Over the finite field $\mathbb{F}_p$, the equation of the BLS12-381 curve is concise and elegant: $y^2 = x^3 + b$.

Here $p$ is a carefully selected large prime number, which provides a solid security guarantee for the entire encryption system, and $b$ is the key parameter of the curve, determining its specific shape and properties.

On this elliptic curve, the addition and multiplication operations of points follow strict and sophisticated rules. To add two different points $P(x_1, y_1)$ and $Q(x_2, y_2)$ on the curve, we first compute the slope $\lambda = \frac{y_2 - y_1}{x_2 - x_1}$, and then obtain the coordinates of their sum $R = P + Q$, where $x_3 = \lambda^2 - x_1 - x_2$ and $y_3 = \lambda(x_1 - x_3) - y_1$.

When the two points are the same, the rules differ slightly: $\lambda = \frac{3x_1^2}{2y_1}$, $x_3 = \lambda^2 - 2x_1$, and $y_3 = \lambda(x_1 - x_3) - y_1$. Multiplication of a point by an integer is achieved through repeated addition: for a point $P$ and an integer $n$, the product is $nP = P + P + \cdots + P$ ($n$ additions).

This multiplication operation method based on addition not only ensures mathematical rigor but also lays the foundation for subsequent signing and aggregation operations.
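The formulas above can be exercised directly. The sketch below implements the addition and doubling rules over a deliberately tiny field ($p = 17$, $b = 7$ are toy parameters chosen for readability; BLS12-381 itself uses a 381-bit prime):

```rust
// Toy elliptic-curve arithmetic illustrating the addition/doubling formulas
// above. Parameters p = 17, b = 7 are toy values for readability only.
const P: i64 = 17; // field modulus (toy value)
const B: i64 = 7;  // curve parameter in y^2 = x^3 + b

fn modp(a: i64) -> i64 { a.rem_euclid(P) }

// Modular inverse via Fermat's little theorem: a^(p-2) mod p.
fn inv(a: i64) -> i64 {
    let mut result = 1;
    for _ in 0..(P - 2) { result = modp(result * a); }
    result
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct Point { x: i64, y: i64 }

// Adds two distinct (non-inverse) affine points: lambda = (y2 - y1)/(x2 - x1).
fn add(p1: Point, p2: Point) -> Point {
    let lambda = modp((p2.y - p1.y) * inv(modp(p2.x - p1.x)));
    let x3 = modp(lambda * lambda - p1.x - p2.x);
    Point { x: x3, y: modp(lambda * (p1.x - x3) - p1.y) }
}

// Doubles a point: lambda = 3*x1^2 / (2*y1).
fn double(p1: Point) -> Point {
    let lambda = modp(3 * p1.x * p1.x * inv(modp(2 * p1.y)));
    let x3 = modp(lambda * lambda - 2 * p1.x);
    Point { x: x3, y: modp(lambda * (p1.x - x3) - p1.y) }
}

fn on_curve(p1: Point) -> bool {
    modp(p1.y * p1.y) == modp(p1.x * p1.x * p1.x + B)
}

fn main() {
    let g = Point { x: 1, y: 5 }; // a point on y^2 = x^3 + 7 over F_17
    let g2 = double(g);           // 2G via the doubling rule
    let g3 = add(g, g2);          // 3G = G + 2G via the addition rule
    assert!(on_curve(g) && on_curve(g2) && on_curve(g3));
    println!("2G = {:?}, 3G = {:?}", g2, g3);
}
```

Running this gives 2G = (2, 10) and 3G = (5, 9), both of which satisfy the curve equation, confirming the formulas.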

Analysis of the BLS12-381 Signature Mechanism

The signing process of BLS12-381 is rigorous and secure, providing strong guarantees for the authenticity and integrity of traffic data.

First, the user randomly selects a private key $sk$ in the integer ring $\mathbb{Z}_p$ modulo $p$. This private key is the user's digital identity password and the core secret of the entire signing process. From the private key, the user generates the corresponding public key $pk = sk \cdot G$, where $G$ is a fixed base point on the elliptic curve. The public key is the user's digital business card and can be shared publicly to verify the authenticity of signatures.

To sign a message $m$, the user first computes its hash $h = H(m)$ using a secure hash function $H$ that maps the message onto the curve. No matter the size and content of the message, $H$ produces a unique fixed-length digest. The user then signs the hash with the private key, obtaining the signature $\sigma = sk \cdot h$.

To verify the signature, the verifier computes $h = H(m)$ and checks, using the bilinear pairing $e$, whether $e(\sigma, G) = e(h, pk)$ holds. If it does, like a key accurately opening the corresponding lock, the signature is valid and the message has not been tampered with during transmission; otherwise, the signature is invalid and the message may be at risk.

Unveiling the Principle of Efficient Aggregation in BLS12-381

The efficient aggregation capability of BLS12-381 is a significant highlight of the Endless traffic statistics function. It can greatly enhance the system's operational efficiency and data-processing capacity.

Suppose there are $n$ different messages $m_1, m_2, \cdots, m_n$. Each message has a corresponding signature $\sigma_1, \sigma_2, \cdots, \sigma_n$ and public key $pk_1, pk_2, \cdots, pk_n$.

In the aggregation stage, we simply sum all the signatures to obtain the aggregated signature

$\sigma_{agg} = \sum_{i=1}^{n} \sigma_i.$

In the verification stage, the verifier computes the hash of each message, $h_i = H(m_i)$ for $i = 1, 2, \cdots, n$, and then checks with the pairing $e$ whether $e(\sigma_{agg}, G) = \prod_{i=1}^{n} e(h_i, pk_i)$ holds.

If the equation holds, it proves that all the signatures are valid, and the messages have maintained integrity during transmission and aggregation.

Through this efficient aggregation method, BLS12-381 can aggregate multiple signatures into a single concise one, significantly reducing the computational effort of signature verification and the amount of data transmitted. This not only improves the system's operational efficiency but also reduces network bandwidth consumption, enabling the Endless traffic statistics function to operate stably and efficiently in large-scale application scenarios.

A Panoramic View of the System Architecture

Client-side SDK: The Vanguard of Data Collection

The client-side SDK is a crucial front-end component of the Endless traffic statistics function. Like a vigilant scout, it monitors and collects the client's network traffic data in real time.

One of its core functions is accurate traffic statistics. Whether it is the information flow during data upload or the data stream during resource download, the client-side SDK captures and accurately counts it in real time, providing a reliable data foundation for subsequent billing and analysis.

For private key management, the client-side SDK adopts a self-developed Rust library and advanced obfuscation compilation technology. The Rust programming language is renowned for its excellent memory safety and concurrency, providing a solid guarantee for the secure storage and use of private keys.

The obfuscation compilation technology acts as invisible armor for the private key, further increasing the difficulty of cracking it, effectively preventing private key leakage, and protecting users' data security.

After the traffic data is collected, the client-side SDK signs it with the BLS12-381 algorithm. This attaches an unforgeable digital label to the data, ensuring its integrity and authenticity during transmission.

Finally, the client-side SDK submits the signed traffic data to the "Signature Network" at predetermined time intervals, preparing it for subsequent processing and verification.
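To make the data flow concrete, here is a hypothetical shape for the record such an SDK could accumulate and sign. The struct name, fields, byte layout, and reporting window are assumptions for illustration, not the real SDK's format.

```rust
// Hypothetical traffic record a client-side SDK could accumulate and sign.
// All names and the byte layout are illustrative assumptions.
struct TrafficRecord {
    account: [u8; 32], // reporting account address
    bytes_up: u64,     // bytes uploaded in this window
    bytes_down: u64,   // bytes downloaded in this window
    period_end: u64,   // unix timestamp closing this reporting window
}

impl TrafficRecord {
    /// Canonical byte encoding that would be fed to the BLS12-381 signer.
    /// A fixed, unambiguous layout ensures signer and verifier hash the
    /// same bytes.
    fn signing_bytes(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(32 + 8 * 3);
        out.extend_from_slice(&self.account);
        out.extend_from_slice(&self.bytes_up.to_le_bytes());
        out.extend_from_slice(&self.bytes_down.to_le_bytes());
        out.extend_from_slice(&self.period_end.to_le_bytes());
        out
    }
}

fn main() {
    let rec = TrafficRecord {
        account: [7u8; 32],
        bytes_up: 1024,
        bytes_down: 4096,
        period_end: 1_700_000_000,
    };
    // 32 address bytes + three u64 counters = 56 bytes to sign.
    assert_eq!(rec.signing_bytes().len(), 56);
    println!("signing payload: {} bytes", rec.signing_bytes().len());
}
```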

Signature Network: The Hub for Data Verification and Aggregation

The Signature Network is a powerful network composed of numerous distributed nodes. It plays a central role in data verification, storage, and aggregation in the Endless traffic statistics function.

When the Signature Network receives traffic data submitted by the client-side SDK, it first conducts strict BLS12-381 signature verification. Only data that passes verification is regarded as trustworthy, stored in the network, and synchronized to other nodes.

This distributed storage and synchronization mechanism not only ensures the security and reliability of the data but also facilitates data traceback and auditing.

Another important function of the Signature Network is traffic aggregation. It performs BLS aggregation on the traffic data according to established rules. Through ingenious mathematical operations, it aggregates the signatures of multiple traffic data into a concise aggregated signature, thus significantly reducing the amount of data submitted to the Endless blockchain.

This not only reduces the load on the blockchain and improves its operational efficiency, but also saves valuable network bandwidth resources.

In addition, the Signature Network provides convenient API interfaces for cloud service providers. Cloud service providers can easily retrieve the traffic data of the previous day and earlier through these APIs, providing data support for subsequent settlement and business analysis.

Blockchain Smart Contract: The Impartial Arbiter for Automatic Settlement

The smart contract is a key component for achieving automatic settlement in the Endless blockchain traffic statistics function. It is like an impartial arbiter, ensuring the accurate, fair, and transparent settlement of traffic fees.

When the cloud service provider submits the aggregated traffic data obtained from the Signature Network to the blockchain smart contract, the smart contract first conducts strict BLS signature verification. Only the data that passes the verification is recognized as valid data and enters the subsequent settlement process.

Once signature verification passes, the smart contract automatically calculates and settles the traffic fees to the cloud service provider according to the pre-set rules. The entire settlement process requires no manual intervention and is executed automatically by the smart contract according to its code logic.

This not only improves the efficiency and accuracy of settlement but also avoids errors and fraud that may be caused by human factors, ensuring the fairness and transparency of settlement.
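As a toy illustration of such pre-set rules, the function below bills verified traffic at a per-gigabyte unit price, rounding partial gigabytes up. The pricing rule, units, and function name are assumptions; the actual contract logic is not published in this paper.

```rust
// Toy sketch of a pre-set settlement rule: fee = billed gigabytes x unit
// price. The rule and units are illustrative assumptions only.
fn settle_fee(aggregated_bytes: u128, price_per_gb: u128) -> u128 {
    const GB: u128 = 1 << 30;
    // Ceiling division so a partial gigabyte is still billed.
    (aggregated_bytes + GB - 1) / GB * price_per_gb
}

fn main() {
    assert_eq!(settle_fee(0, 5), 0);              // no traffic, no fee
    assert_eq!(settle_fee(1, 5), 5);              // under 1 GB bills one unit
    assert_eq!(settle_fee(3 * (1 << 30), 5), 15); // exactly 3 GB
    println!("settlement checks passed");
}
```

Because the rule is plain deterministic code, every party can recompute the fee from the on-chain aggregated data, which is what makes the settlement auditable.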

Remarkable Advantages and Features

Low-Threshold Access: A Convenient Path for Web3 Developers

For Web3 developers, the Endless traffic statistics function provides a convenient channel to access cloud services. Developers only need to simply integrate the corresponding SDK to easily access the Endless ecosystem and utilize a rich variety of cloud service resources. There is no need to spend a great deal of time and effort learning complex cloud service configuration and deployment knowledge, nor to engage in cumbersome communication and negotiation with cloud service providers. This enables Web3 developers to focus more on application innovation and business expansion, accelerating the development and launch process of Web3 applications.

Ultimate Security Assurance: A Solid Fortress for Private Key Security

Through the self-developed Rust library and advanced obfuscation compilation technology, Endless provides comprehensive protection for the private key security of Web3 developers. The memory safety and concurrency of the Rust library effectively prevent the risk of private key leakage caused by memory vulnerabilities. The obfuscation compilation technology encrypts and transforms the code, making it difficult for attackers to analyze and crack the logic of private key storage and usage. This dual-protection mechanism builds a solid fortress for the private key security of Web3 developers, allowing them to develop and deploy applications with peace of mind. Developers can use the obfuscated compilation of Endless' open-source Rust Lib to create a dedicated SDK, which both protects the developer's keys and prevents abuse of the SDK.

High-Efficiency Performance: A High-Speed Engine for Data Processing

The efficient aggregation and signature capabilities of the BLS12-381 algorithm enable the Endless blockchain traffic statistics function to perform excellently in data processing. By aggregating multiple signatures into one, it significantly reduces the computational load of signature verification and the amount of data transmitted. This not only improves the system's operational efficiency but also reduces network bandwidth consumption, allowing Endless to operate quickly and stably in large-scale application scenarios. Whether in high-concurrency traffic statistics scenarios or complex signature verification and aggregation processes, the Signature Network demonstrates outstanding performance, providing users with an efficient and smooth experience.

Automatic Contract Settlement: An Intelligent Guardian for Fair Billing

The automatic settlement function of blockchain smart contracts brings unprecedented fairness and transparency to cloud service traffic billing. Cloud service providers only need to submit the aggregated traffic data to the blockchain contract, and the smart contract automatically conducts signature verification and settlement. The entire process requires no manual intervention and is executed entirely according to pre-set rules and code logic. This not only avoids billing errors and fraudulent behavior that could be caused by human factors but also improves the efficiency and accuracy of settlement. Users can view settlement records in real time to ensure that every charge is clear and transparent, truly achieving fair billing and worry-free usage.

Data Traceability: A Reliable Basis for Auditing and Querying

The distributed storage and synchronization mechanism of the signature network makes the traffic data highly traceable. All historical data is securely stored in the network, allowing users and regulatory authorities to conduct historical verification and aggregation tracing at any time. This provides a reliable basis for auditing work. Whether it is an internal audit or an external regulatory audit, the required data can be easily obtained to ensure business compliance and data authenticity. Meanwhile, the data traceability also provides convenience for users, who can query historical traffic data at any time for business analysis and optimization.

Diverse Application Scenarios

Web3 Application Development: A Catalyst for Accelerated Innovation

In the field of Web3 application development, the Endless traffic statistics function offers developers a one-stop cloud service solution. Developers can quickly access the required cloud service resources through a simple SDK integration and focus on the core functions and innovation of their applications. Whether for decentralized finance applications, NFT trading platforms, or other innovative Web3 applications, Endless provides stable and efficient cloud service support, accelerating the development process of Web3 applications, reducing development costs, and promoting rapid iteration and innovative development.

Cloud Service Providers: New Opportunities for Business Expansion

For cloud service providers, joining the Endless ecosystem means new opportunities for business expansion. By collaborating with Endless, cloud service providers can offer customized cloud services to Web3 developers and achieve automatic settlement of traffic fees through blockchain smart contracts. This not only improves business efficiency but also increases revenue sources. Meanwhile, Endless's innovative model brings new technologies and concepts to cloud service providers, helping them enhance their market competitiveness and expand their business.

Consensus Model Based on Storage Resources

Technical Solution

Proof of Storage: the storage provider $P$ provides a proof $Proof$ that it has reliably stored data for the user $U$ over a specific period of time $T$.

The entire process is illustrated below:

The detailed procedure consists of the following steps:

0.0 Endless KZG Polynomial Proof

Let $g$ be an element of a group $\mathbb{G}$, and for an integer $a$ denote by $[a] = a \cdot g$ the corresponding group element.

Let $s$ be a secret; then a universal setup of degree $m$ consists of $m$ elements of $\mathbb{G}$:

$[s], [s^2], \ldots, [s^m].$

Let $f(X) = \sum_{0 \leq i \leq m} f_i X^i$ be a polynomial of degree $m$. Then a commitment $C_f \in \mathbb{G}$ is defined as

$C_f = \sum_{0 \leq i \leq m} f_i [s^i],$

being effectively an evaluation of $f$ at the point $s$.

Note that for any $y$, $(X - y)$ divides $f(X) - f(y)$. Then the proof that $f(y) = z$ is defined as

$\pi[f(y) = z] = C_{T_y},$

where $T_y(X) = \frac{f(X) - z}{X - y}$ is a polynomial of degree $m - 1$.

Note that a proof can be constructed using mmm scalar multiplications in the group. The coefficients of TTT are computed with one multiplication each:

Ty(X)=∑0≤i≤m−1tiXi,tm−1=fm,tj=fj+1+y⋅tj+1.\begin{aligned} T_y(X) &= \sum_{0\leq i \leq m-1}t_i X^i ,\\ t_{m-1} &= f_m ,\\ t_j &= f_{j+1}+y\cdot t_{j+1} . \end{aligned}Ty​(X)tm−1​tj​​=0≤i≤m−1∑​ti​Xi,=fm​,=fj+1​+y⋅tj+1​.​

Expanding on the last equation, we get

$$\begin{aligned} T_y(X) ={} & f_m X^{m-1} + (f_{m-1} + y f_m) X^{m-2} + (f_{m-2} + y f_{m-1} + y^2 f_m) X^{m-3} + {}\\ & (f_{m-3} + y f_{m-2} + y^2 f_{m-1} + y^3 f_m) X^{m-4} + \cdots + (f_1 + y f_2 + y^2 f_3 + \cdots + y^{m-1} f_m). \end{aligned}$$
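The recurrence above is easy to check numerically. The following sketch uses plain integer arithmetic instead of field elements (an illustrative simplification) to compute the coefficients $t_j$ and confirm that $(X - y) \cdot T_y(X) = f(X) - f(y)$:

```python
# Sketch: compute the quotient coefficients t_j = f_{j+1} + y * t_{j+1}
# for T_y(X) = (f(X) - f(y)) / (X - y), using ordinary integers rather
# than field elements (an illustrative simplification).

def quotient_coeffs(f, y):
    """f = [f_0, ..., f_m]; returns [t_0, ..., t_{m-1}]."""
    m = len(f) - 1
    t = [0] * m
    t[m - 1] = f[m]
    for j in range(m - 2, -1, -1):
        t[j] = f[j + 1] + y * t[j + 1]
    return t

def eval_poly(coeffs, x):
    """Horner evaluation of a coefficient list at x."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

f = [5, 3, 0, 2, 7]          # f(X) = 5 + 3X + 2X^3 + 7X^4
y = 3
t = quotient_coeffs(f, y)

# Check the defining identity f(x) - f(y) == (x - y) * T_y(x) at a sample point.
x = 11
assert eval_poly(f, x) - eval_poly(f, y) == (x - y) * eval_poly(t, x)
```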

Let $\psi \in \mathbb{F}_p$ be an $\ell$-th root of unity ($\psi^\ell = 1$). Say we want to reveal the polynomial evaluations $f(y) = z_0$, $f(\psi y) = z_1$, $\ldots$, $f(\psi^{\ell-1} y) = z_{\ell-1}$.

Noting that $(x - y)(x - \psi y) \cdots (x - \psi^{\ell-1} y) = x^\ell - y^\ell$, the proof for this can be given by computing the polynomial

$$g(x) = f(x) \,//\, (x^\ell - y^\ell),$$

where $//$ stands for truncated long division, and then computing the proof

$$\pi[f(y) = z_0, \ldots, f(\psi^{\ell-1} y) = z_{\ell-1}] = [g(s)].$$

This proof can be verified by computing the checking polynomial $h(x) = f(x) \bmod (x^\ell - y^\ell)$ (which can be interpolated from the given values), and checking that

$$e(C_f, \cdot) = e\big(\pi[f(y) = z_0, \ldots, f(\psi^{\ell-1} y) = z_{\ell-1}],\ [s^\ell - y^\ell]\big) \cdot e([h(s)], \cdot).$$
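The roles of $g$ and $h$ can be illustrated with toy parameters. The sketch below works over $\mathbb{F}_{17}$ with $\ell = 4$ and $\psi = 4$ (a primitive 4th root of unity mod 17); these values, and doing the arithmetic on raw field elements instead of committed group elements, are illustrative assumptions:

```python
# Toy illustration of g(x) = f(x) // (x^l - y^l) and h(x) = f(x) mod (x^l - y^l).
# p = 17, l = 4, psi = 4 are illustrative; the real scheme commits to these
# polynomials in a pairing group rather than computing in the clear.

p, l, y, psi = 17, 4, 2, 4
c = pow(y, l, p)  # y^l mod p

def divmod_sparse(f, l, c, p):
    """Divide f(x) by the sparse monic divisor x^l - c; return (g, h)."""
    r = [coef % p for coef in f]
    m = len(r) - 1
    g = [0] * (m - l + 1)
    for i in range(m, l - 1, -1):           # reduce from the top coefficient down
        g[i - l] = r[i]
        r[i - l] = (r[i - l] + g[i - l] * c) % p
        r[i] = 0
    return g, r[:l]                          # deg h < l

def eval_poly(coeffs, x, p):
    acc = 0
    for coef in reversed(coeffs):
        acc = (acc * x + coef) % p
    return acc

f = [3, 1, 4, 1, 5, 9, 2]  # f(x) = 3 + x + 4x^2 + x^3 + 5x^4 + 9x^5 + 2x^6
g, h = divmod_sparse(f, l, c, p)

# h reproduces f's values on every challenge point y * psi^i,
# because each such point is a root of x^l - y^l.
for i in range(l):
    pt = (y * pow(psi, i, p)) % p
    assert eval_poly(h, pt, p) == eval_poly(f, pt, p)
```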

0.1 File Storage Commitment

User $U$ uploads files to the storage provider $P$, who generates a daily $Commitment$ for the user's newly uploaded files and submits it on-chain.

Let the number of files uploaded by the user on the given day be $K$. $P$ splits each file $f_i\ (i \in [0, K))$ into $n_i$ segments: $$Segments_i = [seg_0, seg_1, \cdots, seg_{n_i - 1}]$$

For $Segments_i$, calculate the Merkle tree root: $$r_i = MerkleRoot([seg_0, seg_1, \cdots, seg_{n_i - 1}])$$

The metadata $[Metadata]_i$ of file $f_i$ includes:

  • The storage provider's account address: $addr_{server}$

  • The Merkle tree root of the file: $r_i$

  • The file index when computing the $Commitment$: $i$

  • The user's cumulative uploaded byte size: $AccumulationByteSize$

  • The number of Merkle tree leaf nodes (the segment count): $n_i$

$$[Metadata]_i = [addr_{server}, r_i, i, AccumulationByteSize, n_i]$$

The storage provider signs the metadata $[Metadata]_i$ for file $f_i$, generating the signature $sig_i$: $$sig_i = Signature_{ed25519}([Metadata]_i)$$

$P$ creates signatures for $U$'s $K$ files for the day, represented as $[sig_0, sig_1, \ldots, sig_{K-1}]$, and computes the $Commitment$:

$$Commitment = Fk20([sig_0, sig_1, \cdots, sig_{K-1}])$$

The $Commitment$ is the daily file upload commitment for $U$. Therefore, $P$ must submit a $Commitment$ for newly uploaded files daily.
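The commitment pipeline above can be sketched end to end. SHA-256 stands in here for both the ed25519 signature and the FK20 commitment (an illustrative simplification, not the production primitives):

```python
# Sketch of the daily commitment pipeline: split each file into segments,
# compute a per-file Merkle root, sign the metadata, and commit to the
# signature list. SHA-256 replaces ed25519 and FK20 for illustration only.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(segments):
    level = [H(s) for s in segments]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

SEG = 4  # segment size in bytes (tiny, for illustration)

def daily_commitment(files, addr_server: bytes) -> bytes:
    sigs = []
    acc_bytes = 0
    for i, blob in enumerate(files):
        segments = [blob[o:o + SEG] for o in range(0, len(blob), SEG)]
        acc_bytes += len(blob)
        metadata = b"|".join([addr_server, merkle_root(segments),
                              str(i).encode(), str(acc_bytes).encode(),
                              str(len(segments)).encode()])
        sigs.append(H(metadata))            # stand-in for Signature_ed25519(metadata)
    return H(b"".join(sigs))                # stand-in for Fk20([sig_0, ..., sig_{K-1}])

commitment = daily_commitment([b"hello world!", b"endless chain"], b"0xserver")
```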

0.2 On-Chain Random Challenge Generation

A $Challenge$ is generated via an on-chain interface and serves as the challenge for a specific $Commitment$. The input is the $Commitment$, and the output is the pair of random numbers $[r_f, r_s]$:

Let this $Commitment$ represent $P$'s storage commitment for $U$'s $K$ files on a given day. Then $r_f \in [0, K)$ specifies the position of the file metadata signature in the $Commitment$. Let the randomly selected file be divided into $N$ segments. Then $pos = r_s \bmod N$ indicates the position of the challenged data segment.
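The mapping from a commitment to $[r_f, r_s]$ can be sketched as follows. Deriving the pair from a hash of the commitment is an assumption for illustration; on Endless the values come from the on-chain randomness interface:

```python
# Sketch: derive the challenge pair [r_f, r_s] for a commitment.
# Hash-based derivation is an illustrative assumption; the real values
# come from the chain's randomness interface.
import hashlib

def derive_challenge(commitment: bytes, K: int):
    digest = hashlib.sha256(b"challenge|" + commitment).digest()
    r_f = int.from_bytes(digest[:16], "big") % K   # which file's signature
    r_s = int.from_bytes(digest[16:], "big")       # raw segment randomness
    return r_f, r_s

r_f, r_s = derive_challenge(b"\x01" * 32, K=5)
N = 8                      # the selected file has N segments
pos = r_s % N              # challenged segment position
assert 0 <= r_f < 5 and 0 <= pos < N
```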

0.3 Storage Provider Generates KZG Proof

$P$ computes the $Proof_{Merkle}$ and $Proof_{Fk20}$ based on the challenge random numbers $[r_f, r_s]$:

  1. Generate the Merkle proof: $$Proof_{Merkle} = [RawData_{pos}, MerklePath]$$

  2. Generate the $Proof_{Fk20}$ for the $r_f$-th signature $sig_{r_f}$:

$$Proof_{Fk20} = Prove(Commitment, r_f)$$

  3. Upload $Proof_{Merkle}$, $Proof_{Fk20}$, and $[Metadata]_i$ on-chain for verification.

0.4 On-Chain Verification

The on-chain contract executes the verification of $Proof_{Merkle}$ and $Proof_{Fk20}$:

$$isPass = Verify_{Merkle}(Proof_{Merkle}) \land Verify_{Fk20}(Proof_{Fk20})$$

Based on the verification result $isPass$, it is determined whether the file has been stored effectively. If $false$, it is marked as invalid storage. The result is recorded on-chain, and penalties are imposed on the storage provider (see 0.5 Fee and Penalty).
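The Merkle half of steps 0.3 and 0.4 can be sketched concretely. SHA-256 and duplicate-last-node padding for odd levels are illustrative assumptions:

```python
# Sketch of Proof_Merkle = [RawData_pos, MerklePath] generation and its
# verification. SHA-256 with duplicate-last-node padding is an illustrative
# assumption, not necessarily the tree construction Endless uses.
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_path(segments, pos):
    """Return (path, root): path is a list of (sibling_hash, is_right_child)."""
    level = [H(s) for s in segments]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # pad odd levels
        path.append((level[pos ^ 1], pos % 2))
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        pos //= 2
    return path, level[0]

def verify_merkle(raw_data, path, root):
    node = H(raw_data)
    for sibling, is_right in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

segments = [b"seg0", b"seg1", b"seg2", b"seg3", b"seg4"]
pos = 2
path, root = merkle_path(segments, pos)
assert verify_merkle(segments[pos], path, root)      # honest proof passes
assert not verify_merkle(b"tampered", path, root)    # wrong data fails
```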

0.5 Fee and Penalty

  • Treasury: Allocates on-chain subsidies to cover the storage provider's GasFee.

  • Configurable Pricing: The price can be configured per unit of storage (Byte/KB/MB/GB) per day, with updates allowed at a minimum interval of one month.

  • Storage Provider Fees: Charged daily. The fee is deposited into the provider's account and frozen, with withdrawal allowed only after the 7-day challenge period ends.

  • Minimum Balance: The storage provider's account balance must be at least MIN_BALANCE (10,000 EDS) to withdraw funds.

  • Penalties:

    • Package Challenge Failure: For a failed package challenge (files within the same commitment), fees for the previous 7 days are refunded to the user, and a 3x penalty is imposed (1x + 1 EDS to the user, 2x + 1 EDS to the challenger).

    • Failure to Upload Deletion Records: If the storage provider fails to upload deletion records for files in the package, the fees for the previous 7 days are refunded to the user, and a 10x penalty is imposed (5x + 2 EDS to the user, 5x + 2 EDS to the challenger).

  • GasFee Subsidy: If no challenges fail within 14 days, the GasFee for challenges completed 7 days prior is subsidized.

  • Challenge Limits:

    • Any party: at most $10$ challenges per 1 GB per day, plus $2\log_2(\text{file count})$ challenges per day

    • User: at most 1 challenge per file per day

When Fees Cannot Be Collected from Users: If user fees cannot be collected, the storage provider may delete all of the user's files, and the corresponding storage records will also be removed from the contract.

Sponsored Transaction

In blockchain systems, users are typically required to pay transaction fees (gas fees) when making transactions.

For new users, developers, or specific decentralized applications (dApps), these transaction fees can act as a barrier to adoption and engagement.

Sponsored Transactions, where third parties cover the transaction fees on behalf of users, simplify the onboarding process and enhance the overall user experience.

Mainstream chains like Aptos allow third parties to set up payment services, such as "gas-station services," to handle transaction fees for users.

However, one disadvantage of such gas-station services is their centralized nature: if the gas-station server goes down, the entire service becomes unavailable.

Endless offers On-Chain Sponsored Transactions, which operate in a decentralized manner. When a Move module implements a sponsored function, any transaction invoking that function will have its gas fees deducted directly from the Move module's account.

Developers implementing the sponsored function should carefully consider:

  • Setting up a whitelist or blacklist of users who are allowed or disallowed to invoke the function.

  • Properly managing the funding of the Move module to ensure sufficient gas fees are available.
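The two considerations above can be sketched as module-side checks. This is Python pseudocode of the logic only (names such as `can_sponsor` and the reserve constant are hypothetical); a real implementation would live in the Move module itself:

```python
# Sketch of the two developer concerns: an allowlist/blocklist gate and a
# funding check before the module account sponsors a caller's gas.
# All names and the in-memory state are illustrative assumptions.

MIN_MODULE_BALANCE = 1_000           # refuse to sponsor below this reserve

class SponsoredModule:
    def __init__(self, balance, whitelist=None, blacklist=None):
        self.balance = balance
        self.whitelist = set(whitelist or [])
        self.blacklist = set(blacklist or [])

    def can_sponsor(self, caller, est_gas):
        if caller in self.blacklist:
            return False
        if self.whitelist and caller not in self.whitelist:
            return False
        return self.balance - est_gas >= MIN_MODULE_BALANCE

    def sponsor(self, caller, gas_used):
        assert self.can_sponsor(caller, gas_used)
        self.balance -= gas_used     # gas deducted from the module account

m = SponsoredModule(balance=5_000, whitelist=["0xalice"])
assert m.can_sponsor("0xalice", est_gas=100)
assert not m.can_sponsor("0xbob", est_gas=100)       # not whitelisted
assert not m.can_sponsor("0xalice", est_gas=4_500)   # would drain the reserve
```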

The entire sponsored transaction process is fully On-Chain and decentralized, providing a more robust and fault-tolerant solution compared to centralized alternatives.

Safety Transaction

Blockchain technology has garnered significant attention due to its decentralized, transparent, and secure nature. However, smart contracts built on blockchains are not without flaws. Vulnerabilities in smart contracts, especially those exploited maliciously, pose a major threat to the blockchain ecosystem and have given rise to various scams.

Smart contracts are essentially programs running on the blockchain, and their code becomes difficult to modify once deployed. This means that any vulnerabilities in the code can be exploited by malicious attackers, leading to substantial financial losses or even systemic risks. Common contract vulnerabilities include arithmetic overflow/underflow, reentrancy attacks, access control issues, oracle manipulation, and logical errors. The commonality among these vulnerabilities is their potential for exploitation, enabling attackers to take control of the contract, resulting in financial loss or other adverse outcomes.

Malicious contracts leverage these vulnerabilities to execute scams. Attackers craft contract code with deliberate vulnerabilities and deploy these contracts on the blockchain. These contracts may masquerade as legitimate investment projects or games, luring users to participate, only to exploit the vulnerabilities to steal users' funds or assets. For instance, an investment contract that appears secure might contain a reentrancy vulnerability, allowing attackers to drain the contract's funds through repeated withdrawals.

Most mainstream wallet software provides a "transaction preview" feature. When a user submits a transaction (whether transferring funds to another user or interacting with a DApp by invoking external contract functions), the wallet calls the blockchain node's RPC method to simulate the transaction execution and retrieve the simulation results. These results include a gas estimate and a preview of transaction outcomes.

The transaction preview displays potential changes in funds for the user account and other accounts involved in the transaction. However, it is important to note that the transaction preview results are not identical to the actual transaction execution results.

Attackers can exploit the discrepancy between transaction previews and actual transaction execution to deceive and attack users. In simple terms, they design malicious contracts that follow Path A during the preview phase, which aligns with the user's expectations, but switch to Path B during actual execution, causing users to incur losses.

To address such attacks, Endless has introduced a "Strong Security Mode Validation" for transactions. This feature ensures that the estimated asset changes displayed in the transaction preview align with the actual transaction execution results.

For example, when interacting with a malicious contract, the transaction preview might follow Path A. With the transaction's "Safety Switch" enabled, if the actual execution takes Path B and the resulting asset changes deviate from the previewed results, the transaction will fail. This mechanism prevents users from incurring financial losses.
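The validation rule can be sketched as a comparison of per-account asset deltas; the data shapes and the strict-equality rule below are illustrative assumptions:

```python
# Sketch of the "Strong Security Mode" idea: compare the asset deltas shown
# in the preview with those produced by actual execution, and abort on any
# deviation. Account-to-delta dicts are an illustrative data shape.

def safety_check(previewed: dict, actual: dict) -> bool:
    """previewed/actual map account -> net asset change; any mismatch fails."""
    accounts = set(previewed) | set(actual)
    return all(previewed.get(a, 0) == actual.get(a, 0) for a in accounts)

preview = {"user": -10, "dapp": +10}                 # Path A, shown to the user
honest = {"user": -10, "dapp": +10}                  # execution matches preview
malicious = {"user": -10_000, "attacker": +10_000}   # Path B at execution time

assert safety_check(preview, honest)
assert not safety_check(preview, malicious)          # transaction is aborted
```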

Unified Transformation of FT Asset Standard

On the Endless chain, all fungible tokens (FTs) adopt the unified FungibleAsset Standard, providing a consistent interface for all fungible tokens and significantly reducing the development burden for developers.

The FungibleAsset Standard defines the attributes and API interfaces for all fungible tokens. Using development tools such as the Endless CLI or the TypeScript SDK, it is easy to create custom fungible tokens on the Endless chain.

Unified Transformation of DA Asset Standard

Similar to fungible tokens, the Endless chain also adopts the DigitalAsset Standard for NFT asset types, unifying their management and usage.

In addition to minting NFTs through the TypeScript SDK and other tools, the Endless CLI provides an additional nft subcommand to simplify the NFT minting process. The nft subcommand also introduces the Soul Bound feature, enabling users to mint non-transferable NFTs.

Endless Indexer

Endless Indexer, as a critical auxiliary service for the Endless blockchain, provides various APIs for querying address transaction history, Coin details, and NFT details.

As a high-performance blockchain, Endless generates substantial data volume with rapid growth rates, posing significant implementation challenges for the Indexer.

Endless Indexer employs RocksDB for data storage, utilizing metadata processing and chain-based hooks to index transaction data from the Endless blockchain. For transaction data acquisition, the Indexer adopts two modes:

  1. Local Environment: When the Indexer and Endless Full Node reside in the same local environment, transactions are synchronized via Unix Domain Socket.

  2. Remote Environment: gRPC serves as the transaction data transmission protocol.

Performance Comparison

Compared with Aptos Indexer, Endless Indexer demonstrates notable advantages in synchronization speed, query performance, and database size efficiency.

1. Synchronization Speed

Synchronization speed depends on three factors: transaction fetching speed, processing efficiency, and database write throughput.

In scenarios with rapidly increasing data volumes, database write performance becomes the critical determinant of overall system efficiency. The fundamental differences in write mechanisms between Postgres (used by Aptos Indexer) and RocksDB (adopted by Endless Indexer) lead to significant performance divergences in large-scale data environments.

Postgres Write Performance

As a relational database, Postgres exhibits progressively degraded write performance with increasing data volume due to:

  1. Transaction log (WAL) and index updates: Each write operation triggers WAL writes and index updates, with escalating costs as indices grow.

  2. Table bloat: Frequent writes/updates cause table bloat, necessitating periodic VACUUM and ANALYZE operations.

  3. Lock contention: High-concurrency write scenarios suffer from table/row-level lock contention.

RocksDB Write Performance

RocksDB, an LSM (Log-Structured Merge Tree)-based key-value store, optimizes for large-scale data and high-throughput writes through:

  1. Sequential write optimization: Batches writes in MemTable before flushing to disk.

  2. Tiered storage: Manages data through multi-level compaction, minimizing random I/O.

  3. Write amplification control: Implements efficient compaction strategies to maintain stable performance.

Write Speed Comparison

The following table illustrates write performance trends under different data scales:

| Data Scale | Postgres Write Speed | RocksDB Write Speed |
| --- | --- | --- |
| Small-scale | Fast | Fast |
| Medium-scale | Significant decline | Mild decline |
| Large-scale | Substantial decline | Stable |
| Extreme-scale | Severe decline (requires optimization/sharding) | High efficiency maintained |

Mathematical representations of write speed $S$ versus data volume $D$:

  • Postgres: The write speed $S_P$ decreases as the data volume $D$ increases and can be approximated as:

$$S_P(D) = \frac{C}{D^\alpha}, \quad \alpha > 1$$

where $C$ is a constant and $\alpha$ represents the rate at which write performance degrades.

  • RocksDB: The write speed $S_R$ declines only logarithmically as the data volume $D$ increases:

$$S_R(D) = \frac{C}{\log(D + 1)}$$
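A quick numeric comparison of the two models (with illustrative constants $C = 1$ and $\alpha = 1.2$, not measured values) shows how quickly the power-law model falls behind:

```python
# Numeric comparison of the two degradation models:
#   S_P(D) = C / D^alpha    (power-law decline, Postgres)
#   S_R(D) = C / log(D + 1) (logarithmic decline, RocksDB)
# C = 1.0 and alpha = 1.2 are illustrative constants, not measured values.
import math

C, ALPHA = 1.0, 1.2

def s_postgres(D: float) -> float:
    return C / (D ** ALPHA)

def s_rocksdb(D: float) -> float:
    return C / math.log(D + 1)

# The advantage ratio S_R / S_P widens monotonically with data volume.
ratios = [s_rocksdb(D) / s_postgres(D) for D in (10, 1_000, 1_000_000)]
assert ratios == sorted(ratios)
```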

During high TPS periods, Endless Indexer maintains real-time synchronization, while Aptos Indexer may lag behind full nodes by hours.

2. Query Performance

The following table shows the time comparison for querying the total transaction count of address 0x1 in a local environment (to exclude factors like network latency):

| Query Target | Aptos Indexer (seconds) | Endless Indexer (seconds) |
| --- | --- | --- |
| Total TXs of 0x1 | 60.76759320 | 0.00051460 |

Since Aptos Indexer uses PostgreSQL as its database, query latency increases linearly with transaction volume. In contrast, Endless Indexer leverages RocksDB, whose LSM (Log-Structured Merge Tree) architecture ensures only a marginal degradation in query speed.

3. Storage Efficiency

Disk usage comparison for indexing identical blockchain height:

| Metric | Aptos Indexer (MB) | Endless Indexer (MB) |
| --- | --- | --- |
| Storage Consumption | 61400 | 722 |

Endless Indexer reduces storage footprint by 99% compared to Aptos Indexer.

Limitations

Despite its performance advantages, Endless Indexer presents certain limitations:

  1. Limited Flexibility of RESTful APIs: Compared to Aptos Indexer's GraphQL API, RESTful APIs offer reduced query flexibility.

  2. Constrained Query Capabilities: The KV-store architecture provides less flexible query conditions than relational databases, making it suitable only for scenarios with fixed query patterns.


Introduction of Token Locking Standard

The Endless system contract introduces a smart contract, locking_coin_ex.move, to manage token locking and distribution. This contract operates through locking and unlocking mechanisms, ensuring that tokens are gradually unlocked over a specified period, thereby controlling token circulation. Additionally, it provides view APIs to allow users to query locking statuses at any time.

The token locking and release standard established by this contract makes it easier, more transparent, and fairer for all Dapp projects using this contract to manage token assets.

Features Offered by the Contract:

  • Token Locking: The contract enables administrators to lock tokens to a specific address and set an unlocking schedule. Locked tokens are gradually unlocked over the designated period.

  • Token Unlocking: According to the preset unlocking schedule, the contract automatically unlocks a certain amount of tokens at the end of each unlocking cycle.

  • Query Functionality: The contract provides various query interfaces, allowing users to check total locked amounts, details of all stakers, specific staker amounts, and unlocking information.

  • Event Recording: The contract records relevant events during token unlocking and claiming, ensuring traceability and auditability.
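A linear per-cycle unlocking schedule of the kind the contract implements can be sketched as follows; the cycle length, amounts, and equal-tranche rule are illustrative assumptions, not the contract's exact terms:

```python
# Sketch of a per-cycle linear unlocking schedule: the locked total is
# released in equal tranches at the end of each cycle. All parameters are
# illustrative assumptions, not locking_coin_ex.move's exact terms.

def unlocked_amount(total: int, start: int, cycle: int, n_cycles: int, now: int) -> int:
    """Tokens unlocked by time `now` (timestamps in seconds)."""
    if now < start:
        return 0
    elapsed_cycles = min((now - start) // cycle, n_cycles)
    return total * elapsed_cycles // n_cycles

DAY = 86_400
total, start = 12_000, 0
# 12 monthly cycles: 1,000 tokens unlock at the end of each 30-day cycle.
assert unlocked_amount(total, start, 30 * DAY, 12, now=0) == 0
assert unlocked_amount(total, start, 30 * DAY, 12, now=30 * DAY) == 1_000
assert unlocked_amount(total, start, 30 * DAY, 12, now=90 * DAY) == 3_000
assert unlocked_amount(total, start, 30 * DAY, 12, now=400 * DAY) == 12_000
```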

Design Purpose

The core of the design is to control the circulation speed of tokens through a "locking" and "gradual unlocking" mechanism, preventing sudden surges in token circulation and thereby stabilizing the market.

Design Objectives:

  • Prevent Market Volatility: Gradual token unlocking helps avoid market fluctuations caused by large-scale token releases.

  • Incentivize Long-Term Holding: The locking mechanism encourages users to hold tokens for the long term, contributing to the token's value stability.

  • Transparent Management: The automatic execution and event recording of the smart contract ensure transparency and traceability in the token locking and unlocking processes.

