Exemplary HTTP Processing Protocol Design

  • http
  • protocol
  • web-application
  • network
  • architecture
  • security
  • nlap
  • microservices
  • english

posted on 26 Dec 2025 under category network

Post Meta-Data

Date: 26.12.2025
Language: English
Author: Claus Prüfer (Chief Prüfer)
Description: Exemplary HTTP Processing and Protocol Design - Reimagining Web Application Protocols

Exemplary HTTP Processing / Protocol Design

Foreword

The Hypertext Transfer Protocol (HTTP) has served as the backbone of the World Wide Web for over three decades. However, as web applications have evolved from simple page-serving systems to complex, distributed microservices architectures, the fundamental limitations of HTTP have become increasingly apparent. This article examines the critical issues with current HTTP implementations and proposes a modernized approach to web application protocols.


The HTTP/1.1 Problem

Ancient Protocol for Modern Times

HTTP/1.1, standardized in 1997, was designed during an era when the primary purpose of the web was to serve static HTML pages. The protocol’s architecture reflects this page-centric paradigm, which has become increasingly misaligned with contemporary web application requirements.

De-multiplexing Issues

HTTP/1.1 introduced pipelined connections as a feature designed to overcome performance limitations by allowing multiple requests to be sent over a single HTTP connection/socket without waiting for responses. This feature was specifically intended to circumvent head-of-line blocking problems, as the protocol specification did not require strict serial send/receive ordering.

Why Pipelined Connections Failed:

The pipelined connections feature ultimately did not succeed due to protocol errors in HTTP/1.1 proxy processing. While the HTTP/1.1 specification allowed for pipelined requests, proxy implementations contained critical bugs:

Key Problems:

  • Proxy Implementation Errors: HTTP/1.1 proxies incorrectly processed pipelined connections, introducing serialization requirements not mandated by the protocol
  • Response Ordering Corruption: Proxies failed to maintain proper request-response matching in pipelined scenarios
  • Unpredictable Behavior: The same pipelined request stream would work correctly sometimes and fail other times depending on proxy state
  • Non-Deterministic Failures: Intermediaries could arbitrarily reorder, delay, or drop pipelined requests

Because proxy implementations could not reliably handle pipelined connections, browsers and applications were forced to either:

  1. Disable pipelining entirely (the common choice)
  2. Open multiple parallel connections to work around the limitation
  3. Revert to strict serial request/response patterns

This proxy-induced failure of HTTP/1.1 pipelining fundamentally degrades performance for modern, scaled web applications that require concurrent, asynchronous communication patterns. The feature that was supposed to solve performance problems became unusable due to broken intermediary implementations.
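To make the mechanism concrete, the following sketch builds the byte stream a pipelining client would write to a single socket: two complete requests sent back-to-back, before any response arrives. No actual network I/O is performed; the host and paths are illustrative.

```python
# Two pipelined HTTP/1.1 requests written back-to-back on one
# connection, without waiting for the first response (a sketch).

def build_request(path: str, host: str) -> bytes:
    """Serialize a minimal HTTP/1.1 GET request."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: keep-alive\r\n"
        f"\r\n"
    ).encode("ascii")

# A pipelining client sends both requests immediately, one after
# another, over the same TCP socket.
stream = build_request("/index.html", "example.org") + \
         build_request("/style.css", "example.org")

print(stream.count(b"GET"))  # two requests in a single byte stream
```

It is precisely this kind of multi-request stream that broken proxy implementations reordered, delayed, or dropped.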

(Figure: HTTP pipelining)

HTTP Evolution: Progress and Limitations

HTTP/2: The Binary Revolution

HTTP/2, introduced in 2015, was designed primarily to solve the head-of-line blocking problem. However, this problem had already been solved by HTTP/1.1 pipelined connections, which allowed non-serial transmission of multiple requests into one socket. The HTTP/2 designers focused on re-solving this already-solved problem rather than addressing the root cause: broken proxy implementations.

What HTTP/2 Designers Focused On:

  • Head-of-Line Blocking: The primary design goal was to eliminate request queuing
  • Multiplexing: Allowing multiple concurrent streams over a single connection
  • Binary Protocol: Replacing text-based HTTP/1.1 with binary framing

The Irony:

HTTP/1.1 pipelined connections had already solved head-of-line blocking by enabling non-serial request transmission. Instead of fixing the buggy proxy implementations that broke pipelining, the industry chose to create an entirely new protocol (HTTP/2) to work around the same problem.

HTTP/3: The UDP Experiment

HTTP/3, introduced with the QUIC protocol, represents the latest attempt to evolve HTTP for modern web requirements. The protocol brings several technical innovations, most notably the “Connection migration” feature—a genuinely useful capability that allows connections to survive network changes (such as switching from WiFi to cellular).

The UDP Decision:

However, HTTP/3’s fundamental architectural choice of switching from TCP to UDP at the transport layer seems questionable, given the added complexity of reimplementing reliability (error detection and retransmission) in user space. Furthermore, since HTTP/2 did not actually fix the head-of-line blocking problem (it only moved the blocking down to the TCP layer), the entire premise of HTTP/3, as the next iteration on that design, becomes questionable as well.

A Simple Solution: UUID-Based Request Tracking

The Elegant Fix for HTTP/1.1

A remarkably simple solution exists for HTTP/1.1’s serialization problem that was never adopted:

Implementation:

  1. Client Side: Generate and send a unique UUID (Universally Unique Identifier) with each request header
  2. Server Side: Include the corresponding UUID in each response header
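The two steps above can be sketched in a few lines of Python. The header name X-Request-UUID is an assumption for illustration; the article does not prescribe one.

```python
import uuid

def tag_request(headers: dict) -> dict:
    """Client side: attach a unique UUID to each outgoing request."""
    headers = dict(headers)
    headers["X-Request-UUID"] = str(uuid.uuid4())  # hypothetical header name
    return headers

def echo_uuid(request_headers: dict, response_headers: dict) -> dict:
    """Server side: copy the request UUID into the response header."""
    response_headers = dict(response_headers)
    response_headers["X-Request-UUID"] = request_headers["X-Request-UUID"]
    return response_headers

# With the UUID echoed back, responses may arrive in any order and
# still be matched unambiguously to their originating requests.
req = tag_request({"Host": "example.org"})
res = echo_uuid(req, {"Content-Type": "application/json"})
print(res["X-Request-UUID"] == req["X-Request-UUID"])  # True
```

Because correlation no longer depends on response ordering, intermediaries could reorder responses freely without corrupting request-response matching.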

The Industry’s Wrong Direction:

Despite the simplicity of this UUID-based solution, the industry chose a different path. Instead of adopting straightforward fixes like UUID tracking or repairing broken proxy implementations, the industry invested billions into creating increasingly complex protocol versions (HTTP/2, HTTP/3) that fail to address fundamental architectural problems. This pattern reveals a systemic issue: rather than fixing root causes, the industry adds layers of complexity that perpetuate the same page-serving paradigm while modern applications need something entirely different.

This realization leads to an unavoidable conclusion: the industry is going in the complete wrong direction. Continuing to patch HTTP’s fundamental mismatch with modern application architectures is futile. What’s needed is not another HTTP version, but a complete new protocol designed from the ground up for how applications actually communicate today.

Note: Despite the maturation of high-capacity network technologies, including fiber-to-the-premises connectivity and 800 Gbit/s Ethernet architectures, dynamic web applications such as analytics dashboards and e-commerce platforms continue to exhibit suboptimal page-load performance. Even at the close of 2025, end-user response times frequently exceed acceptable thresholds by several seconds.

Proposing NLAP: Next Level Application Protocol

A Fresh Start

Rather than continuing to patch an aging protocol, we propose a fundamental redesign: NLAP (Next Level Application Protocol).

Design Principles

  1. 🎯 Purpose-Built: Designed for modern web applications, not document retrieval
  2. ⚡ Simplicity: Minimal protocol overhead, maximum efficiency
  3. 🚀 Performance: Meet real-time requirements
  4. 🔀 Clear Separation: Different protocols for different use cases
  5. 🔐 Security-Centric: Built-in security rather than bolted-on

Encryption / AAA / Security Layer Model

NLAP embraces a layered security model that fundamentally differs from HTTP’s approach. Rather than embedding security mechanisms directly into the application protocol, NLAP delegates Transport Layer Encryption and AAA (Authentication, Authorization, and Accounting) to a dedicated security layer.

Centralized Security Proxy:

Security functions are handled by a central “Proxy” component, positioned between clients and application servers. This architectural decision yields several critical advantages:

  1. Reduced Protocol Complexity: NLAP protocols remain lightweight, focusing purely on application communication without security overhead
  2. Flexible Deployment: The security proxy can be deployed as:
    • A single server setup for simple deployments
    • Decapsulated components (Kubernetes pods or similar) for cloud-native architectures
    • SDN/OpenFlow modules for network-level integration
  3. Independent Security Evolution: Security mechanisms can be updated and hardened without modifying application protocols
  4. Unified Policy Enforcement: Single point for implementing authentication and authorization across all NLAP sub-protocols

Clean Layered Architecture:

This separation creates a clear security boundary, enabling better firewall configurations, simplified auditing, and reduced attack surface compared to HTTP’s monolithic security model where TLS, authentication, and application logic are tightly coupled.

Modular Authentication:

Authentication / Authorization modules (e.g. SSO, Client Certificates) can be integrated at this layer.

Connection Migration / Load Balancing:

In addition to network connection migration, NLAPP (the Next Level Application Proxy Protocol) supports transparent TCP session migration between backend servers in the event of a single-server outage.

NLAP Sub-Protocols

NLAP consists of three specialized sub-protocols, each designed for specific communication patterns:

  • 📡 NLAMP (Next Level Application Metadata Protocol): Application server communication for service calls and JSON-based data exchange
  • 📁 NLAFP (Next Level Application File Protocol): Static file delivery for images, stylesheets, and JavaScript resources
  • 🔌 NLASP (Next Level Application Socket Protocol): Real-time bidirectional communication for chat, messaging, and live updates

NLAMP: Next Level Application Metadata Protocol

NLAMP serves as the primary application server protocol, designed specifically for modern service-oriented architectures and API communication patterns.

Primary Use Cases:

  • Service Calls: Remote procedure calls (RPC) to backend services
  • Input JSON: Client applications provide structured JSON input for processing
  • JSON Results: Server responses delivered as structured JSON data

Key Characteristics:

  • Lightweight request/response model optimized for API calls
  • Native JSON support eliminating serialization overhead
  • UUID-based request tracking for reliable request-response correlation
  • Designed for high-frequency, low-latency microservice communication

NLAMP replaces the generic HTTP request/response pattern with a protocol specifically tailored for application-to-application communication, eliminating the page-centric overhead of traditional HTTP while maintaining simplicity and performance.
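A minimal sketch of what an NLAMP service call could look like follows. The envelope fields and the 4-byte length-prefix framing are assumptions for illustration; the article specifies only JSON payloads and UUID-based correlation, not a wire format.

```python
import json
import uuid

def encode_nlamp(service: str, payload: dict) -> bytes:
    """Encode a hypothetical NLAMP service call: a JSON envelope with
    a correlation UUID, length-prefixed for framing (assumed format)."""
    envelope = {
        "uuid": str(uuid.uuid4()),
        "service": service,
        "payload": payload,
    }
    body = json.dumps(envelope).encode("utf-8")
    return len(body).to_bytes(4, "big") + body

def decode_nlamp(frame: bytes) -> dict:
    """Decode a frame back into its JSON envelope."""
    length = int.from_bytes(frame[:4], "big")
    return json.loads(frame[4:4 + length].decode("utf-8"))

frame = encode_nlamp("cart.add", {"item": 42, "qty": 1})
msg = decode_nlamp(frame)
print(msg["service"], msg["payload"]["item"])  # cart.add 42
```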

NLAFP: Next Level Application File Protocol

NLAFP handles all static file exchange requirements, providing efficient delivery of non-streamed resources essential for web application functionality.

Primary Use Cases:

  • Static File Download: Non-streamed retrieval of complete files
  • Image Resources: PNG, JPEG, SVG, and other image formats
  • CSS Stylesheets: Application styling and theme files
  • JavaScript Files: Client-side application logic and libraries

Key Characteristics:

  • Optimized for small to medium-sized file transfers
  • Simple request/response pattern without streaming complexity
  • Built-in support for caching headers and validation
  • Efficient handling of concurrent file requests
  • Clear separation from application logic and real-time communication

Note: NLAFP is explicitly designed for non-streamed file transfer. Large file downloads, video streaming, and similar use cases requiring chunked or streamed delivery would use different mechanisms or protocol extensions.

NLASP: Next Level Application Socket Protocol

NLASP provides direct WebSocket-style connections for scenarios requiring persistent, bidirectional communication between server and client.

Primary Use Cases:

  • Chat Applications: Real-time messaging between users
  • Live Notifications: Server-initiated push updates to clients
  • Collaborative Editing: Multi-user document or application state synchronization
  • Gaming: Low-latency bidirectional communication for interactive applications

Key Characteristics:

  • Persistent connection model eliminating connection overhead
  • Bidirectional message flow (client-to-server and server-to-client)
  • Low-latency communication suitable for real-time applications
  • Message-based protocol with clear framing
  • Native support for connection lifecycle management

NLASP recognizes that certain application patterns require fundamentally different communication models than request/response. By providing a dedicated socket protocol, NLAP eliminates the need for WebSocket tunneling through HTTP, resulting in cleaner architecture and better performance.
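The message-based framing with clear direction could be sketched as follows; the frame layout (1-byte direction flag plus 4-byte length prefix) is purely illustrative, since the article defines no concrete wire format.

```python
import struct

# Hypothetical NLASP frame: 1-byte direction flag + 4-byte length + body.
C2S, S2C = 0, 1  # client-to-server / server-to-client

def pack_frame(direction: int, message: bytes) -> bytes:
    """Serialize one NLASP-style message frame (assumed layout)."""
    return struct.pack("!BI", direction, len(message)) + message

def unpack_frame(frame: bytes) -> tuple[int, bytes]:
    """Split a frame back into its direction flag and message body."""
    direction, length = struct.unpack("!BI", frame[:5])
    return direction, frame[5:5 + length]

# A server-initiated push, as used for live notifications.
frame = pack_frame(S2C, b'{"event": "new_message"}')
direction, body = unpack_frame(frame)
print(direction, body.decode())  # 1 {"event": "new_message"}
```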

Dedicated TCP Ports

One of NLAP’s most significant advantages over HTTP is the use of dedicated TCP ports for each sub-protocol, enabling superior firewall management without requiring deep packet inspection (DPI).

Port Assignments:

  • Port 65000: NLAMP (Next Level Application Metadata Protocol)
  • Port 65001: NLAFP (Next Level Application File Protocol)
  • Port 65002: NLASP (Next Level Application Socket Protocol)

Firewalling Benefits:

Traditional HTTP/HTTPS architectures force all application traffic through ports 80/443, making it impossible to differentiate between different types of communication at the network layer. Firewalls must either:

  1. Allow all traffic on these ports (overly permissive)
  2. Implement deep packet inspection (computationally expensive, privacy-invasive)
  3. Terminate SSL/TLS connections for inspection (security compromise)

NLAP’s dedicated port model solves these problems fundamentally:

Granular Access Control:

  • Protocol-Level Filtering: Firewalls can allow/deny specific protocols based solely on port numbers
  • No DPI Required: Simple port-based rules achieve fine-grained control without inspecting packet contents
  • Clear Intent: Port numbers immediately identify communication purpose and expected behavior
  • Simplified Policy: Network administrators write simple, maintainable firewall rules
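As an illustration, such port-based policies could be expressed with plain iptables rules; the source network and the decision to restrict NLASP are assumptions, not part of the NLAP specification.

```shell
# Allow API traffic (NLAMP) from anywhere
iptables -A INPUT -p tcp --dport 65000 -j ACCEPT
# Allow static file delivery (NLAFP) from anywhere
iptables -A INPUT -p tcp --dport 65001 -j ACCEPT
# Restrict real-time sockets (NLASP) to the internal network only
iptables -A INPUT -p tcp --dport 65002 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 65002 -j DROP
```

No deep packet inspection is involved; each rule matches on the TCP port alone.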

Protocol Encapsulation / Schema Validation

NLAP employs a multi-layered schema validation architecture to establish robust message integrity and security guarantees. The protocol specification mandates that each message envelope be structured in XML syntax with formally defined Document Type Definitions (DTDs), establishing a clear separation between protocol infrastructure and application payload.

The server-side validation infrastructure leverages Apache Xerces’ DTD validation engine, with all protocol message schemas preloaded during initialization to ensure deterministic validation behavior and eliminate runtime schema resolution overhead. This architectural decision yields three critical advantages:

Interoperability and Standards Compliance: The adoption of XML/DTD as the envelope format ensures broad toolchain compatibility and adherence to established W3C standards, facilitating seamless integration across heterogeneous system architectures.

Structural Integrity Enforcement: Rigorous schema validation at the protocol boundary guarantees well-formed message structures, eliminating entire classes of parsing vulnerabilities and injection attacks that plague loosely-typed protocol implementations.

Attack Surface Reduction: By enforcing strict schema compliance prior to application-level processing, the validation layer serves as a critical security control, rejecting malformed or malicious payloads before they reach business logic layers, thereby significantly constraining exploitation vectors.
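An NLAP message envelope in the mandated XML/DTD style might look as follows; the element names and inline DTD are illustrative assumptions, as the article does not publish the actual schemas.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE nlap-envelope [
  <!ELEMENT nlap-envelope (header, payload)>
  <!ELEMENT header (uuid, protocol)>
  <!ELEMENT uuid (#PCDATA)>
  <!ELEMENT protocol (#PCDATA)>
  <!ELEMENT payload (#PCDATA)>
]>
<nlap-envelope>
  <header>
    <uuid>550e8400-e29b-41d4-a716-446655440000</uuid>
    <protocol>NLAMP</protocol>
  </header>
  <payload>{"service": "cart.add", "item": 42}</payload>
</nlap-envelope>
```

A validating parser such as Xerces rejects any message that does not conform to the preloaded DTD before the payload ever reaches application code.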

YANG Modeling

NLAP adopts YANG (Yet Another Next Generation) as the canonical data modeling language for protocol specification, complementing the XML/DTD envelope definitions with a formal, machine-readable contract language. This dual-representation strategy reflects contemporary best practices in network protocol design, where YANG has emerged as the de facto standard for modeling configuration and state data in IETF specifications (RFC 7950).

The YANG models serve multiple critical functions within the NLAP ecosystem:

Formal Specification and Documentation: YANG’s declarative syntax provides unambiguous protocol semantics, eliminating interpretational ambiguities inherent in natural-language specifications. Version-controlled YANG models constitute a normative reference for protocol evolution, with each revision explicitly documenting schema modifications, deprecations, and extensions.

Toolchain Integration: The YANG ecosystem provides extensive code generation capabilities, enabling automatic derivation of validation logic, serialization frameworks, and API bindings across multiple programming languages. This automation reduces implementation errors and accelerates client library development.

Standards-Track Publication: The XML/DTD/YANG tri-format specification framework positions NLAP for formal standardization through Next Level RFC publications. This standards-oriented approach ensures long-term protocol stability, vendor-neutral governance, and community-driven evolution consistent with Internet Engineering Task Force (IETF) protocols development processes.
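A YANG module for an NLAMP message might be sketched as below; module name, namespace, and leaves are hypothetical, since no official NLAP YANG models have been published.

```yang
// Hypothetical YANG sketch of an NLAMP message (illustrative only).
module nlamp-message {
  namespace "urn:example:nlamp";
  prefix nlamp;

  container message {
    leaf uuid {
      type string;
      description "Correlation UUID echoed in the response.";
    }
    leaf service {
      type string;
      description "Target service identifier.";
    }
    leaf payload {
      type string;
      description "JSON-encoded application payload.";
    }
  }
}
```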

Modern Application Packaging

NLAP proposes a novel approach to application initialization:

Concept: Application Package(s) on Startup

Implementation:

  1. Client requests application
  2. Server responds with application package status (custom 2xx response)
  3. Package delivered: nice-app-v1.2.tar.bz2
  4. Package contains:
    • Base HTML skeleton
    • Stylesheets and themes
    • JavaScript bundles
    • Multi-language text resources
    • Image assets

Advantages:

  • Single download for complete (or sub-part) application structure
  • Efficient caching of application bundles
  • Reduced number of NLAFP requests
  • Clear separation between static and dynamic content

Update Mechanism:

  • Client sends current version hash
  • Server responds with new package only if updated
  • Differential updates possible for large applications
  • Clear versioning eliminates cache invalidation issues
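The hash-based update check can be sketched as follows; the registry contents and the use of SHA-256 as the version hash are assumptions for illustration.

```python
import hashlib

# Server-side package registry (names and contents are assumptions).
PACKAGES = {"nice-app": b"...bundle bytes for nice-app-v1.2.tar.bz2..."}

def package_hash(name: str) -> str:
    """Content hash identifying the currently deployed package version."""
    return hashlib.sha256(PACKAGES[name]).hexdigest()

def check_update(name: str, client_hash: str):
    """Return the new package only if the client's copy is outdated."""
    if client_hash == package_hash(name):
        return None  # client is up to date; nothing is transferred
    return PACKAGES[name]

# First visit: the client has no package and receives the full bundle.
bundle = check_update("nice-app", client_hash="")
# Revisit: the client reports the current hash; the server sends nothing.
print(bundle is not None, check_update("nice-app", package_hash("nice-app")))
```

Because the hash identifies the exact bundle contents, cache invalidation reduces to a single string comparison.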

HTTP/1.2 Prototype: Proof of Concept

Reference Implementation

An early, unfinished prototype implementation at https://github.com/WEBcodeX1/http-1.2 demonstrates these concepts in practice.


References:

  • HTTP/1.2 Prototype: https://github.com/WEBcodeX1/http-1.2
  • RFC 2616: HTTP/1.1 Specification
  • RFC 7540: HTTP/2 Specification
  • RFC 9114: HTTP/3 Specification
  • RFC 6020 / RFC 7950: YANG Data Modeling Language
