note-251110-p2p-utility-process-architecture
P2P Utility Process Architecture
Date: 2025-11-10
Issue: Handle `whtnxt://connect` Custom Protocol
Status: In Progress - Architecture Design Phase
Architectural Decision: P2P Service as Utility Process
Context
While implementing the whtnxt:// protocol handler for P2P connections, we need to decide where the P2P networking logic lives in the Electron architecture.
Decision
The P2P connection management service will run as a separate Electron utility process, isolated from both the main process (controller) and renderer process (view).
Rationale
Separation of Concerns (MVC-like Pattern)
- Main Process: Controller - handles application lifecycle, window management, protocol registration, and orchestration
- Utility Process: Service Layer - handles P2P networking, WebRTC connections, signaling, and RxDB replication coordination
- Renderer Process: View - handles UI rendering, user interactions, and presents connection state
Technical Benefits
- **Process Isolation**
  - P2P networking code runs in its own Node.js process
  - Crashes in P2P logic don't take down the main window or app
  - Memory leaks or performance issues in WebRTC are isolated
  - Easier debugging: can attach a Node.js debugger to the utility process independently
- **Clean IPC Boundaries**
  - Main process receives protocol URLs → forwards to utility process
  - Utility process emits connection events → main process → renderer
  - Clear message-passing architecture enforces loose coupling
  - Aligns with Electron security best practices
- **Future Scalability**
  - Can spawn multiple utility processes for multiple concurrent P2P sessions
  - Easier to move to a separate service later (per the spec's `/service` directory vision)
  - Enables testing the utility process independently, without Electron overhead
- **RxDB Integration**
  - Utility process owns the RxDB instance for P2P replication
  - Renderer can query read-only views via IPC or a shared database file
  - Separates data sync logic from UI rendering
Architectural Alignment
This aligns with:
- Spec §2.3: Helper service for P2P signaling (this utility process is the MVP precursor)
- CLAUDE.md: "Minimize IPC surface" (utility process encapsulates all P2P complexity)
- Security Posture: Further isolation from renderer sandbox
Architecture Diagram
```
┌─────────────────────────────────────────────────────────────┐
│                       Operating System                      │
│     (Receives whtnxt://connect URLs from browser/links)     │
└────────────────────────────┬────────────────────────────────┘
                             │
                             │ Protocol Handler Registration
                             ▼
┌─────────────────────────────────────────────────────────────┐
│                  MAIN PROCESS (Controller)                  │
│  - App lifecycle                                            │
│  - Window management                                        │
│  - Protocol registration (app.setAsDefaultProtocolClient)   │
│  - IPC orchestration                                        │
│                                                             │
│  Responsibilities:                                          │
│  1. Receive whtnxt:// URLs from OS                          │
│  2. Forward to Utility Process via MessagePort/IPC          │
│  3. Relay connection events to Renderer                     │
│  4. Manage utility process lifecycle (spawn/kill)           │
└───────────────┬───────────────────────────┬─────────────────┘
                │                           │
                │ MessagePort/IPC           │ IPC via preload
                │                           │
                ▼                           ▼
┌──────────────────────────────┐   ┌─────────────────────────┐
│  UTILITY PROCESS (Service)   │   │ RENDERER PROCESS (View) │
│  - P2P connection management │   │ - React UI              │
│  - WebRTC (simple-peer)      │   │ - User interactions     │
│  - Signaling protocol        │   │ - Connection status UI  │
│  - RxDB replication engine   │   │ - Playlist views        │
│  - Peer discovery            │   │                         │
│                              │   │ Responsibilities:       │
│  Responsibilities:           │   │ 1. Display connection   │
│  1. Parse whtnxt:// URLs     │   │    requests             │
│  2. Initiate WebRTC          │   │ 2. Show peer status     │
│     connections              │   │ 3. Render playlists     │
│  3. Manage peer lifecycle    │   │ 4. User confirmations   │
│  4. Coordinate RxDB sync     │   │                         │
│  5. Emit connection events   │   │                         │
└───────────────┬──────────────┘   └─────────────────────────┘
                │
                │ P2P Network (WebRTC)
                ▼
┌─────────────────────────────────────────────────────────────┐
│                        REMOTE PEERS                         │
│    (Other WhatNext instances running same architecture)     │
└─────────────────────────────────────────────────────────────┘
```
Implementation Strategy
Phase 1: Foundation (Issue)
- **Shared Core Library** (`/app/src/shared/core`)
  - Protocol types and parsing logic
  - P2P message protocol definitions
  - Utility process / main process communication contracts
- **Utility Process** (`/app/src/utility/p2p-service.ts`) (wiring sketched after this list)
  - Spawned via `utilityProcess.fork()` in main.ts
  - Receives connection requests via `MessagePort`
  - Manages WebRTC connections using simple-peer
  - Emits connection lifecycle events
- **Main Process Changes** (`/app/src/main/main.ts`)
  - Register the `whtnxt://` protocol handler
  - Spawn the utility process on app startup
  - Forward protocol URLs to the utility process
  - Relay utility process events to the renderer via IPC
- **Preload Script** (`/app/src/main/preload.ts`)
  - Expose `p2p.onConnectionRequest(callback)`
  - Expose `p2p.acceptConnection(peerId)`
  - Expose `p2p.rejectConnection(peerId)`
  - Expose `p2p.getConnectedPeers()`
- **Renderer Integration** (`/app/src/renderer/services/`)
  - React hooks: `useP2PConnection()`, `useConnectedPeers()`
  - UI components for connection requests
  - Zustand store for connection state (backed by IPC)
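
A minimal sketch of the spawn-and-forward wiring, assuming a single-window startup; the file paths, the `init` handshake, and the `protocol:url` message type are illustrative assumptions, not finalized contracts:

```ts
// main.ts — sketch of spawning the P2P service and forwarding protocol URLs
import { app, BrowserWindow, utilityProcess, MessageChannelMain } from 'electron'
import path from 'node:path'

app.whenReady().then(() => {
  app.setAsDefaultProtocolClient('whtnxt')

  const win = new BrowserWindow({
    webPreferences: { preload: path.join(__dirname, 'preload.js') },
  })

  // Spawn the utility process and hand it one end of a dedicated channel.
  const service = utilityProcess.fork(path.join(__dirname, '../utility/p2p-service.js'))
  const { port1, port2 } = new MessageChannelMain()
  service.postMessage({ type: 'init' }, [port2])

  // Relay service events to the renderer, mapping e.g.
  // 'connection:request' -> 'p2p:connection-request' (see IPC protocol below).
  port1.on('message', ({ data }) => {
    win.webContents.send(`p2p:${data.type.replace(':', '-')}`, data.payload)
  })
  port1.start()

  // macOS delivers protocol URLs via 'open-url'; on Windows/Linux they
  // arrive as argv of a 'second-instance' event (omitted here).
  app.on('open-url', (_event, url) => {
    port1.postMessage({ type: 'protocol:url', payload: { url } })
  })
})
```

On the other side, the utility process picks up the transferred port from `process.parentPort`:

```ts
// p2p-service.ts — sketch of the utility-process entry point
import { parseWhtnxtUrl } from '../shared/core/protocol' // sketched under File Structure below

process.parentPort.on('message', (e) => {
  const [port] = e.ports // the MessagePort transferred by main on 'init'
  if (!port) return

  port.on('message', ({ data }) => {
    if (data.type === 'protocol:url') {
      const request = parseWhtnxtUrl(data.payload.url)
      if (request) {
        port.postMessage({
          type: 'connection:request',
          payload: { ...request, timestamp: new Date().toISOString() },
        })
      }
    }
  })
  port.start()
})
```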
Phase 2: Advanced Features (Post-MVP)
- Migrate utility process logic to the `/service` directory (separate repo/process)
- Implement signaling server in helper service
- Add WebRTC connection pooling
- Implement CRDT conflict resolution
- Add encryption layer
IPC Communication Protocol
Main Process → Utility Process

```ts
// Main sends to Utility via MessagePort
{
  type: 'connection:initiate',
  payload: {
    peerId: string,
    metadata?: Record<string, unknown>
  }
}
```
Utility Process → Main Process

```ts
// Utility sends to Main via MessagePort
{
  type: 'connection:request',
  payload: {
    peerId: string,
    displayName: string,
    timestamp: string
  }
}

{
  type: 'connection:established',
  payload: {
    peerId: string
  }
}

{
  type: 'connection:failed',
  payload: {
    peerId: string,
    error: string
  }
}
```
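
These contracts could live in `/app/src/shared/core/ipc-protocol.ts` as a discriminated union; the exact encoding below is a sketch, not a committed API:

```ts
// ipc-protocol.ts — one possible encoding of the message contracts
export type MainToUtility =
  | { type: 'connection:initiate'; payload: { peerId: string; metadata?: Record<string, unknown> } }
  | { type: 'protocol:url'; payload: { url: string } } // assumed by the wiring sketch above

export type UtilityToMain =
  | { type: 'connection:request'; payload: { peerId: string; displayName: string; timestamp: string } }
  | { type: 'connection:established'; payload: { peerId: string } }
  | { type: 'connection:failed'; payload: { peerId: string; error: string } }
```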
Main Process → Renderer (via IPC)

```ts
// Main relays to Renderer via webContents.send
// (ipcRenderer.send goes renderer -> main, so it cannot be used here)
mainWindow.webContents.send('p2p:connection-request', { peerId, displayName })
mainWindow.webContents.send('p2p:connection-established', { peerId })
mainWindow.webContents.send('p2p:connection-failed', { peerId, error })
```
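
A hypothetical preload shape for these channels; the `invoke`-style command channels (`p2p:accept-connection`, etc.) are assumptions beyond the event names above:

```ts
// preload.ts — sketch of the p2p bridge exposed to the renderer
import { contextBridge, ipcRenderer } from 'electron'

contextBridge.exposeInMainWorld('p2p', {
  onConnectionRequest: (cb: (req: { peerId: string; displayName: string }) => void) => {
    ipcRenderer.on('p2p:connection-request', (_event, req) => cb(req))
  },
  acceptConnection: (peerId: string) => ipcRenderer.invoke('p2p:accept-connection', peerId),
  rejectConnection: (peerId: string) => ipcRenderer.invoke('p2p:reject-connection', peerId),
  getConnectedPeers: () => ipcRenderer.invoke('p2p:get-connected-peers'),
})
```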
File Structure
```
/app
  /src
    /shared                       # Shared code across processes (new)
      /core
        /protocol.ts              # whtnxt:// URL parsing
        /types.ts                 # P2P message types
        /ipc-protocol.ts          # IPC message contracts
    /utility                      # Utility process (P2P service) (new)
      /p2p-service.ts             # Main entry point for utility process
      /connection-manager.ts      # WebRTC connection lifecycle
      /signaling-client.ts        # Signaling protocol (manual for MVP)
      /replication-engine.ts      # RxDB P2P replication coordination
    /main
      /main.ts                    # Spawn utility process, protocol registration
      /protocol-handler.ts        # Protocol URL handling logic (new)
      /utility-bridge.ts          # MessagePort bridge to utility process (new)
    /renderer
      /services
        /p2p-client.ts            # IPC client for P2P features (new)
      /hooks
        /useP2PConnection.ts      # React hook for connection state (new)
      /components
        /Connection
          /ConnectionRequest.tsx  # UI for incoming connection requests (new)
          /PeerList.tsx           # UI for connected peers (new)
```
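
For `protocol.ts`, a minimal parsing sketch; the URL shape (`whtnxt://connect?peer=<id>&name=<displayName>`) is an assumption until the protocol is specified:

```ts
// protocol.ts — sketch of whtnxt:// URL parsing
export interface ConnectRequest {
  peerId: string
  displayName?: string
}

export function parseWhtnxtUrl(raw: string): ConnectRequest | null {
  let url: URL
  try {
    url = new URL(raw)
  } catch {
    return null // not a URL at all
  }
  if (url.protocol !== 'whtnxt:' || url.hostname !== 'connect') return null
  const peerId = url.searchParams.get('peer')
  if (!peerId) return null
  return { peerId, displayName: url.searchParams.get('name') ?? undefined }
}
```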
Security Considerations
- **Process Sandboxing**
  - Utility process has no window/UI access
  - Cannot spawn child processes without explicit permission
  - Limited filesystem access (only the RxDB data directory)
- **IPC Validation** (see the sketch after this list)
  - All messages validated against schemas before processing
  - Peer IDs validated (length, charset) to prevent injection
  - Rate limiting on connection requests
- **WebRTC Security**
  - Only accept connections from known peers (after user approval)
  - Implement connection timeout (30s default)
  - Validate SDP offers/answers before accepting
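
For example, the IPC validation point could look like the following sketch; the peer-ID rules are placeholders, not a decided format:

```ts
// Boundary validation in the utility process — illustrative only
type IncomingMessage = { type: string; payload: Record<string, unknown> }

const PEER_ID_RE = /^[A-Za-z0-9_-]{8,64}$/ // length + charset guard

export function validateMessage(raw: unknown): IncomingMessage | null {
  if (typeof raw !== 'object' || raw === null) return null
  const msg = raw as Partial<IncomingMessage>
  if (typeof msg.type !== 'string') return null
  if (typeof msg.payload !== 'object' || msg.payload === null) return null
  const { peerId } = msg.payload as { peerId?: unknown }
  // If a peerId is present, reject anything outside the allowed shape.
  if (peerId !== undefined && (typeof peerId !== 'string' || !PEER_ID_RE.test(peerId))) return null
  return msg as IncomingMessage
}
```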
Testing Strategy
- **Unit Tests** (example after this list)
  - Protocol URL parsing (shared core)
  - Message validation
  - Connection state machine
- **Integration Tests**
  - Main → Utility IPC communication
  - Utility → Renderer IPC relay
  - Protocol handler registration
- **E2E Tests** (future: barebones test peer)
  - Spawn 2 utility processes programmatically
  - Simulate connection handshake
  - Verify RxDB replication
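
As a concrete example of the unit-test layer, a parser test using `node:test`; it assumes the hypothetical `parseWhtnxtUrl` sketched earlier:

```ts
// protocol.test.ts — runs without Electron, per the success criteria
import test from 'node:test'
import assert from 'node:assert/strict'
import { parseWhtnxtUrl } from '../shared/core/protocol'

test('parses a valid connect URL', () => {
  const req = parseWhtnxtUrl('whtnxt://connect?peer=abc123def&name=Alice')
  assert.deepEqual(req, { peerId: 'abc123def', displayName: 'Alice' })
})

test('rejects other hosts and malformed input', () => {
  assert.equal(parseWhtnxtUrl('whtnxt://other?peer=abc123def'), null)
  assert.equal(parseWhtnxtUrl('not-a-url'), null)
})
```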
Migration Path to /service (Phase 3+)
The utility process architecture is designed as a stepping stone to the spec's /service directory vision:
- Current: Utility process spawned by main process
- Future: Standalone service (Express/Fastify) that multiple Electron instances connect to
- Migration: Swap MessagePort IPC with WebSocket client in main.ts; business logic unchanged
The shared core library (/app/src/shared/core) becomes the protocol contract between client and service.
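
One way to keep that swap cheap is a small transport seam in the shared core; this interface is a sketch of the idea, not a committed API:

```ts
// transport.ts — seam between business logic and the IPC mechanism
import type { MessagePortMain } from 'electron'

export interface P2PEnvelope {
  type: string
  payload: Record<string, unknown>
}

export interface ServiceTransport {
  send(msg: P2PEnvelope): void
  onMessage(handler: (msg: P2PEnvelope) => void): void
}

// Phase 1: back the seam with a MessagePortMain.
export function portTransport(port: MessagePortMain): ServiceTransport {
  return {
    send: (msg) => port.postMessage(msg),
    onMessage: (handler) => {
      port.on('message', (e) => handler(e.data))
      port.start()
    },
  }
}

// Phase 3: implement the same interface over a WebSocket client;
// callers of ServiceTransport stay unchanged.
```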
Performance Considerations
- Startup Time: Utility process spawns async after main window loads (non-blocking)
- Memory: ~30-50MB overhead per utility process (acceptable for P2P service)
- IPC Latency: MessagePort is ~0.1ms (negligible for connection events)
- WebRTC Throughput: Isolated process prevents renderer jank during data transfer
Alternatives Considered (and Rejected)
❌ Run P2P in Main Process
- Problem: Ties networking logic to app controller
- Problem: Main process complexity grows unbounded
- Problem: Harder to test in isolation
❌ Run P2P in Renderer Process
- Problem: Violates the security sandbox (Node.js required for WebRTC)
- Problem: Keeping connections alive across window close/reload is awkward
- Problem: Cannot run headless for testing
❌ Web Workers in Renderer
- Problem: No Node.js APIs (WebRTC requires Node)
- Problem: Limited IPC capabilities
- Problem: Doesn't help with process isolation
Open Questions
- **RxDB Instance Location**: Should the utility process own RxDB, or should the main process own it while the utility process coordinates replication?
  - Leaning towards: utility process owns the RxDB instance for P2P collections
  - Rationale: keeps all replication logic in one place
- **Multi-Peer Connections**: Spawn one utility process per peer, or one utility process managing all peers?
  - Leaning towards: one utility process, multiple connections
  - Rationale: simpler IPC, easier state management; can scale to N utility processes later if needed
- **Signaling for MVP**: Manual copy-paste, or integrate a public signaling service?
  - Decision: manual copy-paste for this issue; build signaling in a separate issue (see the sketch below)
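
For the manual option, the flow with simple-peer would look roughly like this; running in Node requires an external WebRTC implementation, and the `wrtc` package here is an assumption of the sketch:

```ts
// Manual copy-paste signaling sketch for the MVP
import Peer from 'simple-peer'
import wrtc from 'wrtc' // Node has no built-in WebRTC; one must be supplied

// With trickle disabled, each side emits a single signal blob.
const peer = new Peer({ initiator: true, trickle: false, wrtc })

// Show this offer to the user, who delivers it to the remote peer out of band.
peer.on('signal', (offer) => {
  console.log('Send to remote peer:', JSON.stringify(offer))
})

// Called when the remote peer's answer is pasted back in.
export function onAnswerPasted(answerJson: string): void {
  peer.signal(JSON.parse(answerJson))
}

peer.on('connect', () => console.log('P2P data channel open'))
```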
Success Criteria
This architecture is successful if:
- Utility process can be spawned/killed independently
- Protocol URL handling works end-to-end (OS → main → utility → renderer)
- Connection state survives renderer hot-reload (dev mode)
- Can test P2P logic without starting the full Electron app
- Clear migration path to a standalone `/service` process
Next Steps
- ✅ Document architecture decision (this file)
- Create `/app/src/shared/core` directory structure
- Implement protocol parsing logic
- Create utility process scaffold
- Implement MessagePort bridge in main.ts
- Build IPC relay to renderer via preload
- Test with two Electron instances
References
- Issue: Handle `whtnxt://connect` Custom Protocol
- Spec §2.3: Backend & Network Architecture
- CLAUDE.md: Architecture Principles (IPC Communication)
- Electron Docs: Utility Process
- note-251109-custom-protocol-barebones-peer.md: Test peer architecture