Service Architecture · Internal Reference

ZuluTrade CRM at a glance

A top-down walkthrough of every service that powers the ZuluTrade platform — from user sign-up to broker connection to live trade-signal routing. Each module card below links to its detailed breakdown.

System at a Glance
Diagram · ZuluTrade CRM Application Server — module map:

  • User Onboarding — sign-up · SSO · sessions
  • Account Connection — leaders · copiers · broker connect
  • Copier Flow — signup → subscription → start copying
  • Leader Flow — application → admin approval → registered
  • Trading Flow — signal pipeline · ingest → execute
  • Bridges — ACT Bridge · node-middleware
  • Analytics — trade history → ClickHouse → stats
  • Rewards — copy-trading earnings · payouts
  • Badges — gamification · milestones → history
  • Other Services — notifications · subs · rewards · community
  • Admin — admin-express · admin-service · admin-mysql
  • Ops / Data Pipeline — data-pipeline · import-history
  • Data & Infra — storage strategy · databases · caches
  • Inventory — full service runtime reference

Databases / stores: MariaDB · PostgreSQL · Redis · ⚠ MongoDB · ClickHouse · 🐇 RabbitMQ (message bus)

Each module below has a detailed section, and each section lists its external integrations.

Module 01 · Application Layer

User Onboarding & Login

How a new user goes from zero to an authenticated session. The flow starts at client-express, writes a lead into legacy MariaDB, fires an OTP email via communication → Sendgrid, then (on OTP submit) creates the canonical user record in both MariaDB and Postgres and broadcasts a userRegister event on RabbitMQ. SSO logins skip the OTP step entirely.

🔗 External integrations
Apple / Google SSO · Sendgrid
Diagram · User Onboarding / Login — FE (Browser) → client-express; auth-service handles SSO (Apple / Google) and JWT / session; flavour-service supplies country · flavour id; client-mysql writes lead / user into MariaDB legacy (zulu3_biz · zulu3_session); communication sends the OTP email via Sendgrid; users-service persists to zulu3 (Postgres) and publishes 🐇 userRegister on RabbitMQ.

Services

client-express
Public HTTP gateway. All browser traffic enters here and is fanned out to internal services.
client-mysql
Thin adapter over the legacy MariaDB databases (zulu3_biz, zulu3_session). Used only for login and legacy registration compatibility.
users-service
Canonical user identity. Owns the user record in zulu3 Postgres. All user lookups go here post-onboarding.
auth-service
Issues sessions / JWTs. Talks to Apple & Google SSO providers for federated login.
flavour-service
Domain-level config: country list, white-label flavour toggles, feature flags per deployment.
communication
Owns transactional email. Reads templates from zulu3_communication and dispatches via Sendgrid.

Sign-up Sequence — Email + OTP

  1. Register request — FE submits the registration form to client-express.
  2. Lead created — client-express writes a lead record into MariaDB via client-mysql.
  3. OTP email — client-express triggers the communication service, which dispatches the OTP email through Sendgrid.
  4. OTP verified — user submits the OTP; client-express → client-mysql creates the user record in MariaDB. This record is later used by client-express to issue JWT tokens and manage login sessions.
  5. Postgres mirror — client-express sends a POST to users-service, which creates the matching user record in the Postgres zulu3 database.
  6. Event broadcast — users-service publishes a userRegister event on RabbitMQ so every downstream service can react.
  7. ACT customer profile — connector-hub listens to the userRegister event and passes it on to temp-trader, which creates an ACT customer profile. On successful profile creation, an ActCustomer event is published on RabbitMQ; users-service consumes it and stores the details in the Postgres user profile.
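The event hand-off in steps 6–7 can be sketched with a tiny in-memory pub/sub. The Bus class, payload fields, and the act- ID prefix are hypothetical stand-ins; only the userRegister / ActCustomer topic names come from the actual flow:

```typescript
// Hypothetical event shapes — the real payloads live in users-service / temp-trader.
type UserRegister = { userId: string; email: string; source: "otp" | "sso" };
type ActCustomer = { userId: string; actCustomerId: string };

// Minimal in-memory stand-in for the RabbitMQ exchange.
class Bus {
  private handlers = new Map<string, ((msg: unknown) => void)[]>();
  subscribe(topic: string, fn: (msg: unknown) => void): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(fn);
    this.handlers.set(topic, list);
  }
  publish(topic: string, msg: unknown): void {
    for (const fn of this.handlers.get(topic) ?? []) fn(msg);
  }
}

const bus = new Bus();
const profile: Record<string, string> = {}; // stand-in for the Postgres user profile

// connector-hub → temp-trader: react to userRegister by creating an ACT customer profile.
bus.subscribe("userRegister", (msg) => {
  const u = msg as UserRegister;
  bus.publish("ActCustomer", { userId: u.userId, actCustomerId: "act-" + u.userId });
});

// users-service: persist the ACT details onto the user profile.
bus.subscribe("ActCustomer", (msg) => {
  const a = msg as ActCustomer;
  profile[a.userId] = a.actCustomerId;
});

bus.publish("userRegister", { userId: "u1", email: "a@b.c", source: "otp" });
```

The point of the sketch is the ordering: the user record exists before userRegister fires, so every consumer can resolve the user when it reacts.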

Sign-up Sequence — SSO (Google / Apple)

  1. client-express forwards the SSO request directly to auth-service.
  2. The provider verifies email ownership. On success, client-express creates the user if it does not already exist (skipping the OTP step — the provider has already verified the address) and issues the session without requiring a password.

Flavour / Country

On the registration form, flavour-service provides the country list and tells the FE which flavour ID is linked to the current domain, so the right white-label configuration and feature flags are applied.

⚠ Tech Debt
MariaDB (zulu3_biz, zulu3_session) is the legacy store kept only for login & registration parity. All new user state should be written to Postgres zulu3. A full cutover is on the roadmap.
↑ Back to overview
Module 02 · App + ACT/FTL Layer

Account Connection

Once the user is registered & logged in, they must choose to be a Leader or Copier. FE asks connector-hub for available platforms: a Leader can connect MT4, MT5, ACT, Ctrade, Telegram or Discord; a Copier can connect MT4, MT5, ACT or Ctrade. connector-hub issues a unique ID, mounts the signal source on the relevant bridge (act-bridge / social-bridge), creates a mirror ACT account via temp-trader, and — for a Leader — sets up the Leader profile on FTL via leader-service.

🔗 External integrations
FTL S281 (demo) · ACT Brokers · K8s cluster · node-middleware
Diagram · Account Connection, combined landscape — FE (Browser) → client-express → connector-hub; bridges (social-bridge / node-middleware for MT, Ctrade, TG / Discord; act-bridge for the ACT broker) publish state to the 🐇 sessions queue; temp-trader creates the ACT account and emits 🐇 ActCustomer, consumed by users-service into Postgres zulu3; leader-service (Leader) → FTL; copier-service (Copier).

How it works

After login, FE asks connector-hub for the platforms available to this user's role. Leader can pick from MT4, MT5, ACT, Ctrade, Telegram, Discord. Copier can pick from MT4, MT5, ACT, Ctrade.

Common steps for any platform:

1. connector-hub issues a unique account ID and forwards the connection request to the relevant bridge — social-bridge (node-middleware) for MT4/MT5/Ctrade/TG/Discord, act-bridge for ACT.
2. The bridge mounts the signal source and publishes state transitions on the sessions queue ({connector_connected, synced} progressing from false/false → true/false → true/true).
3. connector-hub consumes those state events. Once {connector_connected: true, synced: true} arrives, it requests temp-trader to create a mirror ACT customer account (for trade mirroring).
4. temp-trader creates the ACT account and publishes an ActCustomer event on RabbitMQ; users-service consumes it and stores the details in the Postgres user profile.
5. If the role is Leader, leader-service creates the Leader profile on FTL. If the role is Copier, copier-service wires the copy relationship.
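The state machine in steps 2–3 only ever moves forward. A guard like the following (hypothetical names; a sketch of how connector-hub could validate out-of-order queue deliveries) captures the invariant:

```typescript
// Session state published by the bridges on the sessions queue.
type SessionState = { connector_connected: boolean; synced: boolean };

// The only legal progression: {false,false} → {true,false} → {true,true}.
const ORDER: SessionState[] = [
  { connector_connected: false, synced: false },
  { connector_connected: true, synced: false },
  { connector_connected: true, synced: true },
];

function rank(s: SessionState): number {
  return ORDER.findIndex(
    (o) => o.connector_connected === s.connector_connected && o.synced === s.synced,
  );
}

// Accept an incoming event only if it advances the session by exactly one step.
function isValidTransition(current: SessionState, incoming: SessionState): boolean {
  return rank(incoming) === rank(current) + 1;
}

// The mirror ACT account is requested only on the terminal state.
function readyForMirrorAccount(s: SessionState): boolean {
  return s.connector_connected && s.synced;
}
```

Modelling the states as a ranked list makes duplicate or stale deliveries from the queue cheap to discard.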

Diagram · MT4 / MT5 account connection — User (FE) submits credentials → connector-hub issues an address → social-bridge (node-middleware) creates a K8s MT4/MT5 terminal pod; 🐇 sessions queue states {false,false} → {true,false} → {true,true}; on sync, temp-trader creates the ACT account, symbols are pulled from social-bridge for mapping, and (if Leader) leader-service → FTL.

How it works

1. Submit credentials — User submits MT4/MT5 credentials from FE. connector-hub issues an address and sends a connection request to social-bridge (node-middleware).

2. Initial state — social-bridge publishes an event on the sessions queue with { connector_connected: false, synced: false }. connector-hub consumes it to track state.

3. Pod creation — social-bridge creates an MT4/MT5 terminal pod on the K8s cluster and publishes { connector_connected: true, synced: false }.

4. Profile sync — social-bridge pings the MT container, pulls profile data, and publishes { connector_connected: true, synced: true }.

5. Mirror account — connector-hub requests an ACT account via temp-trader. On success, symbols are pulled from social-bridge and symbol mapping is performed.

6. Role-specific step — If Leader, leader-service sets up the Leader profile on FTL. If Copier, copier-service wires the copy relationship to the chosen leader(s).

Diagram · Ctrade account connection — User (FE) → connector-hub → social-bridge (Ctrade adapter) → Ctrader OAuth; user picks an authorized account; connector-hub issues a child address + connect request; 🐇 sessions queue states {false,false} → {true,false} → {true,true}; on true/true, temp-trader creates the ACT account, symbol mapping runs, and (if Leader) leader-service → FTL.

How it works

1. Credentials — User submits credentials from FE. connector-hub issues an address and forwards the request to social-bridge (Ctrade adapter).

2. OAuth login — social-bridge redirects the user through the Ctrader OAuth link. User authorizes access at Ctrader.

3. Authorized accounts list — On successful OAuth, social-bridge receives the list of authorized Ctrader accounts and returns it to the user.

4. User selects account — User picks which Ctrader account to mount.

5. Child address + connection request — connector-hub issues a child address for the selected account and sends the account connection request to social-bridge.

6. Sessions events (3 states) — social-bridge publishes three events on the sessions queue: {false,false} → {true,false} → {true,true}.

7. Mirror ACT account — On the final {connector_connected:true, synced:true} event, connector-hub requests a mirror ACT account from temp-trader. Symbols are then pulled and symbol mapping runs.

8. Role-specific step — If Leader, leader-service sets up the Leader profile on FTL. If Copier, copier-service wires the copy relationship. Ctrade is available for both roles.

Diagram · ACT account connection — User (FE) → connector-hub → act-bridge opens the ACT broker session directly; temp-trader creates the mirror ACT account; symbols are pulled and mapped; (if Leader) leader-service → FTL.

How it works

1. Credentials — User submits ACT credentials. connector-hub issues an address and sends a connection request to act-bridge.

2. Open ACT broker session — act-bridge opens the ACT broker session directly (no sessions-queue hand-off for ACT). On success, the session is handed back to connector-hub.

3. Mirror ACT account — connector-hub requests a mirror ACT account via temp-trader. Symbols are pulled and symbol mapping runs.

4. Role-specific step — If Leader, leader-service sets up the Leader profile on FTL. If Copier, copier-service wires the copy relationship.

Note: ACT is available for both Leader and Copier roles.

Diagram · Telegram account connection (Leader only) — User (FE) provides a TG handle + token → connector-hub → social-bridge (Telegram adapter) attaches a TG bot / channel listener; 🐇 sessions queue {false,false} → {true,true}; temp-trader creates the mirror ACT account; leader-service → FTL.

How it works

Leader-only path. Telegram is not available as a Copier source — copying flows still use MT/ACT/Ctrade for execution.

1. Connect handle — User provides their Telegram handle / bot token. connector-hub issues an address and forwards to social-bridge (Telegram adapter).

2. Sessions state — social-bridge publishes { connector_connected: false, synced: false }, attaches a TG bot / channel listener, then publishes { connector_connected: true, synced: true }.

3. Mirror ACT + Leader profile — temp-trader creates the mirror ACT account (so copiers can execute real trades off the Leader's TG signals); leader-service sets up the Leader profile on FTL.

Diagram · Discord account connection (Leader only) — same shape as Telegram: User (FE) provides Discord user + server → connector-hub → social-bridge (Discord adapter) attaches to the Discord channel / webhook; 🐇 sessions queue {false,false} → {true,true}; temp-trader creates the mirror ACT account; leader-service → FTL.

How it works

Leader-only path. Discord (like Telegram) is not available for Copiers — their execution still uses MT/ACT/Ctrade.

1. Connect server — User provides their Discord user + server / channel. connector-hub forwards to social-bridge (Discord adapter).

2. Sessions state — social-bridge publishes { connector_connected: false, synced: false }, attaches to the Discord channel / webhook, then publishes { connector_connected: true, synced: true }.

3. Mirror ACT + Leader profile — temp-trader creates the mirror ACT account; leader-service sets up the Leader profile on FTL.

Services

connector-hub
Central orchestrator. Issues the unique account ID, routes the request to the right bridge, tracks session state from the sessions queue, and triggers the mirror-account + Leader/Copier wiring.
social-bridge (node-middleware)
Mounts MT4/MT5 (K8s pod), Ctrade, Telegram, and Discord connections. Publishes state transitions on the sessions queue.
act-bridge
Mounts ACT broker sessions. Handles ACT-specific authentication and symbol discovery.
temp-trader
Creates the mirror ACT customer account for every connection (even non-ACT sources) so that trades can be mirrored. Emits ActCustomer events.
leader-service
Sets up the Leader profile on FTL after a successful connection — so the user's orders are broadcast as signals.
copier-service
Wires the copy relationship for a Copier — allocation, risk caps, copy-start / copy-stop logic. Not used for Telegram/Discord sources.
users-service
Consumes ActCustomer events and stores the details in the Postgres user profile.

Role availability

  • Leader → MT4, MT5, ACT, Ctrade, Telegram, Discord.
  • Copier → MT4, MT5, ACT, Ctrade only (social sources can't act as copy execution venues).
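The role availability above reduces to a static lookup. A sketch (the real gating lives in connector-hub; names here are illustrative):

```typescript
type Role = "leader" | "copier";
type Platform = "MT4" | "MT5" | "ACT" | "Ctrade" | "Telegram" | "Discord";

// Role → connectable platforms, per the availability list above.
const AVAILABLE: Record<Role, Platform[]> = {
  leader: ["MT4", "MT5", "ACT", "Ctrade", "Telegram", "Discord"],
  copier: ["MT4", "MT5", "ACT", "Ctrade"], // social sources can't execute copies
};

function canConnect(role: Role, platform: Platform): boolean {
  return AVAILABLE[role].includes(platform);
}
```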
Module 03 · Lifecycle

Copier Flow

How a user becomes a Copier — signup through subscription gating, leader selection, and start-copying activation, then the symmetrical closeout. This module is a narrative that leans on the other modules for detail; it doesn't re-describe services that already have a home elsewhere.

Not in this module: signal ingestion, trade execution, bridges, fills, persistence, analytics. Those live in Modules 05 (Trading Flow), 06 (Bridges), 07 (Analytics). See the Trading Flow module for what happens once a Copier is active.

🔗 External integrations
Stripe · FTL S281 (demo)
Diagram · Copier Flow, signup → active copy session — 1. Account setup (auth · users) → 2. Broker connection (connector-hub · temp-trader) → 3. Subscription gating (subscriptions · Stripe, entitlement + eligibility) → 4. Leader discovery (leaders-service → FTL) → 5. Start copying (connector-hub → copier-service creates the follower on FTL; 🐇 startCopy on RabbitMQ; rewards-service creates the copy session; notifications-service sends confirmation) → Copier is active, Trading Flow takes over → 6. Stop copying, on user action (connector-hub → copier-service remove; 🐇 stopCopy; rewards-service closes the session).

Phase 1 · Account setup

User signs up or logs in. Detail lives in Module 01 Onboarding & Login — SSO flows (Apple / Google), OTP verification, profile writes by users-service, session issuance by auth-service. Not duplicated here.

Phase 2 · Broker connection

User connects a trading account. Detail lives in Module 02 Account Connection. The orchestrator is connector-hub; demo accounts route through temp-trader to S281. Live accounts go through the MT4 / MT5 / ACT / cTrader connectors.

Phase 3 · Subscription gating

The commercial layer. Copy-trading is gated by a subscription — free or paid.

  • Paid path — subscriptions exposes plan / cart / payment endpoints. The user checks out through Stripe; a Stripe webhook triggers activation; the subscription is marked active and feature entitlements are written.
  • Free path — a background consumer inside subscriptions processes free-plan activation events. Passing eligibility → auto-activation; failing → user falls back to a paid plan.
  • Credits are allocated per plan; audit history is recorded; state lives in PostgreSQL.

Without an active subscription with the right entitlement, the Start Copying step (below) is blocked.
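The gate is a simple predicate over subscription state. A hypothetical sketch (the field names and the "copy-trading" entitlement key are assumptions, not the real subscriptions schema):

```typescript
// Assumed shape of the subscription record held in PostgreSQL.
type Subscription = {
  status: "active" | "expired" | "cancelled";
  entitlements: string[];
};

// Start Copying is blocked unless the subscription is active AND carries the
// copy-trading entitlement (free and paid plans both end up here).
function canStartCopying(sub: Subscription | null): boolean {
  return (
    sub !== null &&
    sub.status === "active" &&
    sub.entitlements.includes("copy-trading")
  );
}
```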

Phase 4 · Leader discovery / selection

The UI lists available Leaders. client-express fronts the requests; Leader records are fetched via leaders-service, which forwards to FTL. leaders-service owns no data — it's a thin HTTP gateway with logging / validation.

Phase 5 · Start copying

The actual activation. connector-hub orchestrates:

  1. Calls copier-service to create the follower record — forwarded to FTL.
  2. Publishes a startCopy event on RabbitMQ (Social Trader broker, vhost /zulu, exchange zulu).
  3. rewards-service consumes startCopy, creates a copy session in PostgreSQL, and schedules recurring reward jobs via BullMQ. Detail in Module 08 Rewards.
  4. notifications-service sends a "you're now copying X" confirmation via the user's preferred channel.

From this point on, the Copier is active. Signals from the Leader's trades flow through Module 05 Trading Flow and Module 06 Bridges — not this module.
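The activation ordering in Phase 5 matters: the follower record must exist on FTL before startCopy is broadcast, so consumers never see a copy session for a follower FTL doesn't know about. A sketch with in-memory stand-ins (names hypothetical) for the FTL call and the RabbitMQ consumers:

```typescript
type StartCopy = { copierId: string; leaderId: string };

const ftlFollowers = new Set<string>(); // stand-in for follower records on FTL
const copySessions: StartCopy[] = []; // stand-in for rewards-service sessions
const sentNotifications: string[] = []; // stand-in for notifications-service

// copier-service → FTL: create the follower record.
function createFollowerOnFtl(e: StartCopy): void {
  ftlFollowers.add(`${e.copierId}:${e.leaderId}`);
}

// Stand-in for the startCopy consumers (rewards-service, notifications-service).
function onStartCopy(e: StartCopy): void {
  copySessions.push(e); // rewards-service: bootstrap the copy session
  sentNotifications.push(`now copying ${e.leaderId}`); // notifications-service
}

// connector-hub orchestration: follower first, then broadcast.
function startCopying(e: StartCopy): void {
  createFollowerOnFtl(e); // 1. follower CRUD via copier-service → FTL
  onStartCopy(e); // 2. publish startCopy; consumers react
}

startCopying({ copierId: "c1", leaderId: "l9" });
```

stopCopy is the mirror image: remove the follower on FTL first, then broadcast so rewards-service closes the session.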

Phase 6 · Stop copying (closeout)

  1. User stops from UI → connector-hub → copier-service removes the follower record on FTL.
  2. connector-hub publishes stopCopy on RabbitMQ.
  3. rewards-service closes the copy session. Background workers continue to process pending reward transactions, handle overdue / recovery, and run month-end settlement so earned rewards aren't lost even if the service was briefly down.
  4. The subscription itself continues unless the user cancels / pauses separately via subscriptions APIs.
  5. If the subscription ends, copying also stops — subscriptions publishes the subscription-end event; connector-hub reacts by triggering stopCopy for the affected follower records.

Services touched in this lifecycle

Service | Role in Copier Flow | Primary module
connector-hub | Orchestrator (calls copier-service, publishes startCopy / stopCopy) | 02
copier-service | Follower CRUD → FTL | 02
subscriptions | Plan / cart / activation / renewal (Stripe) | inline here · breadcrumb in 10
rewards-service | Copy-session bootstrap on startCopy; session close on stopCopy | 08
notifications-service | User confirmations | 10
🧭 Where the copy flow picks up next
Once the Copier is active, every trade the Leader places fans out through the signal pipeline to this Copier's execution venue. That's a different module — see Module 05 Trading Flow for the real-time pipeline and Module 06 Bridges for how orders reach the actual broker.
Module 04 · Lifecycle

Leader Flow

How a user becomes a Leader — signup through broker connection, Leader application, admin approval, registration on FTL, and strategy setup. Unlike Copier Flow, Leader registration has a human in the loop: an admin reviews and approves each new Leader.

Not in this module: the trade signal publishing once a Leader is active, bridges, analytics. Leader trades flow through Module 05 Trading Flow.

🔗 External integrations
FTL
👑 Admin-gated
Unlike Copier registration (self-serve), Leader registration requires explicit admin approval via admin-express + admin-service. No subscription is required — Leader participation is free-to-register and earning-based.
Diagram · Leader Flow, application → admin approval → active — 1. Account setup (auth · users) → 2. Broker connection (connector-hub · temp-trader) → 3. Leader application (connector-hub) → 4. Admin approval (admin-express · admin-service, human in the loop; rejected → flow ends) → 5. Leader registration + strategy setup (connector-hub → leaders-service create → FTL; strategy / watchlist configured via connector-hub; notifications-service informs the user) → Leader is active (trades → Trading Flow, earnings → Rewards) → 6. Closeout, by user or admin (self-serve deregistration, or admin revokes via admin-express → connector-hub → leaders-service remove).

Phase 1 · Account setup

User signs up or logs in. Detail in Module 01 Onboarding & Login — SSO, OTP, profile writes. Not duplicated here.

Phase 2 · Broker connection

User connects the broker whose trades they want to publish. Detail in Module 02 Account Connection. Orchestrated by connector-hub; ACT-side connections go through temp-trader. Demo accounts aren't relevant here — Leaders need a live broker.

Phase 3 · Leader application

The user submits a Leader application from the client UI via connector-hub. The application creates a pending Leader record waiting for admin review.

Phase 4 · Admin approval

An admin reviews the application via admin-express. admin-service handles roles / permissions / approval audit on the admin side. The admin either approves or rejects:

  • Rejected — the flow ends. The user is notified; no Leader record is created on FTL.
  • Approved — the approval action triggers Phase 5. The audit record is retained in admin-service.

Phase 5 · Leader registration + strategy setup

  1. On approval, connector-hub calls leaders-service to create the Leader record. leaders-service owns no state — it forwards to FTL, which holds the actual Leader record.
  2. connector-hub handles leader / watchlist / strategy management (symbols, marketing description, copy settings — exact scope depends on how "strategy" is configured in the platform).
  3. notifications-service sends a "you're now a Leader" confirmation.

Registration is synchronous — no RabbitMQ lifecycle events on the Leader side (unlike startCopy / stopCopy for Copiers).
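The admin gate in Phases 4–5 is a synchronous branch rather than an event. A minimal sketch, with hypothetical names and an in-memory stand-in for the FTL leader record:

```typescript
type Application = { userId: string; status: "pending" | "approved" | "rejected" };

const ftlLeaders = new Set<string>(); // stand-in for Leader records on FTL

// admin-express approval action: approve → register on FTL in the same call
// chain (connector-hub → leaders-service → FTL); reject → flow ends.
function review(app: Application, approve: boolean): Application {
  if (app.status !== "pending") throw new Error("application already reviewed");
  if (approve) ftlLeaders.add(app.userId);
  return { ...app, status: approve ? "approved" : "rejected" };
}
```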

Phase 6 · Active leader and closeout

Once registered, the Leader's trades become the source of signals. That flow lives in Module 05 Trading Flow. Earnings from copier activity accrue through Module 08 Rewards.

Closeout is two-path:

  • Self-serve — user deregisters from the client UI → connector-hub → leaders-service (remove) → FTL.
  • Admin-revoked — admin uses admin-express to revoke the Leader → same downstream path.

Services touched in this lifecycle

Service | Role in Leader Flow | Primary module
client-express | Application submission from user UI | 01
auth-service | Session + SSO | 01
users-service | Profile | 01
connector-hub | Orchestrator (calls leaders-service, owns strategy management) | 02
leaders-service | Leader CRUD → FTL | 02
temp-trader | ACT-side broker integration | 02
admin-express | Admin review + approval gateway | 11
admin-service | Roles / permissions / approval audit | 11
rewards-service | Accrues Leader earnings from copier activity | 08
notifications-service | Approval / rejection / removal notifications | 10
🧭 Where the leader picks up from here
Once active, every trade the Leader places is a signal in the copy-trading pipeline. See Module 05 Trading Flow for how that signal fans out to Copiers, and Module 08 Rewards for how Leader earnings accumulate.
Module 05 · The Hot Path

Trading Flow

The real-time pipeline that carries trade signals from every entry point — MT4/MT5 terminal, Telegram/Discord, Zulu Web (Demo and Act accounts), and ACT broker feeds — through the common Signal Processor and Signal Out services, out to the execution venues (FTL, ACT brokers, MT terminals). This is the performance-critical heart of the platform.

🔗 External integrations
FTL · ACT Brokers · Node Middleware (Meta)
🧠 Design principle
Trading has four entry paths — MT4/MT5 terminal, Social (Telegram/Discord), Demo web, and Act platform/web — but they all converge on two common services: Signal Processor and Signal Out. MT & Social come in through Node Middleware → Signal junction → Signal Processor. Web orders enter through Act web trader → Signal Out. Broker-originated events (ACT / Fxview fills) enter via Order Update service over WebSocket and get bridged into RabbitMQ. Signal Processor calls FTL (s281) over HTTP for leader/copier execution; FTL pushes the resulting signals back over WebSocket to act-signal-processor, which publishes ftl-signal events consumed by Signal Out. Nothing on the hot path touches Postgres directly — state lives in Redis, every step is published to RabbitMQ, and the Persistence service consumes signal_orders / trade_logs and writes asynchronously to PGSQL. If Redis is lost the cache is rebuilt from the database.
Diagram · Trading flow, high-level services overview — entry (MT4/MT5, TG/Discord, Act/Demo web) → Node Middleware / Act web trader / Order Update service → Signal junction → Signal Processor ⇄ FTL s281 (HTTP out; WebSocket ftl-signal back via act-signal-processor) → Signal Out (🐇 signal.out for MT copiers via Node Middleware; 🐇 ACT queue looped back to Signal Processor for Demo/Act copiers; OU&lt;AccountID&gt; web-order acks) → 🐇 Persistence queue (signal_orders · trade_logs) → Persistence service → PGSQL; Redis is the shared live-state store.
Diagram · Flow 1, Meta (MT4 / MT5) — MT4/MT5 → Node Middleware → Signal junction → 🐇 MT4 / MT5 queues → Signal Processor → FTL s281 (HTTP; defects repaired by the external signal-synchronizer polling Redis); Zulu Web orders → Act web trader → 🐇 web-orders → Signal Out; 🐇 ftl-signal returns over WebSocket via act-signal-processor → 🐇 signal.out; 🐇 signal_orders · trade_logs → Persistence service → PGSQL.

How it works

Market order from Meta Terminal: Signal junction receives it and forwards it to the platform-specific queue. Signal Processor executes the trade on the FTL account. Order updates come back from Act s281 over WebSocket into Signal Out, which acts on them. If the order is from a leader, act-signal-processor receives copier signals from FTL and pushes them to the signal.out RabbitMQ queue.

SL/TP add or update from Meta: Signal junction forwards to Signal Processor. We only update Redis + DB — SL/TP is not executed on the FTL account.

Close / partial close from Meta Terminal: Signal junction → Signal Processor → executed on the FTL account.

Pending orders (with or without SL/TP): stored and updated only in Redis + DB. Not executed on FTL until Meta fires the pending order — then Signal Processor runs the trade.

Orders from Zulu Web: come via WebSocket to the Act Trader system → pushed to RabbitMQ → consumed by Signal Out. Rest of the flow is the same as Meta orders.

Note: if a leader adds SL/TP, it is not applied to copier accounts.
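The Meta rules above split into "execute on FTL now" versus "Redis + DB only". A classifier sketch (the action names are illustrative labels for the cases described, not a real enum in the codebase):

```typescript
// Meta-originated actions as described in the flow above.
type MetaAction =
  | "market" // execute on FTL
  | "close" // execute on FTL
  | "partial_close" // execute on FTL
  | "sltp_update" // Redis + DB only, never executed on FTL
  | "pending_create" // Redis + DB only, until Meta fires it
  | "pending_fired"; // Meta fired the pending order → execute on FTL

// Actions that are only persisted, never sent to the FTL account.
const STORE_ONLY = new Set<MetaAction>(["sltp_update", "pending_create"]);

function executesOnFtl(action: MetaAction): boolean {
  return !STORE_ONLY.has(action);
}
```

Keeping the store-only set explicit mirrors the rule that SL/TP changes and unfired pending orders never touch the FTL account.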

Diagram · Flow 2, Social (Telegram / Discord) — TG/Discord → Node Middleware → Signal junction → 🐇 TG_CONNECTOR / DISCORD queues → Signal Processor → FTL s281; copier signals return via act-signal-processor and Signal Out dispatches them to 🐇 signal.out (MT copiers) or the 🐇 ACT queue (Demo/Act copiers, consumed back into Signal Processor); 🐇 signal_orders · trade_logs → Persistence service → PGSQL, as in Flow 1.

How it works

For Telegram and Discord, the flow is the same as Meta with one key difference: we do not push leader events to the signal.out queue. Only copier events are pushed from Signal Out. All actions are executed on the FTL account.

Market / pending order from a social platform: Signal junction → Signal Processor → executed on the FTL account.

SL/TP from a social platform: Signal junction → Signal Processor → executed on the FTL account (social platforms aren't directly connected to a trading platform like Meta).

Close / partial close / modify from a social platform: Signal junction → Signal Processor → executed on the FTL account.

Orders from Zulu Web: via WebSocket to Signal Out → RabbitMQ → Signal Processor → FTL. (Some actions are still pending / not working as expected.)

Note: leader SL/TP is not applied to copiers.

Diagram · Flow 3, Demo (demo account from Web) — Zulu Web (Demo) → Act web trader → 🐇 web-orders → Signal Out → 🐇 ACT queue → Signal Processor → FTL s281; order updates / copier signals return over WebSocket via act-signal-processor (🐇 ftl-signal → 🐇 signal.out for MT copiers via Node Middleware); 🐇 Persistence queue (signal_orders · trade_logs) → Persistence service → PGSQL.

How it works

Demo accounts are not connected to any external trading platform. The demo account lives on the FTL server (s281); all actions happen on the FTL account. Orders are placed only from Zulu Web.

Market / pending order from Zulu Web: via WebSocket to Act web trader → Signal Out. Signal Out publishes to RabbitMQ based on platform + leader/copier logic. For demo, Signal Processor consumes and executes on the FTL account.

Once the FTL account executes, order updates and copier signals (if any) come back. From here the flow is identical to the Meta flow.

SL/TP add / update, close, partial close from Zulu Web: executed on the FTL account and saved to Redis + DB.

Note: leader SL/TP is not applied to copiers.

Diagram · Flow 4, Act Platform — Act / Fxview s245 events → Order Update service (WebSocket) → 🐇 act rmq → Act-bridge-signal-processor → Signal Processor → FTL s281; Zulu Web (Act) orders → Act web trader → 🐇 web-orders → Signal Out → 🐇 act order rmq → Act Bridge Service → connected broker (Fxview s245); returns branch into the Meta flow (if MT4/5 copiers) or the Demo flow (if demo); 🐇 Persistence queue (signal_orders · trade_logs) → Persistence service → PGSQL.

How it works

The Act flow is similar to Meta, with one key difference: Meta uses Node Middleware, Act uses act-bridge. The rest is mostly the same with a few service swaps.

Order placed from Act Terminal: updates first hit Order Update service → pushed to RabbitMQ → consumed by Act-bridge-signal-processor → creates an event for Signal Processor → Signal Processor executes on the FTL account. Order updates come back from Act s281 via WebSocket to Signal Out. If the order is from a leader, copier signals are generated and pushed to signal.out.

SL/TP add / update from Act: Order Update → RabbitMQ → act-bridge processes logic → saves to Redis + DB. Not executed on the FTL account.

Close / partial close from Act Terminal: Order Update → RabbitMQ → act-bridge processes → Signal Processor → executed on the FTL account.

Pending orders (with SL/TP): stored / updated in Redis + DB via act-bridge-signal-processor only. When Act fires the pending order we get a signal and then execute on FTL.

Orders from Zulu Web: via WebSocket to the Act trader system → pushed to RabbitMQ → consumed by Signal Out → pushed again to RabbitMQ → consumed by act-bridge → executed on the connected broker platform (currently Fxview s245). Updates return through Order Update and continue via act-bridge-signal-processor, which processes the logic and saves to the database.

Pipeline Stages

  1. Ingestion — MT4/MT5 and TG/Discord orders enter via Node Middleware. Web orders (Demo & Act accounts) enter via Act web trader System. ACT platform and Fxview s245 broker events enter via Order Update service over WebSocket.
  2. Classification — Signal junction routes each MT/Social signal to a platform-specific queue (MT4, MT5, TG_CONNECTOR, DISCORD). Defect detection (price=0 for MT4/MT5/cTrader) writes a defect:<algo>_<order> key to Redis for the external signal-synchronizer to repair.
  3. Processing — Signal Processor (ts-zulu3.0-signal-processor) consumes the platform queues and the shared ACT queue. It calls FTL s281 via HTTP (leader / copier / order URLs) to execute leader-copier fan-out and publishes signal_orders + trade_logs to the persistence pipeline.
  4. Return / Routing — FTL pushes resulting signals over WebSocket to act-signal-processor, which publishes ftl-signal on the ftl queue. Signal Out consumes it and dispatches via platform publishers: MT copiers → routing key out on signal.out queue (consumed by Node Middleware → terminal); Demo/Act/TG/Discord copiers → routing key ACT on the ACT queue (looped back into Signal Processor); ACT broker dispatch → act signal out queue → Act Bridge Service.
  5. Web-order ack path — For orders placed via Zulu Web, Signal Out also publishes an immediate acknowledgement on routing key OU<AccountID> (Order Update ack) and an RPC reply to the caller's reply queue, then ACKs the web-orders message last.
  6. ACT execution feedback — ACT broker dealer sockets push trade / order events to Order Update service over WebSocket. Order Update republishes on the signalData exchange (routing key order). Act-bridge-signal-processor consumes the act.order queue, normalises, and publishes back into Signal Processor via the ACT queue to keep state coherent.
  7. Persistence — Persistence service binds the persistence queue to the signal_orders and trade_logs topics published by Signal Processor and Signal Out. It writes to PostgreSQL (signal_processor.signals, signal_processor.trade_logs) asynchronously, keeping the hot path DB-free. Redis holds the live state; if it is lost the cache is rebuilt from Postgres.
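The classification stage (step 2) reduces to a small pure function. A sketch under assumptions: the queue names and the defect:<algo>_<order> key format come from the stage description above; the signal shape and function name are invented for illustration.

```typescript
// Hypothetical signal shape — only platform, algo/order ids, and price matter here.
interface InboundSignal {
  platform: "MT4" | "MT5" | "TG_CONNECTOR" | "DISCORD" | "CTRADE";
  algo: number;
  order: number;
  price: number;
}

// Routes the signal to its platform queue; a zero price on MT4/MT5/cTrader
// additionally yields a Redis defect key for signal-synchronizer to repair.
function classify(sig: InboundSignal): { queue: string; defectKey?: string } {
  const out: { queue: string; defectKey?: string } = { queue: sig.platform };
  if (sig.price === 0 && ["MT4", "MT5", "CTRADE"].includes(sig.platform)) {
    out.defectKey = `defect:${sig.algo}_${sig.order}`;
  }
  return out;
}
```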

Services

signal-junction
Entry gateway for MT/Social signals. Routes each one to a platform-specific RabbitMQ queue (MT4, MT5, TG_CONNECTOR, DISCORD). Bad signals (e.g. price=0) are written to a Redis defect:* key for the external signal-synchronizer to repair.
signal-processor
The central brain. Consumes all platform queues (MT4, MT5, ACT, Telegram, Discord, cTrader, Demo), validates each signal, decides the order type (new / close / pending / SL / TP), keeps live state in Redis to avoid duplicates, and calls FTL over HTTP to execute. Publishes signal_orders and trade_logs for persistence. Invalid or duplicate signals are rejected safely with traceable logs.
act-signal-processor
Keeps WebSocket connections open to the dealer and FTL feeds. Reads live messages and forwards them to RabbitMQ on FTL routing keys — ftl-signal (copy signals), ftl-execution, ftl-order, unregister — so Signal Processor and Signal Out react in real time.
signal-out
The outgoing order gateway. Picks the correct destination queue per platform (ACT, MT4/5, CTRADE, SUBCT, TG, DISCORD, DEMO), processes FTL signal / execution feedback, and handles order state (open / close / modify / cancel / SL / TP) in Redis. Also drives the web-orders flow: validates web requests, converts them to the internal format, forwards for execution, and replies via the response queue.
act-bridge
Adapter between the internal system and each broker's ActTrader platform. Consumes act.signal.out (market, limit, stop, trailing, pending, modify, cancel, close), resolves the broker's session token, and calls that broker's ACT HTTP API to actually execute. Notifies the user on failure.
act-bridge-signal-processor
Inbound bridge for other brokers' ACT updates. Listens on the act.order queue, matches the ACT account to a user, reconciles order / trade IDs, handles opens, closes, pendings, and SL/TP changes, and publishes normalized messages back into Signal Processor's pipeline. Keeps Redis in sync and emits persistence events.
order-update
Lightweight WebSocket-to-RabbitMQ bridge for live order and account events (currently FXVIEW feed). Keeps sockets alive, classifies events, and publishes trade / order events to the order topic on signalData, and deposit/withdraw events to DPT_WDL on rabbit-social.
persistence-service
The central data writer. Consumes the persistence queue bound to signal_orders and trade_logs, applies INSERT / UPDATE / UPSERT / DELETE on the database (with conflict-safe updates and order-ID changes), and records execution history for trade logs. Processes updates per-order-ID to keep newer events from being overwritten by older ones.
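The per-order-ID guard can be sketched as a last-writer check — field names (orderId, seq) are assumptions; the real service keys off its own identifiers and conflict-safe SQL rather than an in-memory map:

```typescript
// Illustrative event shape; a monotonically increasing seq stands in for
// whatever ordering marker the real pipeline carries.
interface OrderEvent { orderId: string; seq: number; state: string; }

class OrderStateWriter {
  private latest = new Map<string, OrderEvent>();

  // Applies an event only if it is newer than what we already wrote, so a
  // delayed older event can never clobber a fresher row.
  apply(ev: OrderEvent): boolean {
    const cur = this.latest.get(ev.orderId);
    if (cur && cur.seq >= ev.seq) return false; // stale — drop
    this.latest.set(ev.orderId, ev);
    return true;
  }

  stateOf(orderId: string): string | undefined {
    return this.latest.get(orderId)?.state;
  }
}
```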
↑ Back to overview
Module 06 · The Wires

Bridges

Two bridge layers connect the Zulutrade signal pipeline to external execution venues. ACT Bridge speaks the native ACT broker protocol; node-middleware translates signals for MT4, MT5, cTrader, Discord, and Telegram terminals.

🔗 External integrations
ACT Brokers MT4 MT5 cTrader Discord Telegram
[Diagram — Bridges · outbound to venues. The Signal Pipeline (from Trading Flow) fans out through two bridges: ACT Bridge (act-bridge, outbound, speaks the ACT native protocol; order-update, inbound, receives fills over WebSocket) toward the ACT Brokers, and node-middleware (one bridge, five terminal protocols) toward MT4, MT5, cTrader, Discord, and Telegram.]

Sub-bridges

ACT Bridge
Two-service subsystem. act-bridge is the outbound adapter speaking ACT brokers' native protocol. order-update is the inbound WebSocket consumer — fills, partial fills, rejects — feeding back into the trading-flow pipeline.
node-middleware
Single bridge that fans signals out to the terminal ecosystems — MT4, MT5, cTrader, Discord, and Telegram. Handles each platform's native protocol and exposes a uniform internal interface to the rest of the backend.
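A minimal sketch of the uniform-interface idea, assuming a per-platform adapter registry. All names are hypothetical — this is not node-middleware's actual code, just the shape of "one bridge, five terminal protocols":

```typescript
// One adapter per terminal ecosystem; the protocol details live here.
interface TerminalAdapter {
  send(order: { symbol: string; side: "buy" | "sell" }): string;
}

const adapters: Record<string, TerminalAdapter> = {
  MT4: { send: (o) => `mt4:${o.side}:${o.symbol}` },
  MT5: { send: (o) => `mt5:${o.side}:${o.symbol}` },
  CTRADER: { send: (o) => `ctrader:${o.side}:${o.symbol}` },
  TELEGRAM: { send: (o) => `tg:${o.side}:${o.symbol}` },
  DISCORD: { send: (o) => `discord:${o.side}:${o.symbol}` },
};

// The rest of the backend only ever calls this one function.
function dispatch(platform: string, order: { symbol: string; side: "buy" | "sell" }): string {
  const adapter = adapters[platform];
  if (!adapter) throw new Error(`no adapter for ${platform}`);
  return adapter.send(order);
}
```

Adding a sixth terminal would mean adding one adapter entry, not touching any caller.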

Services

act-bridge
Outbound adapter that speaks ACT brokers' native protocol.
order-update
WebSocket consumer for broker-originated order events (fills, partial fills, rejects).
node-middleware
Bridge to trading terminals — MT4, MT5, cTrader, Discord, Telegram.
🧭 Placement
The Bridges sit between the Trading Flow pipeline and the world outside. Signals flow outward via act-bridge and node-middleware; execution events flow back in via order-update. Every bridge here runs alongside the other Zulutrade services on the Application Server.
↑ Back to overview
Module 07 · Analytics

Analytics

How trading history becomes charts. Trades flow out of ACT-281 via the auth-service exports plugin, land on disk as files, get ingested and transformed by social-analytics into ClickHouse, and finally surface through stats-service to the client and admin UIs.

🔗 External integrations
ACT-281 ClickHouse
[Diagram — Analytics · trade history → charts: ACT-281 (trade source) → auth-service exports plugin (also in Onboarding) → exported trade files on disk → social-analytics reads the files and writes ClickHouse → stats-service queries → client / admin UI charts.]

Services

auth-service · also in Onboarding
Listed here for the exports plugin. The plugin exports trades from ACT-281 and writes them to disk as files; social-analytics picks them up from there. The rest of auth-service's responsibilities (SSO, broker-auth, OTP, sessions) are covered in User Onboarding.
social-analytics
Ingests the exported trade files, transforms them, and writes the result into ClickHouse. Stable interface for everything downstream — stats-service queries here, not ClickHouse directly.
stats-service
Analytics API layer — chart/metrics endpoints (balance, instruments, holding period, top instruments, trade efficiency, overall stats, profit calendar, portfolio/ROI, leader comparison, trade history). Reads analytics through social-analytics; enriches results with calls to connector-hub, users-service, temp-trader.
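The file → ClickHouse hop can be illustrated with two small helpers. The comma-separated line format and the batching threshold are assumptions — the real export format is not documented here; the batching reflects ClickHouse's preference for few large inserts over many small ones.

```typescript
// Assumed row shape for an exported trade; the real export carries more fields.
interface TradeRow { account: string; symbol: string; pnl: number; }

// Parse one line of a hypothetical "account,symbol,pnl" export file.
function parseExportLine(line: string): TradeRow {
  const [account, symbol, pnl] = line.split(",");
  return { account, symbol, pnl: Number(pnl) };
}

// Group rows into insert-sized batches before handing them to the
// ClickHouse writer — one big INSERT per batch, not one per row.
function batch<T>(rows: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) out.push(rows.slice(i, i + size));
  return out;
}
```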
🚧 Next phase — extract exports plugin
The exports plugin is currently bundled inside auth-service. Planned work: lift it into a dedicated service so auth is just auth, and the exports path has its own release cadence and ownership. No action for this doc; noting it so future readers know the module's internal boundary will move.
↑ Back to overview
Module 08 · Rewards

Rewards

The platform's reward engine for copy-trading. Combines three responsibilities in one service: session lifecycle (from copy start/stop events), periodic reward-job execution (BullMQ workers creating reward transactions, handling overdue/recovery, running month-end settlement), and the payout workflow (wallet + withdrawal modules).

[Diagram — Rewards · session → jobs → payouts: RabbitMQ 🐇 startCopy · stopCopy on vhost /zulu → rewards-service (session lifecycle, scheduler) → PostgreSQL (sessions · txns) and Redis + BullMQ (reward jobs) → background workers (create reward transactions, overdue / recovery, month-end settlement) → payout workflow (wallet module, withdrawal, payout-side data management). client-express and admin-express call in for payout stats / lists.]

Responsibilities

  • Session lifecycle — creates a copy session when a startCopy event arrives on RabbitMQ; closes it on stopCopy.
  • Reward calculation — schedules recurring reward jobs at configured intervals after session creation.
  • Job execution — background BullMQ workers (Redis-backed) create reward transactions, handle overdue or recovery scenarios, and run month-end settlement so earnings stay consistent even after downtime.
  • Payout workflow — wallet and withdrawal modules own the payout-side data that gets hit after rewards are earned.
  • Public APIs — schedule/stop rewards manually, fetch reward transactions, fetch reward stats, and CRUD-style endpoints on wallet + withdrawal.
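The scheduling arithmetic behind the bullets above, as a hedged sketch — the fixed-interval model and the UTC month-end cutoff are assumptions about how the BullMQ jobs are configured:

```typescript
// Next recurring reward run: a fixed interval from session creation.
function nextRewardRun(sessionStart: Date, intervalMs: number, now: Date): Date {
  const elapsed = Math.max(0, now.getTime() - sessionStart.getTime());
  const runsDone = Math.floor(elapsed / intervalMs);
  return new Date(sessionStart.getTime() + (runsDone + 1) * intervalMs);
}

// Month-end settlement: last instant (UTC) of the month the date falls in.
function monthEnd(d: Date): Date {
  return new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth() + 1, 0, 23, 59, 59, 999));
}
```

Because the next run is derived from the session start rather than from the previous run, the schedule self-heals after downtime — consistent with the overdue / recovery behaviour described above.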

Service

rewards-service
NestJS service, backed by PostgreSQL (session + transaction records), Redis + BullMQ (reward jobs), and RabbitMQ (copy-lifecycle intake on the Social Trader vhost /zulu). No external vendor touchpoints — Stripe lives in subscriptions, not here.
🔗 Callers
client-express and admin-express call rewards-service for user-facing reward stats and admin read/management actions. auth-service also references it via the REWARDS_HOST environment variable (used for account-support utilities).
🤝 Boundary with subscriptions
Both subscriptions and rewards-service sit in the billing-adjacent space and both use RabbitMQ + PostgreSQL + Redis/BullMQ. Keep the boundary clean: subscriptions owns entitlements & payment (plan catalogue, Stripe, renewals), rewards owns earnings & payout (copy sessions, transactions, wallet/withdrawal). Copy-lifecycle events pass through rewards, not subscriptions.
↑ Back to overview
Module 09 · Badges

Badges

Gamification engine. Tracks user progress and awards badges when key profile or trading milestones happen. Validates incoming events, maps each topic to a badge rule, updates per-user state, marks badges earned, and writes a history entry for audit. Pure internal — no external vendor touchpoints.

[Diagram — Badges · events → rules → history: RabbitMQ 🐇 vhost /zulu, exchange zulu, queue INAPP · badges.* → badges-service (validate, map topic → rule, update progress, record) → PostgreSQL (definitions · state · history) + Redis cache. Upstream publishers (likely — not traced): auth / users and connector-hub (verify · account-link · trade-history milestones). Downstream: notifications-service consumes the badge-assigned event; client-express and admin-express query over HTTP.]

Badge event topics

Known topics per the Confluence catalogue. Per-topic implementation status is not documented here.

B-MAIL-V (email verified) · B-MOB-V (mobile verified) · B-POR-V (proof of residence) · B-POI-V (proof of identity) · B-DEMO-ACC (demo account) · B-LIVE-ACC (live account) · B-TRADE-HIS (trade history) · B-EA (EA)

Responsibilities

  • Event intake + validation — pulls messages from badges.*, validates payload structure.
  • Rule mapping — topic → specific badge rule; per-user progress updated in PostgreSQL.
  • Earned marking — when a rule's condition is met, the badge is marked earned for the user.
  • History entry — every change recorded for audit / display.
  • Public APIs — run badge check directly, fetch all active badges (catalogue), fetch user-specific badges with optional account-level context.
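A sketch of the topic → rule mapping, reusing topic codes from the catalogue above. The counter-threshold rule shape and the thresholds themselves are assumptions — real rules may be richer than a simple event count:

```typescript
// Hypothetical rule table: topic code → badge id + required event count.
const rules: Record<string, { badge: string; required: number }> = {
  "B-MAIL-V": { badge: "email-verified", required: 1 },
  "B-DEMO-ACC": { badge: "demo-account", required: 1 },
  "B-TRADE-HIS": { badge: "trade-history", required: 10 },
};

// Per-user progress, keyed `${userId}:${topic}` (PostgreSQL in the real service).
const progress = new Map<string, number>();

// Returns the badge id when the rule's condition is met, else null.
// Every call would also append a history row for audit in the real service.
function onBadgeEvent(userId: string, topic: string): string | null {
  const rule = rules[topic];
  if (!rule) return null; // unknown topic — rejected at validation
  const key = `${userId}:${topic}`;
  const count = (progress.get(key) ?? 0) + 1;
  progress.set(key, count);
  return count >= rule.required ? rule.badge : null;
}
```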

Service

badges-service
NestJS. Backed by PostgreSQL (badge definitions, user badge state, badge history), Redis (module wiring / cache), and RabbitMQ (intake on the Social Trader /zulu vhost). No external vendor deps.
🔗 Callers
client-express (via BADGES_SERVICE_URL) and admin-express (per the Confluence admin-linked-services table).
🧭 Upstream publishers (tentative)
The Confluence doc names the topics but not their publishers. Likely owners: auth-service / users-service for the verification events (B-MAIL-V, B-MOB-V, B-POR-V, B-POI-V), connector-hub for account events (B-DEMO-ACC, B-LIVE-ACC), and the persistence / trading pipeline for B-TRADE-HIS. Publishers for B-EA unknown. Flagged as assumption — not traced in code.
🤝 Downstream: notifications-service
When badges-service assigns a badge to a user, it publishes a "badge assigned" event. notifications-service consumes that event and delivers the corresponding notification through the user's configured channels (push, email, SMS, in-app). So the event emitted on badge-award is the handoff between gamification and the notification channel layer.
↑ Back to overview
Module 10 · Application Layer

Other Services

A horizontal band of single-purpose services that support the product surface — notifications, billing, gamification, community, and analytics. All of them sit behind client-express and share the zulu3 Postgres database.

🔗 External integrations
Stripe Mastodon DeepL

Services

notifications-service
In-app and push notifications. Fan-out from RabbitMQ events (new follower, order filled, copy stopped, etc.).
subscription-service
Subscription plans & entitlements. Integrates with Stripe for billing and recurring payments.
rewards-service
Promotions, referral credits, and cashback logic.
badges-service
Gamification — awards badges based on trading milestones, tenure, and social behaviour.
community-connector
Bridge to the self-hosted Mastodon instance for social feed, follows, and posts.
stats-service
Per-user and per-leader trading stats (win rate, ROI, drawdown, follower counts).
social-analytics
Cross-platform analytics aggregator. Ingests trade exports into ClickHouse and serves stats-service (see Analytics) and the social bridge layer.

External Integrations

Service | Provider | Purpose | Protocol
subscription-service | Stripe | Billing & recurring payments | HTTPS / Webhooks
communication | Sendgrid | Transactional email | HTTPS
auth-service | Apple · Google | Federated SSO | OAuth 2.0
community-connector | Mastodon (self-hosted) | Social feed & follows | REST · ActivityPub
↑ Back to overview
Module 11 · Admin Surface

Admin

The operator-facing side of the platform — a thin gateway (admin-express) fronts all CRM, support, and operational workflows, backed by a MariaDB-only DAL and a dedicated roles/permissions service. The admin tier reuses most of the client-tier backend rather than duplicating logic.

[Diagram — Admin · gateway + shared backend: admin-express fronts admin-mysql, client-mysql, admin-service, and the reused CRM backend (auth-service, users-service, connector-hub, communication, temp-traders, badges-service, notifications-svc, rewards-service, subscriptions).]

Admin-owned services

admin-express
Admin HTTP gateway. Per the Confluence interlinking graph it calls 12 backend services — admin-mysql, client-mysql, admin-service (roles/permissions/teams), auth-service, badges-service, connector-hub, communication, notifications-service, rewards-service, subscriptions (subscription data), users-service (user list + detail), and temp-traders (trade history).
admin-mysql
MariaDB-backed DAL for the admin surface. Handles auth and verifies users and admins; HTTP facade over the same MariaDB host the client-tier DAL uses.
admin-service
Admin domain logic — roles, permissions, and teams. Called by admin-express whenever an operator action needs an authorization / access-control check.
🔗 Admin reuses the client-tier backend
The admin tier owns only three services. All CRM functionality (users, auth, badges, rewards, subscriptions, notifications, communication, connectors, temp-trader history) is served by the same backend that powers the client surface. Admin only layers roles/permissions on top via admin-service.
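The layering can be sketched as gate-then-delegate: admin-express asks admin-service for an access-control decision, then forwards the call to the reused CRM backend. Roles, actions, and function names here are all illustrative:

```typescript
// admin-service side: pure access-control decision over a role → actions table.
type PermissionTable = Record<string, Set<string>>;

const permissions: PermissionTable = {
  support: new Set(["user.read", "trade-history.read"]),
  superadmin: new Set(["user.read", "user.write", "rewards.manage"]),
};

function isAllowed(role: string, action: string): boolean {
  return permissions[role]?.has(action) ?? false;
}

// admin-express side: gate the operator action, then delegate to the
// shared client-tier backend that actually implements it.
function handleAdminCall(
  role: string,
  action: string,
  backend: (a: string) => string,
): string {
  if (!isAllowed(role, action)) return "403 forbidden";
  return backend(action);
}
```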
↑ Back to overview
Module 12 · Ops & Data Migration

Ops / Data Pipeline

Background / ops-only services that don't sit on any request path. data-pipeline and data-pipeline-setup run data-migration workloads; import-history backfills historical trading data into the analytics store. These are tooling and plumbing, not product features.

Services

data-pipeline
Bash-driven backend data-migration pipeline. Moves/transforms records between stores during cutover or catch-up windows.
data-pipeline-setup
Bash-driven companion to data-pipeline. Prepares environment, schema, and source/target connectivity before the main pipeline runs.
import-history
NestJS analytics-tagged service. Backfills historical trading data so the analytics layer (used by stats-service) has complete coverage. Referenced by stats-service via the analytics-importer trigger path.
🛠 Out-of-band
None of these three sit on the user's request path. They're invoked by operators or scheduled jobs. Treat them as tooling that's part of the platform but not part of the live traffic graph.
↑ Back to overview
Module 13 · Infrastructure

Data & Infrastructure

Storage choices are deliberate. Every datastore has a single job and a clear owner.

Store | Type | Owner | Purpose
zulu3_biz, zulu3_session | MariaDB | client-mysql | Legacy — login & registration parity only.
zulu3_communication | MariaDB | communication | Email templates, delivery logs, bounce tracking.
zulu3 | PostgreSQL | users-service · auth-service · most app services | Canonical relational store for users, subscriptions, leaders, copiers, badges, rewards.
mastodon | PostgreSQL | community-connector | Backing store for the self-hosted Mastodon instance (social feed).
redis-stack | Redis | Trading Flow services | Hot state for the signal pipeline. Also used by bull-board for job queues.
clickhouse | ClickHouse | social-analytics · persistence-service | Column-store for analytical workloads — stats, leaderboards, time-series queries.

Operational Tooling

bull-board
UI on top of BullMQ / Redis queues. Used by ops for inspecting and retrying background jobs.
RabbitMQ 🐇
Event bus. Every inter-service async message goes here. Topic exchanges per domain.
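For reference, topic exchanges route by pattern-matching dot-separated routing keys. The sketch below implements RabbitMQ's standard semantics (* matches exactly one word, # matches zero or more) — it models the broker's matching, and is not code from this platform:

```typescript
// Returns true if a topic-exchange binding pattern matches a routing key.
// "*" matches exactly one dot-separated word; "#" matches zero or more.
function topicMatches(pattern: string, key: string): boolean {
  const p = pattern.split(".");
  const k = key.split(".");
  function match(i: number, j: number): boolean {
    if (i === p.length) return j === k.length;           // pattern consumed
    if (p[i] === "#") {
      // "#" either matches nothing, or swallows one more word and retries.
      return match(i + 1, j) || (j < k.length && match(i, j + 1));
    }
    if (j === k.length) return false;                    // key consumed early
    return (p[i] === "*" || p[i] === k[j]) && match(i + 1, j + 1);
  }
  return match(0, 0);
}
```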

Reality check — 2026-04-20

What the code assumes versus what's actually running today. Worth keeping in mind when reading the rest of the doc.

💾 MariaDB on the Application Server
The MariaDB store is mariadbd running directly on the Application Server — not a container. Databases: zulu3_biz, zulu3_admin, zulu3_session, plus fxview_* variants. It's in a replication topology with separate peer machines. The client-mysql and admin-mysql containers are not databases despite the name — they're Node.js DAL services that expose HTTP and speak MariaDB-protocol outbound to MariaDB.
🐇 RabbitMQ is split across multiple brokers today
The doc treats RabbitMQ as a single event bus, and the target architecture is indeed a single broker with vhost isolation (/ for Zulu, /act for ACT). Reality:
  • The Social Trader (legacy) broker carries most Zulu traffic on vhost /zulu — primary bus today. Consumed by connector-hub, subscriptions, users-service, rewards, badges, notifications, auth, community-connector, signal-synchronizer.
  • The ACT 281 broker carries order / fill events on vhost /act.
  • A third broker referenced in some configs is confirmed unused — candidate for cleanup.
  • The on-host RabbitMQ on the Application Server has zero active connections. Awaiting migration cutover.
⚠ MongoDB is non-functional
mongo:latest is MongoDB 5.0+ which requires AVX, and the Application Server's CPU doesn't have it. The Mongo container has been crash-looping tens of thousands of times. Mongo-backed features in client-express and admin-express (affiliate, alerts, tbl_category collections) are broken today. Fix paths: pin to mongo:4.4, or migrate those collections onto the existing Postgres zulu3 database.
🔒 Credentials surface in docker inspect
Several *_PASSWORD values and RabbitMQ amqp://user:pass@… strings are inlined in container env vars — visible to anyone with Docker socket access on the Application Server. Worth Docker secrets or a mounted .env with restricted permissions. Out of scope for this doc, but worth filing.
↑ Back to overview
Reference · Runtime

Inventory

Every service declared in the project's docker-compose.yml, grouped by role, with runtime status (2026-04-20 snapshot). Hosts are named only by their canonical role.

Onboarding · Login

Service | Role | Status | Notes
client-express | Public HTTP gateway (client) | up | Consumes MariaDB, Redis, MongoDB (broken), DeepL, FXView pricing.
admin-express | Admin HTTP gateway | up | Same stores as client-express.
users-service | Canonical user identity | up | Owns user records.
auth-service | Sessions / JWT, SSO | up | Apple + Google OAuth.
client-mysql | Node DAL (not a DB) | up | HTTP facade over MariaDB.
admin-mysql | Node DAL (not a DB) | up | HTTP facade over MariaDB.
communication | Transactional email | up | Dispatches via Sendgrid.
admin-service | Admin domain logic | up |
flavours-service | Feature flags / white-label config | up |

Account Connection

Service | Role | Status | Notes
connector-hub | Central orchestrator | up | Talks to ActTrader, FTL, FXView; subscribes to multiple external RabbitMQ brokers.
temp-trader | Demo accounts → S281 | up |
leaders-service | Leader profiles | up | Integrates with FTL.
copier-service | Copier subscriptions | up | Integrates with FTL.
subscriptions | Billing & subscription lifecycle | up | Stripe integration.
community-connector | Mastodon bridge | up | Single target — Mastodon instance.

Engagement

Service | Role | Status | Notes
notifications-service | In-app / push fan-out | up | Consumes RabbitMQ events.
rewards-service | Promotions, referral credits | up |
badges-service | Gamification badges | up |
stats-service | Per-user / per-leader stats | up | Uses external analytics source.

Analytics

Service | Role | Status | Notes
auth-service · also in Onboarding | Exports plugin · ACT-281 → files | up | Plugin exports trades from ACT-281 to disk; to be extracted into its own service in a later phase.
social-analytics | File ingest → ClickHouse writer | up | Consumes the exported files and populates the ClickHouse analytics store.
stats-service · also in Engagement | Analytics API for client/admin UIs | up | Reads analytics via social-analytics (not ClickHouse directly).

Rewards

Service | Role | Status | Notes
rewards-service · also in Engagement | Copy-trading reward engine | up | Session lifecycle on startCopy/stopCopy, BullMQ jobs, month-end settlement, wallet + withdrawal.

Badges

Service | Role | Status | Notes
badges-service · also in Engagement | Gamification · milestones → history | up | Consumes badges.* (B-MAIL-V, B-MOB-V, B-POR-V, B-POI-V, B-DEMO-ACC, B-LIVE-ACC, B-TRADE-HIS, B-EA) on vhost /zulu.

Trading Flow

Service | Role | Status | Notes
signal-junction | Inbound signal entry + fan-out | up | Splits faulty from clean signals.
signal-synchronizer | Signal repair / synchronization | up | Repair shop for malformed signals.
signal-processor | Core signal processing | up | Applies copy rules, risk caps, leader-copier fan-out.
act-signal-processor | ACT-specific signal transformer | up | Normalises to ACT broker order format.
act-bridge-signal-processor | ACT return-path handler | up | Processes fills back into the pipeline.
signal-out | Final router to execution venue | up |
persistence-service | Persistence / ClickHouse writer | up | Off-loads hot-path writes.

Bridges

ACT Bridge

Service | Role | Status | Notes
act-bridge | Adapter to ACT brokers | up | Speaks ACT native protocol.
order-update | WebSocket consumer for broker events | up | Fills, partial fills, rejects; feeds back into the trading-flow pipeline.

node-middleware

Service | Role | Status | Notes
node-middleware | Bridge to terminal platforms (MT4, MT5, cTrader, Discord, Telegram) | up | Sits between the signal pipeline and the actual trading terminals. Handles each platform's native protocol.

Admin

Service | Role | Status | Notes
admin-express | Admin HTTP gateway | up | Orchestrates 12 backend services for CRM, support, and operations.
admin-mysql | Node DAL (not a DB) | up | MariaDB-backed DAL. Handles auth and verifies users and admins.
admin-service | Roles / permissions / teams | up | Access-control domain logic.

Ops / Data Pipeline

Service | Role | Status | Notes
data-pipeline | Backend data migration | ops | Bash. Data migration pipeline; runs during cutover / catch-up windows.
data-pipeline-setup | Migration environment setup | ops | Bash. Companion to data-pipeline — prepares schema + connectivity.
import-history | Historical analytics backfill | ops | NestJS. Referenced by stats-service via the analytics-importer trigger.

Infrastructure

Component | Location | Status | Notes
MariaDB | Application Server (host process) | up | Replicates with separate peer machines.
PostgreSQL | Application Server | up | Symbol-mapping DB used by connector-hub.
Redis | Application Server | up | redis-stack image.
RabbitMQ (on-host) | Application Server | idle | 0 active connections — awaiting consolidation.
RabbitMQ (Social Trader legacy) | external (legacy platform) | primary | vhost /zulu — primary bus today.
RabbitMQ (ACT 281 broker) | external (ACT platform) | active | vhost /act — order / fill events.
MongoDB | Application Server | ⚠ crash-loop | AVX-less host — mongo:latest incompatible.
ClickHouse | external analytics platform | up | Analytical sink for stats-service and persistence-service.
↑ Back to overview