ZuluTrade CRM at a glance
A top-down walkthrough of every service that powers the ZuluTrade platform — from user sign-up, to broker connection, to live trade signal routing. Click any module card below to jump to its detailed breakdown.
Each section lists its external integrations.
User Onboarding & Login
How a new user goes from zero to an authenticated session. The flow starts at client-express, writes a lead into legacy MariaDB, fires an OTP email via communication → Sendgrid, then (on OTP submit) creates the canonical user record in both MariaDB and Postgres and broadcasts a userRegister event on RabbitMQ. SSO logins skip the OTP step entirely.
Services
- `client-mysql` — MariaDB DAL over the legacy databases (`zulu3_biz`, `zulu3_session`). Used only for login and legacy registration compatibility.
- `users-service` — canonical user store in the `zulu3` Postgres. All user lookups go here post-onboarding.
- `communication` — stores templates in MariaDB `zulu3_communication` and dispatches via Sendgrid.

Sign-up Sequence — Email + OTP
- Register request — FE submits the registration form to `client-express`.
- Lead created — `client-express` writes a lead record into MariaDB via `client-mysql`.
- OTP email — `client-express` triggers the `communication` service, which dispatches the OTP email through Sendgrid.
- OTP verified — user submits the OTP; `client-express` → `client-mysql` creates the user record in MariaDB. This record is later used by `client-express` to issue JWT tokens and manage login sessions / auth.
- Postgres mirror — `client-express` sends a POST to `users-service`, which creates the matching user record in the Postgres `zulu3` database.
- Event broadcast — `users-service` publishes a `userRegister` event on RabbitMQ so every downstream service can react.
- ACT customer profile — `connector-hub` listens to the `userRegister` event and passes it on to `temp-trader`, which creates an ACT customer profile. On successful profile creation, an `ActCustomer` event is published on RabbitMQ; `users-service` consumes it and stores the details in the Postgres user profile.
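The event chain at the end of the sequence can be sketched as plain message shapes. The event names (`userRegister`, `ActCustomer`) and the broker (RabbitMQ) come from the flow above; every field name below is an illustrative assumption — the actual payload schema isn't documented on this page:

```typescript
// Hypothetical payload shapes for the onboarding event chain.
// Field names are assumptions; only the event names and the flow are
// taken from the sequence above.
interface UserRegisterEvent {
  event: "userRegister";
  userId: string;     // canonical Postgres zulu3 user id
  email: string;
}

interface ActCustomerEvent {
  event: "ActCustomer";
  userId: string;
  actCustomerId: string; // profile id returned by the ACT platform
}

// users-service publishes userRegister; connector-hub reacts by asking
// temp-trader for an ACT profile, which yields an ActCustomer event
// that users-service consumes back into the Postgres user profile.
function buildUserRegister(userId: string, email: string): UserRegisterEvent {
  return { event: "userRegister", userId, email };
}

function buildActCustomer(src: UserRegisterEvent, actCustomerId: string): ActCustomerEvent {
  return { event: "ActCustomer", userId: src.userId, actCustomerId };
}
```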
Sign-up Sequence — SSO (Google / Apple)
- `client-express` forwards the SSO request directly to `auth-service`.
- The provider verifies email ownership. On success, `client-express` creates the user if it does not already exist (skipping the OTP step — the provider has already verified the address) and issues the session without requiring a password.
Flavour / Country
On the registration form, flavour-service provides the country list and tells the FE which flavour ID is linked to the current domain, so the right white-label configuration and feature flags are applied.
MariaDB (`zulu3_biz`, `zulu3_session`) is the legacy store kept only for login & registration parity. All new user state should be written to Postgres `zulu3`. A full cutover is on the roadmap.
Account Connection
Once the user is registered & logged in, they must choose to be a Leader or Copier. FE asks connector-hub for available platforms: a Leader can connect MT4, MT5, ACT, Ctrade, Telegram or Discord; a Copier can connect MT4, MT5, ACT or Ctrade. connector-hub issues a unique ID, mounts the signal source on the relevant bridge (act-bridge / social-bridge), creates a mirror ACT account via temp-trader, and — for a Leader — sets up the Leader profile on FTL via leader-service.
How it works
After login, FE asks connector-hub for the platforms available to this user's role. A Leader can pick from MT4, MT5, ACT, Ctrade, Telegram, or Discord; a Copier from MT4, MT5, ACT, or Ctrade.
Common steps for any platform:
1. connector-hub issues a unique account ID and forwards the connection request to the relevant bridge — social-bridge (node-middleware) for MT4/MT5/Ctrade/TG/Discord, act-bridge for ACT.
2. The bridge mounts the signal source and publishes state transitions on the sessions queue ({connector_connected, synced} progressing from false/false → true/false → true/true).
3. connector-hub consumes those state events. Once {connector_connected: true, synced: true} arrives, it requests temp-trader to create a mirror ACT customer account (for trade mirroring).
4. temp-trader creates the ACT account and publishes an ActCustomer event on RabbitMQ; users-service consumes it and stores the details in the Postgres user profile.
5. If the role is Leader, leader-service creates the Leader profile on FTL. If the role is Copier, copier-service wires the copy relationship.
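The two-flag session state in steps 2–3 only ever moves forward. A minimal sketch of the legal transitions — the flag names come from the `sessions` queue payloads above; the guard logic is an assumption about how connector-hub tracks them:

```typescript
// Session state published on the `sessions` queue by the bridges.
interface SessionState {
  connector_connected: boolean;
  synced: boolean;
}

// Legal progression: {false,false} → {true,false} → {true,true}.
// Returns true if `next` is a valid successor of `prev` (or a repeat).
function isValidTransition(prev: SessionState, next: SessionState): boolean {
  // synced without connected is never a legal state
  if (next.synced && !next.connector_connected) return false;
  if (prev.synced && !prev.connector_connected) return false;
  const rank = (s: SessionState) =>
    (s.connector_connected ? 1 : 0) + (s.synced ? 1 : 0);
  return rank(next) >= rank(prev); // state never regresses
}

// connector-hub requests the mirror ACT account only on the final state.
function readyForMirrorAccount(s: SessionState): boolean {
  return s.connector_connected && s.synced;
}
```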
How it works
1. Submit credentials — User submits MT4/MT5 credentials from FE. connector-hub issues an address and sends a connection request to social-bridge (node-middleware).
2. Initial state — social-bridge publishes an event on the sessions queue with { connector_connected: false, synced: false }. connector-hub consumes it to track state.
3. Pod creation — social-bridge creates an MT4/MT5 terminal pod on the K8s cluster and publishes { connector_connected: true, synced: false }.
4. Profile sync — social-bridge pings the MT container, pulls profile data, and publishes { connector_connected: true, synced: true }.
5. Mirror account — connector-hub requests an ACT account via temp-trader. On success, symbols are pulled from social-bridge and symbol mapping is performed.
6. Role-specific step — If Leader, leader-service sets up the Leader profile on FTL. If Copier, copier-service wires the copy relationship to the chosen leader(s).
How it works
1. Credentials — User submits credentials from FE. connector-hub issues an address and forwards the request to social-bridge (Ctrade adapter).
2. OAuth login — social-bridge redirects the user through the Ctrader OAuth link. User authorizes access at Ctrader.
3. Authorized accounts list — On successful OAuth, social-bridge receives the list of authorized Ctrader accounts and returns it to the user.
4. User selects account — User picks which Ctrader account to mount.
5. Child address + connection request — connector-hub issues a child address for the selected account and sends the account connection request to social-bridge.
6. Sessions events (3 states) — social-bridge publishes three events on the sessions queue: {false,false} → {true,false} → {true,true}.
7. Mirror ACT account — On the final {connector_connected:true, synced:true} event, connector-hub requests a mirror ACT account from temp-trader. Symbols are then pulled and symbol mapping runs.
8. Role-specific step — If Leader, leader-service sets up the Leader profile on FTL. If Copier, copier-service wires the copy relationship. Ctrade is available for both roles.
How it works
1. Credentials — User submits ACT credentials. connector-hub issues an address and sends a connection request to act-bridge.
2. Open ACT broker session — act-bridge opens the ACT broker session directly (no sessions-queue hand-off for ACT). On success, the session is handed back to connector-hub.
3. Mirror ACT account — connector-hub requests a mirror ACT account via temp-trader. Symbols are pulled and symbol mapping runs.
4. Role-specific step — If Leader, leader-service sets up the Leader profile on FTL. If Copier, copier-service wires the copy relationship.
Note: ACT is available for both Leader and Copier roles.
How it works
Leader-only path. Telegram is not available as a Copier source — copying flows still use MT/ACT/Ctrade for execution.
1. Connect handle — User provides their Telegram handle / bot token. connector-hub issues an address and forwards to social-bridge (Telegram adapter).
2. Sessions state — social-bridge publishes { connector_connected: false, synced: false }, attaches a TG bot / channel listener, then publishes { connector_connected: true, synced: true }.
3. Mirror ACT + Leader profile — temp-trader creates the mirror ACT account (so copiers can execute real trades off the Leader's TG signals); leader-service sets up the Leader profile on FTL.
How it works
Leader-only path. Discord (like Telegram) is not available for Copiers — their execution still uses MT/ACT/Ctrade.
1. Connect server — User provides their Discord user + server / channel. connector-hub forwards to social-bridge (Discord adapter).
2. Sessions state — social-bridge publishes { connector_connected: false, synced: false }, attaches to the Discord channel / webhook, then publishes { connector_connected: true, synced: true }.
3. Mirror ACT + Leader profile — temp-trader creates the mirror ACT account; leader-service sets up the Leader profile on FTL.
Services
- `connector-hub` — orchestrates the connection, consumes the `sessions` queue, and triggers the mirror-account + Leader/Copier wiring.
- `social-bridge` / `act-bridge` — mount the signal source and publish state transitions on the `sessions` queue.
- `temp-trader` — creates the mirror ACT account and publishes `ActCustomer` events.
- `users-service` — consumes `ActCustomer` events and stores the details in the Postgres user profile.

Role availability
- Leader → MT4, MT5, ACT, Ctrade, Telegram, Discord.
- Copier → MT4, MT5, ACT, Ctrade only (social sources can't act as copy execution venues).
Copier Flow
How a user becomes a Copier — signup through subscription gating, leader selection, and start-copying activation, then the symmetrical closeout. This module is a narrative that leans on the other modules for detail; it doesn't re-describe services that already have a home elsewhere.
Not in this module: signal ingestion, trade execution, bridges, fills, persistence, analytics. Those live in Modules 05 (Trading Flow), 06 (Bridges), 07 (Analytics). See the Trading Flow module for what happens once a Copier is active.
Phase 1 · Account setup
User signs up or logs in. Detail lives in Module 01 Onboarding & Login — SSO flows (Apple / Google), OTP verification, profile writes by users-service, session issuance by auth-service. Not duplicated here.
Phase 2 · Broker connection
User connects a trading account. Detail lives in Module 02 Account Connection. The orchestrator is connector-hub; demo accounts route through temp-trader to S281. Live accounts go through the MT4 / MT5 / ACT / cTrader connectors.
Phase 3 · Subscription gating
The commercial layer. Copy-trading is gated by a subscription — free or paid.
- Paid path — `subscriptions` exposes plan / cart / payment endpoints. The user checks out through Stripe; a Stripe webhook triggers activation; the subscription is marked active and feature entitlements are written.
- Free path — a background consumer inside `subscriptions` processes free-plan activation events. Passing eligibility → auto-activation; failing → user falls back to a paid plan.
- Credits are allocated per plan; audit history is recorded; state lives in PostgreSQL.
Without an active subscription with the right entitlement, the Start Copying step (below) is blocked.
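That gate reduces to a simple predicate. The status values and the entitlement name below are assumptions — the doc states only that an active subscription carrying the right entitlement is required:

```typescript
// Hypothetical subscription record; field names and the
// "copy-trading" entitlement string are illustrative assumptions.
interface Subscription {
  status: "active" | "pending" | "cancelled" | "expired";
  entitlements: string[]; // feature entitlements written on activation
}

// Start Copying is blocked unless the subscription is active AND
// carries the copy-trading entitlement.
function canStartCopying(sub: Subscription | null): boolean {
  return sub !== null &&
    sub.status === "active" &&
    sub.entitlements.includes("copy-trading");
}
```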
Phase 4 · Leader discovery / selection
The UI lists available Leaders. client-express fronts the requests; Leader records are fetched via leaders-service, which forwards to FTL. leaders-service owns no data — it's a thin HTTP gateway with logging / validation.
Phase 5 · Start copying
The actual activation. connector-hub orchestrates:
- Calls `copier-service` to create the follower record — forwarded to FTL.
- Publishes a `startCopy` event on RabbitMQ (Social Trader broker, vhost `/zulu`, exchange `zulu`).
- `rewards-service` consumes `startCopy`, creates a copy session in PostgreSQL, and schedules recurring reward jobs via BullMQ. Detail in Module 08 Rewards.
- `notifications-service` sends a "you're now copying X" confirmation via the user's preferred channel.
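A sketch of the `startCopy` message connector-hub publishes. The exchange and vhost come from the activation steps above; the payload fields are assumptions:

```typescript
// Publish target per the flow above: Social Trader broker,
// vhost /zulu, exchange `zulu`. Payload field names are hypothetical.
interface StartCopyEvent {
  event: "startCopy";
  copierAccountId: string;
  leaderId: string;
  startedAt: string; // ISO timestamp
}

const ROUTING = { vhost: "/zulu", exchange: "zulu" } as const;

function buildStartCopy(copierAccountId: string, leaderId: string, now: Date): StartCopyEvent {
  return {
    event: "startCopy",
    copierAccountId,
    leaderId,
    startedAt: now.toISOString(),
  };
}
```

rewards-service keys its copy session off this event, so the copier account and leader identifiers must be enough to bootstrap the session record.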
From this point on, the Copier is active. Signals from the Leader's trades flow through Module 05 Trading Flow and Module 06 Bridges — not this module.
Phase 6 · Stop copying (closeout)
- User stops from UI → `connector-hub` → `copier-service` removes the follower record on FTL.
- `connector-hub` publishes `stopCopy` on RabbitMQ.
- `rewards-service` closes the copy session. Background workers continue to process pending reward transactions, handle overdue / recovery, and run month-end settlement so earned rewards aren't lost even if the service was briefly down.
- The subscription itself continues unless the user cancels / pauses separately via `subscriptions` APIs.
- If the subscription ends, copying also stops — `subscriptions` publishes the subscription-end event; `connector-hub` reacts by triggering `stopCopy` for the affected follower records.
Services touched in this lifecycle
| Service | Role in Copier Flow | Primary module |
|---|---|---|
connector-hub | Orchestrator (calls copier-service, publishes startCopy/stopCopy) | 02 |
copier-service | Follower CRUD → FTL | 02 |
subscriptions | Plan / cart / activation / renewal (Stripe) | inline here · breadcrumb in 10 |
rewards-service | Copy-session bootstrap on startCopy; session close on stopCopy | 08 |
notifications-service | User confirmations | 10 |
Leader Flow
How a user becomes a Leader — signup through broker connection, Leader application, admin approval, registration on FTL, and strategy setup. Unlike Copier Flow, Leader registration has a human in the loop: an admin reviews and approves each new Leader.
Not in this module: the trade signal publishing once a Leader is active, bridges, analytics. Leader trades flow through Module 05 Trading Flow.
Approval is handled through admin-express + admin-service. No subscription is required — Leader participation is free-to-register and earning-based.
Phase 1 · Account setup
User signs up or logs in. Detail in Module 01 Onboarding & Login — SSO, OTP, profile writes. Not duplicated here.
Phase 2 · Broker connection
User connects the broker whose trades they want to publish. Detail in Module 02 Account Connection. Orchestrated by connector-hub; ACT-side connections go through temp-trader. Demo accounts aren't relevant here — Leaders need a live broker.
Phase 3 · Leader application
The user submits a Leader application from the client UI via connector-hub. The application creates a pending Leader record waiting for admin review.
Phase 4 · Admin approval
An admin reviews the application via admin-express. admin-service handles roles / permissions / approval audit on the admin side. The admin either approves or rejects:
- Rejected — the flow ends. The user is notified; no Leader record is created on FTL.
- Approved — the approval action triggers Phase 5. The audit record is retained in `admin-service`.
Phase 5 · Leader registration + strategy setup
- On approval, `connector-hub` calls `leaders-service` to create the Leader record. `leaders-service` owns no state — it forwards to FTL, which holds the actual Leader record.
- `connector-hub` handles leader / watchlist / strategy management (symbols, marketing description, copy settings — exact scope depends on how "strategy" is configured in the platform).
- `notifications-service` sends a "you're now a Leader" confirmation.
Registration is synchronous — no RabbitMQ lifecycle events on the Leader side (unlike startCopy / stopCopy for Copiers).
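A minimal sketch of that synchronous chain. The function and parameter names are hypothetical; only the call order — create the Leader on FTL via leaders-service, then notify — comes from the phases above:

```typescript
// Sketch of the Phase 5 orchestration inside connector-hub.
// createLeaderOnFtl / notify stand in for HTTP calls to
// leaders-service and notifications-service; both names are invented.
type LeaderRecord = { leaderId: string; userId: string };

async function onLeaderApproved(
  userId: string,
  createLeaderOnFtl: (userId: string) => Promise<LeaderRecord>,
  notify: (userId: string, msg: string) => Promise<void>,
): Promise<LeaderRecord> {
  // Synchronous (awaited) chain — no RabbitMQ lifecycle event is
  // published here, unlike startCopy/stopCopy on the Copier side.
  const leader = await createLeaderOnFtl(userId);
  await notify(userId, "you're now a Leader");
  return leader;
}
```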
Phase 6 · Active leader and closeout
Once registered, the Leader's trades become the source of signals. That flow lives in Module 05 Trading Flow. Earnings from copier activity accrue through Module 08 Rewards.
Closeout is two-path:
- Self-serve — user deregisters from the client UI → `connector-hub` → `leaders-service` (remove) → FTL.
- Admin-revoked — admin uses `admin-express` to revoke the Leader → same downstream path.
Services touched in this lifecycle
| Service | Role in Leader Flow | Primary module |
|---|---|---|
client-express | Application submission from user UI | 01 |
auth-service | Session + SSO | 01 |
users-service | Profile | 01 |
connector-hub | Orchestrator (calls leaders-service, owns strategy management) | 02 |
leaders-service | Leader CRUD → FTL | 02 |
temp-trader | ACT-side broker integration | 02 |
admin-express | Admin review + approval gateway | 11 |
admin-service | Roles / permissions / approval audit | 11 |
rewards-service | Accrues Leader earnings from copier activity | 08 |
notifications-service | Approval / rejection / removal notifications | 10 |
Trading Flow
The real-time pipeline that carries trade signals from every entry point — MT4/MT5 terminal, Telegram/Discord, Zulu Web (Demo and Act accounts), and ACT broker feeds — through the common Signal Processor and Signal Out services, out to the execution venues (FTL, ACT brokers, MT terminals). This is the performance-critical heart of the platform.
Every entry point converges on two shared services: Signal Processor and Signal Out. MT & Social come in through Node Middleware → Signal junction → Signal Processor. Web orders enter through Act web trader → Signal Out. Broker-originated events (ACT / Fxview fills) enter via Order Update service over WebSocket and get bridged into RabbitMQ. Signal Processor calls FTL (s281) over HTTP for leader/copier execution; FTL pushes the resulting signals back over WebSocket to act-signal-processor, which publishes ftl-signal events consumed by Signal Out. Nothing on the hot path touches Postgres directly — state lives in Redis, every step is published to RabbitMQ, and the Persistence service consumes signal_orders / trade_logs and writes asynchronously to PGSQL. If Redis is lost the cache is rebuilt from the database.
How it works
Market order from Meta Terminal: Signal junction receives it and forwards it to the platform-specific queue. Signal Processor executes the trade on the FTL account. Order updates come back from Act s281 over WebSocket into Signal Out, which acts on them. If the order is from a leader, act-signal-processor receives copier signals from FTL and pushes them to the `signal.out` RabbitMQ queue.
SL/TP add or update from Meta: Signal junction forwards to Signal Processor. We only update Redis + DB — SL/TP is not executed on the FTL account.
Close / partial close from Meta Terminal: Signal junction → Signal Processor → executed on the FTL account.
Pending orders (with or without SL/TP): stored and updated only in Redis + DB. Not executed on FTL until Meta fires the pending order — then Signal Processor runs the trade.
Orders from Zulu Web: come via WebSocket to the Act Trader system → pushed to RabbitMQ → consumed by Signal Out. Rest of the flow is the same as Meta orders.
Note: if a leader adds SL/TP, it is not applied to copier accounts.
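The Meta-originated cases above reduce to one decision: which events execute on FTL immediately, and which only update Redis + DB. A sketch of that decision table (the event names are shorthand for this doc, not actual queue message types):

```typescript
type MetaEvent =
  | "market_order"
  | "sltp_update"
  | "close"
  | "partial_close"
  | "pending_create"
  | "pending_fired";

// Per the flow above: market orders, closes, and a pending order that
// Meta has just fired execute on the FTL account; SL/TP changes and
// not-yet-fired pending orders are stored in Redis + DB only.
const EXECUTES_ON_FTL: Record<MetaEvent, boolean> = {
  market_order: true,
  close: true,
  partial_close: true,
  pending_fired: true,   // Meta fired a stored pending order
  sltp_update: false,    // Redis + DB only — never executed on FTL
  pending_create: false, // stored until Meta fires it
};

function executesOnFtl(e: MetaEvent): boolean {
  return EXECUTES_ON_FTL[e];
}
```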
How it works
Demo accounts are not connected to any external trading platform. The demo account lives on the FTL server (s281); all actions happen on the FTL account. Orders are placed only from Zulu Web.
Market / pending order from Zulu Web: via WebSocket to Act web trader → Signal Out. Signal Out publishes to RabbitMQ based on platform + leader/copier logic. For demo, Signal Processor consumes and executes on the FTL account.
Once the FTL account executes, order updates and copier signals (if any) come back. From here the flow is identical to the Meta flow.
SL/TP add / update, close, partial close from Zulu Web: executed on the FTL account and saved to Redis + DB.
Note: leader SL/TP is not applied to copiers.
How it works
The Act flow is similar to Meta, with one key difference: Meta uses Node Middleware, Act uses act-bridge. The rest is mostly the same with a few service swaps.
Order placed from Act Terminal: updates first hit Order Update service → pushed to RabbitMQ → consumed by Act-bridge-signal-processor → creates an event for Signal Processor → Signal Processor executes on the FTL account. Order updates come back from Act s281 via WebSocket to Signal Out. If the order is from a leader, copier signals are generated and pushed to signal.out.
SL/TP add / update from Act: Order Update → RabbitMQ → act-bridge processes logic → saves to Redis + DB. Not executed on the FTL account.
Close / partial close from Act Terminal: Order Update → RabbitMQ → act-bridge processes → Signal Processor → executed on the FTL account.
Pending orders (with SL/TP): stored / updated in Redis + DB via act-bridge-signal-processor only. When Act fires the pending order we get a signal and then execute on FTL.
Orders from Zulu Web: via WebSocket to the Act trader system → pushed to RabbitMQ → consumed by Signal Out → pushed again to RabbitMQ → consumed by act-bridge → executed on the connected broker platform (currently Fxview s245). Updates return through Order Update and continue via act-bridge-signal-processor, which processes the logic and saves to the database.
Pipeline Stages
- Ingestion — MT4/MT5 and TG/Discord orders enter via `Node Middleware`. Web orders (Demo & Act accounts) enter via the `Act web trader` system. ACT platform and Fxview s245 broker events enter via the `Order Update` service over WebSocket.
- Classification — `Signal junction` routes each MT/Social signal to a platform-specific queue (`MT4`, `MT5`, `TG_CONNECTOR`, `DISCORD`). Defect detection (price=0 for MT4/MT5/CTrade) writes a `defect:<algo>_<order>` key to Redis for the external `signal-synchronizer` to repair.
- Processing — `Signal Processor` (ts-zulu3.0-signal-processor) consumes the platform queues and the shared `ACT` queue. It calls FTL s281 via HTTP (leader/copier/order URLs) to execute leader-copier fan-out and publishes `signal_orders` + `trade_logs` to the persistence pipeline.
- Return / Routing — FTL pushes resulting signals over WebSocket to `act-signal-processor`, which publishes `ftl-signal` on the `ftl` queue. `Signal Out` consumes it and dispatches via platform publishers: MT copiers → routing key `out` on the `signal.out` queue (consumed by Node Middleware → terminal); Demo/Act/TG/Discord copiers → routing key `ACT` on the `ACT` queue (looped back into Signal Processor); ACT broker dispatch → `act signal out` queue → `Act Bridge Service`.
- Web-order ack path — for orders placed via Zulu Web, `Signal Out` also publishes an immediate acknowledgement on routing key `OU<AccountID>` (Order Update ack) and an RPC reply to the caller's reply queue, then ACKs the web-orders message last.
- ACT execution feedback — ACT broker dealer sockets push `trade`/`order` events to the `Order Update` service over WebSocket. Order Update republishes on the `signalData` exchange (routing key `order`). `Act-bridge-signal-processor` consumes the `act.order` queue, normalises, and publishes back into Signal Processor via the `ACT` queue to keep state coherent.
- Persistence — the `Persistence` service binds the `persistence` queue to the `signal_orders` and `trade_logs` topics published by Signal Processor and Signal Out. It writes to PostgreSQL (`signal_processor.signals`, `signal_processor.trade_logs`) asynchronously, keeping the hot path DB-free. Redis holds the live state; if it is lost the cache is rebuilt from Postgres.
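The per-order-ID rule in the Persistence stage can be sketched as a sequence guard on write. The `seq` field is an assumption (it could equally be an event timestamp) — the doc states only that newer events must not be overwritten by older ones:

```typescript
// Hypothetical persisted row keyed by order ID with a monotonically
// increasing sequence standing in for event recency.
interface OrderRow { orderId: string; seq: number; payload: string }

class OrderStore {
  private rows = new Map<string, OrderRow>();

  // Apply an update only if it is newer than the stored row, so a
  // late-arriving stale event cannot clobber newer state.
  apply(row: OrderRow): boolean {
    const current = this.rows.get(row.orderId);
    if (current && current.seq >= row.seq) return false; // stale — drop
    this.rows.set(row.orderId, row);
    return true;
  }

  get(orderId: string): OrderRow | undefined {
    return this.rows.get(orderId);
  }
}
```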
Services
- `Signal junction` — classifies incoming MT/Social signals, routes them to platform queues, and on defect detection writes a `defect:*` key for the external signal-synchronizer to repair.
- `Signal Processor` — executes leader/copier fan-out via FTL and publishes `signal_orders` and `trade_logs` for persistence. Invalid or duplicate signals are rejected safely with traceable logs.
- `act-signal-processor` — receives FTL's WebSocket pushes and publishes `ftl-signal` (copy signals), `ftl-execution`, `ftl-order`, `unregister` — so Signal Processor and Signal Out react in real time.
- `Act Bridge Service` — consumes `act.signal.out` (market, limit, stop, trailing, pending, modify, cancel, close), resolves the broker's session token, and calls that broker's ACT HTTP API to actually execute. Notifies the user on failure.
- `Act-bridge-signal-processor` — consumes the `act.order` queue, matches the ACT account to a user, reconciles order / trade IDs, handles opens, closes, pendings, and SL/TP changes, and publishes normalized messages back into Signal Processor's pipeline. Keeps Redis in sync and emits persistence events.
- `Order Update service` — republishes broker events on the `order` topic on `signalData`, and deposit/withdraw events to `DPT_WDL` on rabbit-social.
- `Persistence service` — consumes the `persistence` queue bound to `signal_orders` and `trade_logs`, applies INSERT / UPDATE / UPSERT / DELETE on the database (with conflict-safe updates and order-ID changes), and records execution history for trade logs. Processes updates per-order-ID to keep newer events from being overwritten by older ones.

Bridges
Two bridge layers connect the Zulutrade signal pipeline to external execution venues. ACT Bridge speaks the native ACT broker protocol; node-middleware translates signals for MT4, MT5, cTrader, Discord, and Telegram terminals.
Sub-bridges
act-bridge is the outbound adapter speaking ACT brokers' native protocol. order-update is the inbound WebSocket consumer — fills, partial fills, rejects — feeding back into the trading-flow pipeline.

Services

Signals flow out via act-bridge and node-middleware; execution events flow back in via order-update. Every bridge here runs alongside the other Zulutrade services on the Application Server.
Analytics
How trading history becomes charts. Trades flow out of ACT-281 via the auth-service exports plugin, land on disk as files, get ingested and transformed by social-analytics into ClickHouse, and finally surface through stats-service to the client and admin UIs.
Services
- `auth-service` (exports plugin) — exports trades from ACT-281 to disk as files; `social-analytics` picks them up from there. The rest of auth-service's responsibilities (SSO, broker-auth, OTP, sessions) are covered in User Onboarding.
- `social-analytics` — ingests and transforms the exported files into ClickHouse; `stats-service` queries here, not ClickHouse directly.
- `stats-service` — serves analytics to the client and admin UIs by reading via `social-analytics`; enriches results with calls to `connector-hub`, `users-service`, `temp-trader`.
- Internal boundary — the exports plugin currently lives inside `auth-service`. Planned work: lift it into a dedicated service so auth is just auth, and the exports path has its own release cadence and ownership. No action for this doc; noting it so future readers know the module's internal boundary will move.
Rewards
The platform's reward engine for copy-trading. Combines three responsibilities in one service: session lifecycle (from copy start/stop events), periodic reward-job execution (BullMQ workers creating reward transactions, handling overdue/recovery, running month-end settlement), and the payout workflow (wallet + withdrawal modules).
Responsibilities
- Session lifecycle — creates a copy session when a
startCopyevent arrives on RabbitMQ; closes it onstopCopy. - Reward calculation — schedules recurring reward jobs at configured intervals after session creation.
- Job execution — background BullMQ workers (Redis-backed) create reward transactions, handle overdue or recovery scenarios, and run month-end settlement so earnings stay consistent even after downtime.
- Payout workflow — wallet and withdrawal modules own the payout-side data that gets hit after rewards are earned.
- Public APIs — schedule/stop rewards manually, fetch reward transactions, fetch reward stats, and CRUD-style endpoints on wallet + withdrawal.
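The scheduling responsibility reduces to date arithmetic — recurring runs at a fixed interval after session creation, plus a month-end settlement run. The day-based interval unit below is an assumption; the doc says only "configured intervals":

```typescript
// Next n recurring reward-run times for a copy session, assuming a
// fixed interval in days (hypothetical — the real unit is configurable).
function rewardRunTimes(sessionStart: Date, intervalDays: number, n: number): Date[] {
  const out: Date[] = [];
  for (let i = 1; i <= n; i++) {
    out.push(new Date(sessionStart.getTime() + i * intervalDays * 86_400_000));
  }
  return out;
}

// Month-end settlement: last day of the month containing `d` (UTC).
// Day 0 of the next month resolves to the last day of this month.
function monthEndSettlement(d: Date): Date {
  return new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth() + 1, 0, 23, 59, 59));
}
```

In the real service these computed times would back the BullMQ job schedule; here they're shown as pure functions so the calendar logic is testable in isolation.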
Service
`rewards-service` keeps state in PostgreSQL, runs BullMQ jobs on Redis, and consumes RabbitMQ events (vhost `/zulu`). No external vendor touchpoints — Stripe lives in `subscriptions`, not here. `client-express` and `admin-express` call `rewards-service` for user-facing reward stats and admin read/management actions. `auth-service` also references it via the `REWARDS_HOST` environment variable (used for account-support utilities).
Badges
Gamification engine. Tracks user progress and awards badges when key profile or trading milestones happen. Validates incoming events, maps each topic to a badge rule, updates per-user state, marks badges earned, and writes a history entry for audit. Pure internal — no external vendor touchpoints.
Badge event topics
Known topics per the Confluence catalogue. Implementation status per topic not documented here — ignore.
Responsibilities
- Event intake + validation — pulls messages from
badges.*, validates payload structure. - Rule mapping — topic → specific badge rule; per-user progress updated in PostgreSQL.
- Earned marking — when a rule's condition is met, the badge is marked earned for the user.
- History entry — every change recorded for audit / display.
- Public APIs — run badge check directly, fetch all active badges (catalogue), fetch user-specific badges with optional account-level context.
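The rule-mapping and earned-marking steps can be sketched as a counter per (user, topic) with a threshold per rule. The topic-to-rule table and the thresholds below are invented for illustration; only the mechanism (validate → map → update progress → mark earned → record history) comes from the responsibilities above:

```typescript
// Minimal sketch of the badge engine. Rule targets are invented.
interface BadgeRule { badgeId: string; target: number }

const RULES: Record<string, BadgeRule> = {
  "B-MAIL-V": { badgeId: "email-verified", target: 1 },
  "B-TRADE-HIS": { badgeId: "ten-trades", target: 10 }, // hypothetical threshold
};

class BadgeEngine {
  private progress = new Map<string, number>(); // key: `${userId}:${topic}`
  private earned = new Set<string>();           // key: `${userId}:${badgeId}`
  private history: string[] = [];               // audit trail

  // Returns false for an unknown topic (rejected safely), true otherwise.
  handle(userId: string, topic: string): boolean {
    const rule = RULES[topic];
    if (!rule) return false;
    const key = `${userId}:${topic}`;
    const count = (this.progress.get(key) ?? 0) + 1;
    this.progress.set(key, count);
    this.history.push(`${key}=${count}`); // every change recorded for audit
    if (count >= rule.target) this.earned.add(`${userId}:${rule.badgeId}`);
    return true;
  }

  hasBadge(userId: string, badgeId: string): boolean {
    return this.earned.has(`${userId}:${badgeId}`);
  }
}
```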
Service
`badges-service` consumes badge events on RabbitMQ (`/zulu` vhost). No external vendor deps. Called by `client-express` (via `BADGES_SERVICE_URL`) and `admin-express` (per the Confluence admin-linked-services table).
Likely publishers: `auth-service` / `users-service` for the verification events (B-MAIL-V, B-MOB-V, B-POR-V, B-POI-V), `connector-hub` for account events (B-DEMO-ACC, B-LIVE-ACC), and the persistence / trading pipeline for B-TRADE-HIS. Publishers for B-EA unknown. Flagged as assumption — not traced in code.
When a badge is awarded, an event is published; `notifications-service` consumes that event and delivers the corresponding notification through the user's configured channels (push, email, SMS, in-app). So the event emitted on badge-award is the handoff between gamification and the notification channel layer.
Other Services
A horizontal band of single-purpose services that support the product surface — notifications, billing, gamification, community, and analytics. All of them sit behind client-express and share the zulu3 Postgres database.
Services
stats-service and the social bridge layer.

External Integrations
| Service | Provider | Purpose | Protocol |
|---|---|---|---|
| subscription-service | Stripe | Billing & recurring payments | HTTPS / Webhooks |
| communication | Sendgrid | Transactional email | HTTPS |
| auth-service | Apple · Google | Federated SSO | OAuth 2.0 |
| community-connector | Mastodon (self-hosted) | Social feed & follows | REST · ActivityPub |
Admin
The operator-facing side of the platform — a thin gateway (admin-express) fronts all CRM, support, and operational workflows, backed by a MariaDB-only DAL and a dedicated roles/permissions service. The admin tier reuses most of the client-tier backend rather than duplicating logic.
Admin-owned services
- `admin-express` — calls `admin-mysql`, `client-mysql`, `admin-service` (roles/permissions/teams), `auth-service`, `badges-service`, `connector-hub`, `communication`, `notifications-service`, `rewards-service`, `subscriptions` (subscription data), `users-service` (user list + detail), and `temp-traders` (trade history).
- `admin-service` — consulted by `admin-express` whenever an operator action needs an authorization / access-control check; approval audit records are retained in `admin-service`.
Ops / Data Pipeline
Background / ops-only services that don't sit on any request path. data-pipeline and data-pipeline-setup run data-migration workloads; import-history backfills historical trading data into the analytics store. These are tooling and plumbing, not product features.
Services
- `data-pipeline-setup` — bootstrap companion to `data-pipeline`. Prepares environment, schema, and source/target connectivity before the main pipeline runs.
- `import-history` — backfills historical trading data so the analytics store (queried by `stats-service`) has complete coverage. Referenced by stats-service via the analytics-importer trigger path.

Data & Infrastructure
Storage choices are deliberate. Every datastore has a single job and a clear owner.
| Store | Type | Owner | Purpose |
|---|---|---|---|
| `zulu3_biz`, `zulu3_session` | MariaDB | client-mysql | Legacy — login & registration parity only. |
| `zulu3_communication` | MariaDB | communication | Email templates, delivery logs, bounce tracking. |
| `zulu3` | PostgreSQL | users-service · auth-service · most app services | Canonical relational store for users, subscriptions, leaders, copiers, badges, rewards. |
| `mastodon` | PostgreSQL | community-connector | Backing store for the self-hosted Mastodon instance (social feed). |
| `redis-stack` | Redis | Trading Flow services | Hot state for the signal pipeline. Also used by bull-board for job queues. |
| `clickhouse` | ClickHouse | social-analytics · persistence-controller | Column-store for analytical workloads — stats, leaderboards, time-series queries. |
Operational Tooling
Reality check — 2026-04-20
What the code assumes versus what's actually running today. Worth keeping in mind when reading the rest of the doc.
MariaDB — `mariadbd` runs directly on the Application Server, not in a container. Databases: `zulu3_biz`, `zulu3_admin`, `zulu3_session`, plus `fxview_*` variants. It's in a replication topology with separate peer machines. The `client-mysql` and `admin-mysql` containers are not databases despite the name — they're Node.js DAL services that expose HTTP and speak the MariaDB protocol outbound to MariaDB.
RabbitMQ — configs assume two brokers (vhost `/` for Zulu, `/act` for ACT). Reality:
- The Social Trader (legacy) broker carries most Zulu traffic on vhost
/zulu— primary bus today. Consumed by connector-hub, subscriptions, users-service, rewards, badges, notifications, auth, community-connector, signal-synchronizer. - The ACT 281 broker carries order / fill events on vhost
/act. - A third broker referenced in some configs is confirmed unused — candidate for cleanup.
- The on-host RabbitMQ on the Application Server has zero active connections. Awaiting migration cutover.
MongoDB — `mongo:latest` is MongoDB 5.0+, which requires AVX, and the Application Server's CPU doesn't have it. The Mongo container has been crash-looping tens of thousands of times. Mongo-backed features in client-express and admin-express (affiliate, alerts, tbl_category collections) are broken today. Fix paths: pin to `mongo:4.4`, or migrate those collections onto the existing Postgres `zulu3` database.
Secrets — `docker inspect` reveals `*_PASSWORD` values and RabbitMQ `amqp://user:pass@…` strings inlined in container env vars — visible to anyone with Docker socket access on the Application Server. Worth Docker secrets or a mounted `.env` with restricted permissions. Out of scope for this doc, but worth filing.
Inventory
Every service declared in the project's docker-compose.yml, grouped by role, with runtime status (2026-04-20 snapshot). Hosts are named only by their canonical role.
Onboarding · Login
| Service | Role | Status | Notes |
|---|---|---|---|
| client-express | Public HTTP gateway (client) | up | Consumes MariaDB, Redis, MongoDB (broken), DeepL, FXView pricing. |
| admin-express | Admin HTTP gateway | up | Same stores as client-express. |
| users-service | Canonical user identity | up | Owns user records. |
| auth-service | Sessions / JWT, SSO | up | Apple + Google OAuth. |
| client-mysql | Node DAL (not a DB) | up | HTTP facade over MariaDB. |
| admin-mysql | Node DAL (not a DB) | up | HTTP facade over MariaDB. |
| communication | Transactional email | up | Dispatches via Sendgrid. |
| admin-service | Admin domain logic | up | |
| flavours-service | Feature flags / white-label config | up | |
Account Connection
| Service | Role | Status | Notes |
|---|---|---|---|
| connector-hub | Central orchestrator | up | Talks to ActTrader, FTL, FXView; subscribes to multiple external RabbitMQ brokers. |
| temp-trader | Demo accounts → S281 | up | |
| leaders-service | Leader profiles | up | Integrates with FTL. |
| copier-service | Copier subscriptions | up | Integrates with FTL. |
| subscriptions | Billing & subscription lifecycle | up | Stripe integration. |
| community-connector | Mastodon bridge | up | Single target — Mastodon instance. |
Engagement
| Service | Role | Status | Notes |
|---|---|---|---|
| notifications-service | In-app / push fan-out | up | Consumes RabbitMQ events. |
| rewards-service | Promotions, referral credits | up | |
| badges-service | Gamification badges | up | |
| stats-service | Per-user / per-leader stats | up | Uses external analytics source. |
Analytics
| Service | Role | Status | Notes |
|---|---|---|---|
| auth-service · also in Onboarding | Exports plugin · ACT-281 → files | up | Plugin exports trades from ACT-281 to disk; to be extracted into its own service in a later phase. |
| social-analytics | File ingest → ClickHouse writer | up | Consumes the exported files and populates the ClickHouse analytics store. |
| stats-service · also in Engagement | Analytics API for client/admin UIs | up | Reads analytics via social-analytics (not ClickHouse directly). |
Rewards
| Service | Role | Status | Notes |
|---|---|---|---|
| rewards-service · also in Engagement | Copy-trading reward engine | up | Session lifecycle on startCopy/stopCopy, BullMQ jobs, month-end settlement, wallet + withdrawal. |
Badges
| Service | Role | Status | Notes |
|---|---|---|---|
| badges-service · also in Engagement | Gamification · milestones → history | up | Consumes badges.* (B-MAIL-V, B-MOB-V, B-POR-V, B-POI-V, B-DEMO-ACC, B-LIVE-ACC, B-TRADE-HIS, B-EA) on vhost /zulu. |
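The badges.* binding follows standard AMQP topic semantics: `*` matches exactly one dot-separated word, `#` matches zero or more. A small reference implementation of that matching rule (the AMQP convention itself, not ZuluTrade code):

```typescript
// AMQP topic matching: '*' = exactly one word, '#' = zero or more words.
function topicMatches(pattern: string, routingKey: string): boolean {
  const p = pattern.split(".");
  const k = routingKey.split(".");
  const match = (i: number, j: number): boolean => {
    if (i === p.length) return j === k.length; // pattern exhausted: key must be too
    if (p[i] === "#") {
      // '#' may absorb any number of remaining words, including none
      for (let skip = j; skip <= k.length; skip++) {
        if (match(i + 1, skip)) return true;
      }
      return false;
    }
    if (j === k.length) return false; // key exhausted but pattern isn't
    if (p[i] === "*" || p[i] === k[j]) return match(i + 1, j + 1);
    return false;
  };
  return match(0, 0);
}
```

So badges.* catches badges.B-MAIL-V but would not catch a hypothetical two-level key like badges.x.y — a binding of badges.# would be needed for that.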
Trading Flow
| Service | Role | Status | Notes |
|---|---|---|---|
| signal-junction | Inbound signal entry + fan-out | up | Splits faulty from clean signals. |
| signal-synchronizer | Signal repair / synchronization | up | Repair shop for malformed signals. |
| signal-processor | Core signal processing | up | Applies copy rules, risk caps, leader-copier fan-out. |
| act-signal-processor | ACT-specific signal transformer | up | Normalises to ACT broker order format. |
| act-bridge-signal-processor | ACT return-path handler | up | Processes fills back into the pipeline. |
| signal-out | Final router to execution venue | up | |
| persistence-service | Persistence / ClickHouse writer | up | Off-loads hot-path writes. |
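The clean/faulty split that signal-junction performs can be illustrated with a toy validator — the field names and validation rules here are hypothetical, since the real signal schema isn't documented above:

```typescript
// Toy trade-signal shape; real signals carry many more fields.
interface Signal {
  id: string;
  symbol: string;
  side: "buy" | "sell";
  lots: number;
}

// Clean signals continue down the pipeline; faulty ones would be
// handed to signal-synchronizer for repair.
function splitSignals(batch: Partial<Signal>[]): { clean: Signal[]; faulty: Partial<Signal>[] } {
  const clean: Signal[] = [];
  const faulty: Partial<Signal>[] = [];
  for (const s of batch) {
    const ok =
      typeof s.id === "string" && s.id.length > 0 &&
      typeof s.symbol === "string" && s.symbol.length > 0 &&
      (s.side === "buy" || s.side === "sell") &&
      typeof s.lots === "number" && s.lots > 0;
    if (ok) clean.push(s as Signal);
    else faulty.push(s);
  }
  return { clean, faulty };
}
```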
Bridges
ACT Bridge
| Service | Role | Status | Notes |
|---|---|---|---|
| act-bridge | Adapter to ACT brokers | up | Speaks ACT native protocol. |
| order-update | WebSocket consumer for broker events | up | Fills, partial fills, rejects; feeds back into the trading-flow pipeline. |
node-middleware
| Service | Role | Status | Notes |
|---|---|---|---|
| node-middleware | Bridge to terminal platforms (MT4, MT5, cTrader, Discord, Telegram) | up | Sits between the signal pipeline and the actual trading terminals. Handles each platform's native protocol. |
Admin
| Service | Role | Status | Notes |
|---|---|---|---|
| admin-express | Admin HTTP gateway | up | Orchestrates 12 backend services for CRM, support, and operations. |
| admin-mysql | Node DAL (not a DB) | up | MariaDB-backed DAL. Handles auth and verifies users and admins. |
| admin-service | Roles / permissions / teams | up | Access-control domain logic. |
Ops / Data Pipeline
| Service | Role | Status | Notes |
|---|---|---|---|
| data-pipeline | Backend data migration | ops | Bash. Data migration pipeline; runs during cutover / catch-up windows. |
| data-pipeline-setup | Migration environment setup | ops | Bash. Companion to data-pipeline — prepares schema + connectivity. |
| import-history | Historical analytics backfill | ops | NestJS. Referenced by stats-service via the analytics-importer trigger. |
Infrastructure
| Component | Location | Status | Notes |
|---|---|---|---|
| MariaDB | Application Server (host process) | up | Replicates with separate peer machines. |
| PostgreSQL | Application Server | up | Hosts the zulu3 database (canonical user records) plus the symbol-mapping data used by connector-hub. |
| Redis | Application Server | up | redis-stack image. |
| RabbitMQ (on-host) | Application Server | idle | 0 active connections — awaiting consolidation. |
| RabbitMQ (Social Trader legacy) | external (legacy platform) | primary | vhost /zulu — primary bus today. |
| RabbitMQ (ACT 281 broker) | external (ACT platform) | active | vhost /act — order / fill events. |
| MongoDB | Application Server | ⚠ crash-loop | AVX-less host — mongo:latest incompatible. |
| ClickHouse | external analytics platform | up | Analytical sink for stats-service and persistence-service. |