# ACE Framework
The platform is structured around the ADAPTIVE Communication Environment (ACE) Reactor pattern. A single server binary is launched once per role (prepaid, billing, CDR loader, CRM, stats, bill formatter); the role is determined by the TOML configuration file passed via `-i`. Each role activates a different mix of `ACE_Event_Handler` subclasses and `ACE_Acceptor` instances on the shared `ACE_Reactor::instance()`. The same binary therefore expresses every subsystem; only configuration changes the runtime topology.
## Component Topology
```mermaid
graph TB
    subgraph Process["server process (one per role)"]
        Reactor[["ACE_Reactor::instance()"]]
        subgraph Handlers["Event handlers"]
            Acceptor["Acceptor<PrepaidServer, SOCK_Acceptor><br/>(DIAMETER TCP)"]
            CDRLoader["CDRLoader<br/>: ACE_Event_Handler<br/>(timer)"]
            Billing["BillingHandler<br/>: ACE_Event_Handler<br/>(timer)"]
            Stats["StatsHandler<br/>: ACE_Event_Handler<br/>(timer)"]
            BillFmt["BillFormatterHandler<br/>: ACE_Event_Handler<br/>(timer)"]
            CRMSrv["CRMServer<br/>: ACE_Svc_Handler<br/>(detached gRPC thread)"]
            Sig["SignalHandler<br/>: ACE_Event_Handler<br/>(SIGINT/TERM/HUP)"]
        end
        subgraph Singletons["ACE_Singleton instances"]
            UTILS[("UTILS<br/>(TOML + license)")]
            REFDATA[("REFDATA<br/>(price plans, services,<br/>statuses, billcycles,<br/>currencies, taxes)")]
            POOL[("THREAD_POOL<br/>(worker queue)")]
        end
        subgraph Workers["Per-handler worker threads"]
            PrepThr["PrepaidServer threads<br/>(THR_DETACHED, 1 per CCR session)"]
            BillThr["Billing worker threads<br/>(thread pool)"]
            FmtThr["Bill formatter threads<br/>(thread pool)"]
            LoaderThr["CDR loader thread<br/>(thread pool, atomic guard)"]
        end
    end
    DB[("MySQL X DevAPI<br/>session per worker")]
    Reactor --> Acceptor
    Reactor --> CDRLoader
    Reactor --> Billing
    Reactor --> Stats
    Reactor --> BillFmt
    Reactor --> CRMSrv
    Reactor --> Sig
    Acceptor --> PrepThr
    CDRLoader --> LoaderThr
    Billing --> BillThr
    BillFmt --> FmtThr
    PrepThr --> DB
    BillThr --> DB
    FmtThr --> DB
    LoaderThr --> DB
    CRMSrv --> DB
    Stats --> DB
    REFDATA --> DB
    PrepThr -.-> REFDATA
    BillThr -.-> REFDATA
    LoaderThr -.-> REFDATA
    CRMSrv -.-> REFDATA
    Sig -. "shutdown callback" .-> CRMSrv
```
## Design Patterns in Use

### Reactor + Acceptor + Service Handler
The prepaid module uses the canonical ACE triad: `ACE_Reactor` multiplexes the listening socket; `Acceptor<PrepaidServer, _SOCK_Acceptor>` accepts incoming TCP connections; each accepted connection spawns a `PrepaidServer` (an `ACE_Svc_Handler<_SOCK_Stream, ACE_MT_SYNCH>`), which calls `activate(THR_DETACHED, 1, 0)` to run its DIAMETER read loop in a dedicated thread. The handler self-destructs when the peer closes the connection.
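The triad can be illustrated in miniature without ACE: a toy reactor built on plain `poll()` stands in for `ACE_Reactor`, a registered per-fd callback stands in for the service handler. Every name below is an illustrative stand-in, not the project's code.

```cpp
#include <functional>
#include <map>
#include <poll.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// Toy reactor: fd -> callback map dispatched from one poll() loop, the way
// ACE_Reactor::handle_events() drives ACE_Event_Handler::handle_input().
struct ToyReactor {
    std::map<int, std::function<void(int)>> handlers;
    void register_handler(int fd, std::function<void(int)> cb) {
        handlers[fd] = std::move(cb);
    }
    void handle_events() {                       // one loop iteration
        std::vector<pollfd> fds;
        for (auto& [fd, cb] : handlers) fds.push_back({fd, POLLIN, 0});
        poll(fds.data(), fds.size(), 1000);
        for (auto& p : fds)
            if (p.revents & POLLIN) handlers[p.fd](p.fd);
    }
};

std::string demo() {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);     // stands in for an accepted TCP peer
    std::string received;
    ToyReactor reactor;
    reactor.register_handler(sv[0], [&received](int fd) {   // "service handler"
        char buf[16];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) received.assign(buf, n);
    });
    write(sv[1], "CCR", 3);                      // peer sends a request
    reactor.handle_events();                     // reactor dispatches to the handler
    close(sv[0]);
    close(sv[1]);
    return received;
}
```

The real code adds a second stage the toy omits: the accepted handler calls `activate()` so the blocking read loop runs off the reactor thread.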
The CRM gRPC server reuses the `ACE_Svc_Handler` lifecycle for symmetry with the prepaid module, even though gRPC owns its own listening socket. `CRMServer::open()` builds the `grpc::Server`, and `activate()` then runs `grpc_server_->Wait()` in a detached thread. This gives the gRPC server the same registration, signal-handling, and shutdown story as the rest of the platform: `SignalHandler::registerShutdownCallback()` invokes `CRMServer::shutdown()` with a 5-second deadline before the reactor stops, draining in-flight RPCs.
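The lifecycle can be sketched with a stand-in for `grpc::Server` (the real `Wait()`/`Shutdown(deadline)` pair behaves the same way at this level; the toy ignores the deadline):

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Illustrative stand-in: open() builds the server, a detached thread blocks in
// Wait(), and a shutdown callback with a deadline unblocks it.
struct ToyGrpcServer {
    std::mutex m;
    std::condition_variable cv;
    bool stopping = false;
    void Wait() {                        // blocks until Shutdown()
        std::unique_lock lk(m);
        cv.wait(lk, [&] { return stopping; });
    }
    template <class Deadline>
    void Shutdown(Deadline) {            // deadline kept for symmetry; toy ignores it
        { std::lock_guard lk(m); stopping = true; }
        cv.notify_all();
    }
};

bool run_lifecycle() {
    ToyGrpcServer srv;
    std::atomic<bool> wait_returned{false};
    std::thread t([&] { srv.Wait(); wait_returned = true; });  // the activate() thread
    // SignalHandler-style callback: shut down with a 5-second deadline.
    srv.Shutdown(std::chrono::system_clock::now() + std::chrono::seconds(5));
    t.join();
    return wait_returned.load();
}
```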
### Timer-driven event handlers
`CDRLoader`, `BillingHandler`, `StatsHandler`, and `BillFormatterHandler` are scheduled with `ACE_Reactor::instance()->schedule_timer(handler, nullptr, initial_delay, interval)`; their `handle_timeout` is invoked on every tick. Each handler enqueues its actual work onto the singleton thread pool so that the reactor thread never blocks on I/O. `CDRLoader` additionally guards against overlapping cycles with a `std::atomic<bool> is_processing` flag.
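The overlap guard amounts to an atomic exchange in `handle_timeout`. A minimal sketch (class and member names illustrative; the queue stands in for the thread pool):

```cpp
#include <atomic>
#include <functional>
#include <queue>

// Timer ticks arrive on every reactor interval, but a new load cycle is only
// queued if the previous one has finished: exchange(true) returns the old
// value, so a second tick during a running cycle is a no-op.
struct ToyLoader {
    std::atomic<bool> is_processing{false};
    std::queue<std::function<void()>>& pool;   // stand-in for THREAD_POOL
    explicit ToyLoader(std::queue<std::function<void()>>& p) : pool(p) {}
    int handle_timeout() {
        if (is_processing.exchange(true)) return 0;   // a cycle is still running
        pool.push([this] {
            /* ... scan and load CDRs ... */
            is_processing = false;                    // worker clears the guard when done
        });
        return 0;
    }
};
```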
### Singleton
Three `ACE_Singleton<T, ACE_Recursive_Thread_Mutex>` typedefs provide process-wide shared state:
- `UTILS` — TOML configuration (parsed once via `tomlplusplus`) and the active `LicenseValidator`.
- `REFDATA` — in-memory caches of all reference tables (`REF_Price_plan`, `REF_Service`, `REF_Resource`, `REF_Status`, `REF_Resource_status`, `REF_Event_status`, `REF_Billcycle`, `REF_voucher_status`, `REF_Currency`, `REF_Tax`). Loaded at startup; reads are guarded by `std::shared_mutex`. The currency and tax maps drive display-time conversion and VAT stamping respectively — see Convergent Rating and Invoicing.
- `THREAD_POOL` — a C++20 thread pool with a `std::queue<std::function<void()>>` task queue and `std::condition_variable` notification, sized by `system.thread_pool_size`.
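The `THREAD_POOL` shape described above can be sketched in self-contained C++20 (member names are illustrative, not the project's):

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Fixed set of workers draining a std::queue<std::function<void()>> under a
// condition_variable, as in the singleton described above.
class ToyPool {
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
public:
    explicit ToyPool(std::size_t n) {            // n ~ system.thread_pool_size
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock lk(m_);
                        cv_.wait(lk, [this] { return stop_ || !tasks_.empty(); });
                        if (stop_ && tasks_.empty()) return;   // drain, then exit
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();                      // run outside the lock
                }
            });
    }
    void enqueue(std::function<void()> f) {
        { std::lock_guard lk(m_); tasks_.push(std::move(f)); }
        cv_.notify_one();
    }
    ~ToyPool() {                                 // join workers on shutdown
        { std::lock_guard lk(m_); stop_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
};

int run_demo() {
    std::atomic<int> count{0};
    {
        ToyPool pool(4);
        for (int i = 0; i < 100; ++i) pool.enqueue([&count] { ++count; });
    }                                            // destructor drains the queue
    return count;
}
```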
### Adapter layer
`diameter::DiameterAdapter` keeps the DIAMETER protocol stack (`include/diameter/`) free of any `billing::*` types. Only the prepaid `PrepaidServer::svc()` loop binds the two: the adapter pulls the MSISDN, B-number, service id, requested units, and used units out of a `DiameterCCR`, and the `BalanceReserve::fromCCR()` and `Charge::fromCCR()` factory methods produce the internal protobuf objects. The DIAMETER stack is delivered as a separate shared library (`libdiameter.dylib`) and is reusable from the bundled mock client in `src/diameter/diameter_client.cpp`.
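The boundary can be sketched with plain structs (field names are assumptions; the real billing types are generated protobuf classes):

```cpp
#include <cstdint>
#include <string>

// Protocol side (include/diameter/): knows nothing about billing.
struct DiameterCCR {
    std::string msisdn, b_number;
    std::uint32_t service_id = 0, requested_units = 0, used_units = 0;
};

// Billing side: the fromCCR() factory is the only place the two meet,
// so the DIAMETER library never includes a billing header.
struct BalanceReserve {
    std::string msisdn;
    std::uint32_t service_id = 0, units = 0;
    static BalanceReserve fromCCR(const DiameterCCR& ccr) {
        return {ccr.msisdn, ccr.service_id, ccr.requested_units};
    }
};
```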
### Service handler thread isolation
Every worker thread that touches the database creates its own `std::unique_ptr<DB_layer>`. `DB_layer` holds a `std::unique_ptr<mysqlx::Session>`, so each thread owns its own X DevAPI session; no connection is ever shared across threads. `BillingHandler::handle_timeout` instantiates one `DB_layer` to scan for activities; the worker spawned for each activity instantiates a second one inside the lambda body.
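The session-per-thread rule in miniature, with a stand-in for `DB_layer` and `mysqlx::Session`:

```cpp
#include <atomic>
#include <memory>
#include <thread>
#include <vector>

struct FakeSession { /* stands in for mysqlx::Session */ };

// Each DB_layer owns its session; the object is constructed inside the worker
// body, so no connection ever crosses a thread boundary.
struct DB_layer {
    std::unique_ptr<FakeSession> session = std::make_unique<FakeSession>();
};

int run_workers(int n) {
    std::vector<std::thread> threads;
    std::atomic<int> done{0};
    for (int i = 0; i < n; ++i)
        threads.emplace_back([&done] {
            auto db = std::make_unique<DB_layer>();   // thread-local session
            /* ... run queries through db->session ... */
            ++done;
        });
    for (auto& t : threads) t.join();
    return done;
}
```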
## Concurrency Model

### Threads
| Thread | Owner | Lifetime |
|---|---|---|
| Main / reactor | `ACE_Reactor::instance()->run_reactor_event_loop()` in `main.cpp` | Process lifetime; exits when a shutdown signal is delivered. |
| Prepaid worker | `PrepaidServer::activate(THR_DETACHED)` per accepted connection | One per DIAMETER session; exits when `recv_n` returns `<= 0`. |
| Thread-pool workers | `THREAD_POOL::instance()->init(size)` | Process lifetime; size from `system.thread_pool_size`. |
| gRPC workers | gRPC's internal pool, owned by `grpc::Server` | Built in `CRMServer::open()`; drained by `Shutdown(deadline)`. |
### Signal handling
`SIGINT`, `SIGTERM`, and `SIGHUP` are registered on the reactor via `ACE_Sig_Set`. `CRMServer::setupGRPCServer()` blocks these three signals with `pthread_sigmask(SIG_BLOCK, ...)` before `BuildAndStart()` so that gRPC's worker threads inherit the mask and cannot swallow them, then restores the mask in the main thread so the reactor handler still receives them. `SIGTSTP` is intentionally excluded: it is a job-control signal whose default behaviour caused the entire signal handler to be de-registered.
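The mask dance can be demonstrated in isolation (a minimal POSIX sketch; the spawned thread stands in for a gRPC worker, which inherits the creating thread's mask):

```cpp
#include <csignal>
#include <pthread.h>
#include <thread>

// Block the shutdown signals, start a thread (it inherits the mask), then
// restore the original mask so the launching thread still receives them.
bool worker_has_sigterm_blocked() {
    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGINT);
    sigaddset(&block, SIGTERM);
    sigaddset(&block, SIGHUP);
    pthread_sigmask(SIG_BLOCK, &block, &old);         // before BuildAndStart()
    bool inherited = false;
    std::thread t([&inherited] {                      // stand-in for a gRPC worker
        sigset_t cur;
        pthread_sigmask(SIG_SETMASK, nullptr, &cur);  // query-only: set is null
        inherited = sigismember(&cur, SIGTERM) == 1;
    });
    t.join();
    pthread_sigmask(SIG_SETMASK, &old, nullptr);      // restore for the reactor thread
    return inherited;
}
```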
### Reference-data refresh
`Refdata` exposes `refresh()` to reload every reference table under a `std::shared_mutex` write lock; readers hold a shared lock while resolving price plans during rating. The platform does not auto-refresh: callers (currently CRM admin actions, marked TODO in `refdata.hpp`) must trigger `refresh()` explicitly after changing reference data.
## Module Composition by Role
`main.cpp` reads the `[modules]` table from the TOML file and conditionally activates each subsystem. The same binary can host any combination, but the canonical deployment runs one process per role to isolate failures and to allow independent restarts.
| Role | TOML | Modules activated |
|---|---|---|
| Prepaid OCS | `prepaid.toml` | `prepaid` |
| CDR loader | `cdr_loader.toml` | `cdr_loader` |
| Billing (per cycle) | `billing1.toml` … `billing4.toml` | `billing` (one process per `billing.billcycle`) |
| Bill formatter | `bill_formatter.toml` | `bill_formatter` |
| CRM | `crm.toml` | `crm` |
| Stats | `stats.toml` | `stats` |
| Mass prepaid rating | `prepaid_mass_rating.toml` | `billing` + `prepaid_mass_rating` (developer / load-test only) |
`license/platform.lic` is checked at startup by every role; failure of any of the five validation steps (signature, magic, hardware ID, expiry, LKGT) terminates the process before any module is initialised.
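The fail-fast shape of that startup check can be sketched as a short-circuiting chain (the check functions are placeholders, not the project's API):

```cpp
#include <array>
#include <functional>

// Each of the five checks runs in order; the first failure aborts startup
// before any module is initialised.
bool validate_license(const std::array<std::function<bool()>, 5>& steps) {
    for (const auto& step : steps)   // signature, magic, hardware ID, expiry, LKGT
        if (!step()) return false;
    return true;
}
```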
## Generated Code and the gRPC Surface
`proto/types.proto` is the single source of truth for both the wire schema and the in-memory data model. CMake's `add_custom_command` invokes `protoc` with the C++ and gRPC plugins to produce `types.pb.{h,cc}` and `types.grpc.pb.{h,cc}` in `${CMAKE_BINARY_DIR}/generated`. If `protoc-gen-grpc_php_plugin` is on the path, an additional `generate_php_proto` target writes PHP stubs into `generated/php/` for consumption by `crm2.micro.bss`. The proto file defines the `billing` package — every internal data structure (`CDR`, `Charge`, `Balance_reserve`, `Subscriber`, `Bill`, `Voucher`, etc.) and every CRM RPC.