Technical details

Filters, stages, and the shape of an event rule.

CENT's customizing is small enough to fit in your head. Three nested entities (Project → Object → Event), a filter row with five interesting columns, and a queue with five interesting states. This page walks through each of them in the order they matter at runtime.

The data model

Three nested entities, owned by one project.

Read the customizing tables top-down and the meaning falls out:

Project                 (zcent_prj)            # MYHUB, PRICING_COCKPIT_AUTOMATION, REA_REPLICATION, …
 └── Object             (zcent_prjobj)         # MAT_FULL, COND_A, EINKBELEG, KRED, WLK1, …
      ├── Filter Values (zcent_fltv)           # DISPLAY, MMSTA_NOT_30, KHTMACHINE, PRODH_00001, …
      └── Event         (zcent_objevn)         # MYHUB_AT002_RPT_TO_BE_REPEATED, MYHUB_AT011_NATIONAL_PRICE, …
            └── Filter  (zcent_objevnf)        # one row per condition the event must satisfy

Receiver Group          (zcent_grp)            # exactly one per project, named like the project
 ├── Consumers          (zcent_grpcns)         # registered ABAP class implementations
 ├── Filter Values      (zcent_grpfv)          # group-scoped overrides like MY_DC, ARTICLE_TYPE
 └── Object/Event       (zcent_grpobj/grpevn)  # with Retry Mode + Retry Count per event
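
To make the ownership chain concrete, here is a minimal ABAP read-through of the hierarchy for one project. It is a sketch only: the tables are the ones in the tree, but the column names (PROJECT, OBJECT, EVENT) are placeholders, not CENT's documented field names.

REPORT zcent_demo_read_model.

* Sketch only: PROJECT / OBJECT / EVENT are placeholder column names.
DATA lt_objects TYPE STANDARD TABLE OF zcent_prjobj.
DATA lt_events  TYPE STANDARD TABLE OF zcent_objevn.
DATA lt_filters TYPE STANDARD TABLE OF zcent_objevnf.

" Every object the project captures
SELECT * FROM zcent_prjobj
  WHERE project = 'MYHUB'
  INTO TABLE @lt_objects.

" Every event defined on those objects
IF lt_objects IS NOT INITIAL.
  SELECT * FROM zcent_objevn
    FOR ALL ENTRIES IN @lt_objects
    WHERE project = @lt_objects-project
      AND object  = @lt_objects-object
    INTO TABLE @lt_events.
ENDIF.

" Every filter row those events must satisfy (ANDed at runtime)
IF lt_events IS NOT INITIAL.
  SELECT * FROM zcent_objevnf
    FOR ALL ENTRIES IN @lt_events
    WHERE event = @lt_events-event
    INTO TABLE @lt_filters.
ENDIF.
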
The filter row

The whole rule engine boils down to this row.

Each Event owns a list of filter rows in zcent_objevnf. Every row is one condition a captured change has to satisfy. They're ANDed together. The five columns that make a filter expressive are Filter Value, Data Source Stage, Table Type, Table Name, and Field Name.

Filter             Description               Group  Stage    Table Type              Table Name      Field        Filter Value
ARTICLE_CATEGORY   Article Category          0      Main     Merged Tables           MYHUB_MAT_FULL  ATTYP        DISPLAY
DC_ARTICLE_STATUS  DC Article Status         0      Old      Merged Tables           MYHUB_MAT_FULL  MMSTA        STATUS_LIST_AT002
STATUS_CHANGE      DC Article Status Change  0      Updated  Merged Tables           MYHUB_MAT_FULL  MMSTA        STATUS_21
RP_TYPE            RP Type on DC level       0      Main     Change Document Tables  MARC            DISMM        RP_TYPE
WLK1_EXISTS        Listing Check             0      Main     Merged Tables           MYHUB_MAT_FULL  WLK1_EXISTS  NOT_EMPTY

Five rows. Together they say: "fire this event when a DISPLAY-category article that has a listing transitions from one of the AT002 status values to status 21, with an RP-Type configured." No code.
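
That AND is the whole evaluation model. The sketch below shows the loop; the ty_filter structure and the filter_matches( ) helper, which would resolve the Filter Value against zcent_fltv and read the field from the chosen stage, are invented for illustration and are not CENT's API.

TYPES: BEGIN OF ty_filter,
         fieldname    TYPE string,   " e.g. MMSTA
         stage        TYPE string,   " Main / Old / New / Updated / …
         filter_value TYPE string,   " e.g. STATUS_21, resolved via zcent_fltv
       END OF ty_filter,
       ty_filters TYPE STANDARD TABLE OF ty_filter WITH EMPTY KEY.

CLASS lcl_rule DEFINITION.
  PUBLIC SECTION.
    METHODS event_fires
      IMPORTING it_filters     TYPE ty_filters
      RETURNING VALUE(rv_fire) TYPE abap_bool.
  PRIVATE SECTION.
    METHODS filter_matches
      IMPORTING is_filter       TYPE ty_filter
      RETURNING VALUE(rv_match) TYPE abap_bool.
ENDCLASS.

CLASS lcl_rule IMPLEMENTATION.
  METHOD event_fires.
    " Filter rows are ANDed: one miss and the event stays silent.
    rv_fire = abap_true.
    LOOP AT it_filters INTO DATA(ls_filter).
      IF filter_matches( ls_filter ) = abap_false.
        rv_fire = abap_false.
        RETURN.
      ENDIF.
    ENDLOOP.
  ENDMETHOD.

  METHOD filter_matches.
    " Placeholder: the real check reads is_filter-fieldname from the
    " stage-specific image and compares it against the Filter Value list.
    rv_match = abap_true.
  ENDMETHOD.
ENDCLASS.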

Data Source Stages

How a filter "sees" the change.

A change-document row has a before-image and an after-image. Different filters care about different views of the same row. The Data Source Stage dropdown picks which view this filter looks at.

Main — "the change happened, in any direction" (New OR Additional OR Deleted)
New — the after-image only
Old — the before-image only
Updated — Old <> New AND Old <> space (a real value-to-value change)
Inserted — a row that didn't exist before
Deleted — a row that exists in the before-image but not after
Edited — Updated OR Inserted (with an optional X-structure variant for low-level reads)
Additional — a row CENT pulled in from a related table during merge

The Table Type dropdown sits next to Stage and selects the data source the filter reads from: Change Document Tables (MARA, MARC…), View Tables (the one the project defined), Merged Tables (the joined view CENT builds at runtime), or System Tables.
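
Written out as predicates over one captured row, the stage definitions read roughly like the fragment below. The ty_row structure and its flags are invented for the example; Main and Additional depend on the merge step and are left out.

TYPES: BEGIN OF ty_row,
         in_before TYPE abap_bool,   " row present in the before-image
         in_after  TYPE abap_bool,   " row present in the after-image
         old_value TYPE string,      " field value before the save
         new_value TYPE string,      " field value after the save
       END OF ty_row.

" One captured row: DC article status went from 30 to 21.
DATA(ls_row) = VALUE ty_row( in_before = abap_true in_after = abap_true
                             old_value = '30'      new_value = '21' ).

" Old and New are not predicates: they only pick which image the field
" value is read from before the Filter Value comparison.
DATA(lv_inserted) = xsdbool( ls_row-in_before = abap_false AND ls_row-in_after = abap_true ).
DATA(lv_deleted)  = xsdbool( ls_row-in_before = abap_true  AND ls_row-in_after = abap_false ).
DATA(lv_updated)  = xsdbool( ls_row-old_value <> ls_row-new_value
                         AND ls_row-old_value IS NOT INITIAL ).   " Old <> New AND Old <> space
DATA(lv_edited)   = xsdbool( lv_updated = abap_true OR lv_inserted = abap_true ).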

The bgRFC queue

Five states. No scheduler.

Once a captured row enters zcent_cdqueue, it moves through these states; ZCENT_CDQUEUE_PROC is the live view of the queue. A sketch of the retry decision follows the table.

State   Label        Meaning
QUEUED  Awaiting     A worker hasn't picked the unit up yet. The default state on insert.
PROC    Processing   A worker has the unit and is calling the consumer class.
DONE    Forwarded    Consumer returned without exception. CDHDR row marked FORW.
FAIL    Errored      Consumer raised. If retries remain, transitions to RETRY; otherwise stays FAIL.
RETRY   Re-queueing  Bumped back to QUEUED with retryNo + 1. CDHDR row marked REPRO when it eventually succeeds.
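
What the FAIL and RETRY rows describe is a single decision at the end of each unit. Here is a sketch of it; the STATUS, RETRY_NO and MAX_RETRIES fields are assumptions, not the real columns of zcent_cdqueue.

CLASS lcl_queue DEFINITION.
  PUBLIC SECTION.
    TYPES: BEGIN OF ty_unit,
             status      TYPE string,   " QUEUED / PROC / DONE / FAIL / RETRY
             retry_no    TYPE i,
             max_retries TYPE i,        " from the event's Retry Count
           END OF ty_unit.
    METHODS finish_unit
      IMPORTING iv_failed TYPE abap_bool
      CHANGING  cs_unit   TYPE ty_unit.
ENDCLASS.

CLASS lcl_queue IMPLEMENTATION.
  METHOD finish_unit.
    " Called once the consumer class has returned (or raised) for one unit.
    IF iv_failed = abap_false.
      cs_unit-status = 'DONE'.                  " CDHDR row marked FORW
    ELSEIF cs_unit-retry_no < cs_unit-max_retries.
      cs_unit-status   = 'RETRY'.               " bumped back to QUEUED on the next pass
      cs_unit-retry_no = cs_unit-retry_no + 1.
    ELSE.
      cs_unit-status = 'FAIL'.                  " retries exhausted
    ENDIF.
  ENDMETHOD.
ENDCLASS.
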
Why not just a job?

For comparison: the same problem, the old way.

Job-based polling

Nightly / hourly batch

  • Scheduler config + monitoring + on-call coverage
  • Latency = next run window (up to N hours stale)
  • Re-reads CDPOS from disk every cycle, even when nothing changed
  • Per-downstream re-implementation of capture logic
  • Single point of failure: one missed run = one missed event
  • Hard to scale: bigger window = bigger query, not more parallelism

CENT (real-time, bgRFC)

Capture at save, dispatch in parallel

  • No scheduler. Capture is part of the user's save.
  • Latency measured in hundreds of milliseconds.
  • Read happens once, shared by every downstream.
  • Capture lives in customizing, not in code.
  • Failed dispatches replay automatically per event Retry Mode.
  • Throughput scales by adding workers to the bgRFC pool.

Walk through it yourself

ZCENT_SET shows the Dialog-Structure tree. Click around the projects, objects, and events to see this whole model in motion.