Engines
An engine is a configured instance of a Nexio pattern. Each engine has an engine_type that determines its behavior:
| Engine type | What it does |
|---|---|
| placement | Ranks offerings from multiple providers into optimal solutions |
| entity_analysis | Analyzes current state against requirements, finds gaps and recommendations |
Runs
A run is an async evaluation submitted to POST /api/v1/engines/{engine_slug}/runs. Each run gets a run_id and is
processed through the engine’s pipeline. Poll GET /api/v1/runs/{run_id} for results.
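The submit-then-poll flow can be sketched in Python. The base URL and Bearer auth scheme here are assumptions; substitute whatever your environment and credentials require.

```python
import json
from urllib import request

BASE_URL = "https://api.usenexio.com"  # assumed base URL; check your environment


def build_submit_request(engine_slug: str, payload: dict, api_key: str) -> request.Request:
    """Build the POST that submits a run to an engine."""
    url = f"{BASE_URL}/api/v1/engines/{engine_slug}/runs"
    body = json.dumps(payload).encode("utf-8")
    return request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # auth scheme is an assumption
        },
    )


def poll_url(run_id: str) -> str:
    """URL to poll (GET) for a run's results."""
    return f"{BASE_URL}/api/v1/runs/{run_id}"
```

Send the built request with `urllib.request.urlopen` (or your HTTP client of choice), then GET the poll URL until the run completes.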
Input
input is the subject’s profile and evaluation context. What goes in input depends on your engine’s configuration and domain.
Field names like coverage_types and premium reflect the current API contract. Your engine’s configuration determines which input fields are relevant. See the Get Contract endpoint for your engine’s exact input schema.
Offerings
offerings is the array of provider offerings Nexio evaluates in a placement engine. Each offering represents one provider’s
entry for one requirement category. If you have 3 providers each offering 3 categories, that’s 9 offerings.
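The 3 × 3 = 9 arithmetic falls out directly when you build the array as one entry per provider–category pair. The provider and category names below are hypothetical, and the exact per-offering fields depend on your engine’s contract:

```python
# Hypothetical providers and requirement categories.
providers = ["alpha_provider", "beta_provider", "gamma_provider"]
categories = ["liability", "property", "cyber"]

# One offering per (provider, category) pair: 3 providers x 3 categories = 9.
offerings = [
    {"provider_name": p, "category": c}
    for p in providers
    for c in categories
]
```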
Solutions
Nexio assembles offerings into ranked solutions — complete packages that satisfy the stated requirements. Solutions may combine offerings from a single provider or mix providers, whichever combinations score best. Each solution has a rank (1 = best) and a cluster_label that categorizes it:
recommended, best_value, best_coverage, or simplest.
Engine Types
The engine you call determines the contract and the pipeline that runs.
| Engine type | Typical payload | Typical result |
|---|---|---|
| placement | input plus top-level offerings | Ranked solutions with scorecards |
| entity_analysis | input only, shaped by the engine contract | Requirements, gaps, and profile summary |
Call GET /api/v1/engines/{engine_slug}/contract when you need the exact field-level contract for a specific engine.
Solutions (detail)
A solution is a ranked package. Each includes:
- offerings — the selected items in the package
- provider_count — 1 = single provider, 2+ = mixed
- est_cost_low / est_cost_high — annual cost range
- scorecard — multi-dimension evaluation
- rank and cluster_label
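A minimal sketch of consuming that shape (the two solutions and their values are invented for illustration):

```python
# Two hypothetical solutions in the shape described above.
solutions = [
    {"rank": 2, "cluster_label": "best_value", "provider_count": 2,
     "est_cost_low": 9000, "est_cost_high": 11000,
     "offerings": [{"provider_name": "alpha_provider"},
                   {"provider_name": "beta_provider"}]},
    {"rank": 1, "cluster_label": "recommended", "provider_count": 1,
     "est_cost_low": 12000, "est_cost_high": 14000,
     "offerings": [{"provider_name": "gamma_provider"}]},
]

best = min(solutions, key=lambda s: s["rank"])  # rank 1 = best
# Provider names live on each offering, not on the solution root.
providers_in_best = [o["provider_name"] for o in best["offerings"]]
```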
Provider names appear at solutions[*].offerings[*].provider_name, not on the solution root.
Ranking strategy
appetite_bucket controls how Nexio weights the scorecard dimensions when ranking solutions.
Set it in input to express the priority:
| appetite_bucket | Emphasis |
|---|---|
| coverage_first | Broadest coverage, strongest offerings |
| cost_sensitive | Lowest cost within quality bounds |
| simplicity | Fewest providers, easiest to accept |
| balanced | Even weighting across all dimensions |
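For example, a cost-driven placement run would set the bucket in input. Only appetite_bucket is shown concretely here; the remaining input fields depend on your engine’s contract:

```python
# Sketch of a placement run payload. appetite_bucket goes inside input;
# placement engines also take a top-level offerings array.
payload = {
    "input": {
        "appetite_bucket": "cost_sensitive",  # rank cheapest-within-quality first
        # ...subject profile fields per your engine's contract...
    },
    "offerings": [],  # populate with provider offerings before submitting
}
```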
The chosen emphasis surfaces in output.top_label and on each solution as
cluster_label. Labels like recommended, best_value, best_coverage, and simplest
reflect where each solution excels — they’re the output side of the ranking strategy.
Scorecards
Six dimensions, each rated L1–L4 (lower is better):
- coverage_completeness — how completely the solution addresses requirements
- pricing_competitiveness — cost relative to alternatives and budget
- provider_quality — provider ratings and reputation
- placement_likelihood — likelihood of successful acceptance
- operational_simplicity — number of providers, complexity of execution
- risk_alignment — fit between provider specialties and the subject’s profile
Dimensions roll up into scorecard.overall_level — a weighted average based on the ranking strategy.
Dimension names like coverage_completeness reflect the default engine configuration. Custom engines can define their own scoring dimensions via the Update Config endpoint.
The pipeline
Placement
| Stage | What happens |
|---|---|
| INTAKE | Parse submission, infer requirements |
| FILTER | Remove offerings that fail hard constraints (region, category, capacity) |
| MATCH | Match remaining offerings to requirements by category |
| ASSEMBLE | Build candidate solutions with bundle discounts |
| EVALUATE | Score across dimensions, rank, and label |
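The FILTER stage can be illustrated with a simple predicate over the hard constraints named above (region, category, capacity). The field names and data are illustrative; the real implementation is internal to Nexio:

```python
def passes_hard_constraints(offering: dict, required_region: str,
                            allowed_categories: set, min_capacity: int) -> bool:
    """True if the offering survives every hard constraint."""
    return (offering["region"] == required_region
            and offering["category"] in allowed_categories
            and offering["capacity"] >= min_capacity)


offerings = [
    {"provider_name": "alpha", "region": "US", "category": "property", "capacity": 5},
    {"provider_name": "beta", "region": "EU", "category": "property", "capacity": 5},  # wrong region
    {"provider_name": "gamma", "region": "US", "category": "cyber", "capacity": 1},   # too little capacity
]

kept = [o for o in offerings
        if passes_hard_constraints(o, "US", {"property", "cyber"}, 2)]
```

Only the surviving offerings proceed to the MATCH stage.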
Entity analysis
| Step | What happens |
|---|---|
| Infer Requirements | Generate needs from the subject’s profile and context |
| Punch Card Check | Compare against current portfolio (MET / UNMET) |
| Deterministic Checks | Rule-based gap detection |
| LLM Adequacy | AI analysis of adequacy |
| Merge & Dedupe | Combine gaps, sort by severity (HIGH → MEDIUM → LOW) |
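The Merge & Dedupe step can be sketched as combining the gap lists from the rule-based and LLM passes, dropping duplicates, and sorting by severity. The gap fields here are illustrative, not the exact output schema:

```python
SEVERITY_ORDER = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}


def merge_gaps(*gap_lists):
    """Combine gap lists, drop duplicates, sort HIGH -> MEDIUM -> LOW."""
    seen, merged = set(), []
    for gaps in gap_lists:
        for gap in gaps:
            key = (gap["category"], gap["description"])
            if key not in seen:
                seen.add(key)
                merged.append(gap)
    return sorted(merged, key=lambda g: SEVERITY_ORDER[g["severity"]])


rule_gaps = [{"category": "cyber", "description": "No cyber coverage", "severity": "HIGH"}]
llm_gaps = [
    {"category": "cyber", "description": "No cyber coverage", "severity": "HIGH"},  # duplicate
    {"category": "property", "description": "Low property limit", "severity": "MEDIUM"},
]
gaps = merge_gaps(rule_gaps, llm_gaps)
```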
Rate limits
Rate limits are enforced on the public API. A 429 Too Many Requests response includes a Retry-After header.
Contact support@usenexio.com for higher limits.
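A client can honor the Retry-After header before retrying. This sketch handles only the delay-in-seconds form of the header (Retry-After may also carry an HTTP-date, which is not parsed here):

```python
def retry_after_seconds(headers: dict, default: float = 1.0) -> float:
    """Seconds to wait per a 429's Retry-After header (seconds form only)."""
    value = headers.get("Retry-After")
    try:
        return float(value)
    except (TypeError, ValueError):
        # Header missing or in HTTP-date form; fall back to a default backoff.
        return default
```

Sleep for the returned number of seconds, then retry the request.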