
Engines

An engine is a configured instance of a Nexio pattern. Each engine has an engine_type that determines its behavior:
| Engine type | What it does |
| --- | --- |
| placement | Ranks offerings from multiple providers into optimal solutions |
| entity_analysis | Analyzes current state against requirements, finds gaps and recommendations |
More engine types are planned. See the Headless Engine Setup guide for creating and configuring engines via API.

Runs

A run is an async evaluation submitted to POST /api/v1/engines/{engine_slug}/runs. Each run gets a run_id and is processed through the engine’s pipeline. Poll GET /api/v1/runs/{run_id} for results.
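The submit-and-poll loop can be sketched as below. This is a minimal illustration of the flow described above, not the official client: `fake_api` is a hypothetical stand-in for an authenticated HTTP call, and the `status` values are assumptions — check the Get Run Status schema for the real ones.

```python
import time

# Hypothetical stand-in for an HTTP client; a real client would make an
# authenticated request to the Nexio API and parse the JSON response.
def fake_api(method, path, body=None):
    if method == "POST":
        return {"run_id": "run_123"}                  # POST .../runs returns a run_id
    return {"status": "completed", "output": {}}      # GET /api/v1/runs/{run_id}

def submit_and_poll(engine_slug, payload, interval=2.0):
    run = fake_api("POST", f"/api/v1/engines/{engine_slug}/runs", payload)
    run_id = run["run_id"]
    while True:
        status = fake_api("GET", f"/api/v1/runs/{run_id}")
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)

result = submit_and_poll("my-placement-engine", {"input": {}})
```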

Input

input is the subject’s profile and evaluation context. What goes in input depends on your engine’s configuration and domain.
Field names like coverage_types and premium reflect the current API contract. Your engine’s configuration determines which input fields are relevant. See the Get Contract endpoint for your engine’s exact input schema.
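As a rough sketch, an `input` payload might look like the following. The `coverage_types` and `premium` fields are named above; the values and the `appetite_bucket` placement are illustrative assumptions — your engine's contract defines the real schema.

```python
# Illustrative payload only; field values are hypothetical.
payload = {
    "input": {
        "coverage_types": ["property", "liability"],  # field name from the API contract
        "premium": 12500,                             # field name from the API contract
        "appetite_bucket": "balanced",                # ranking strategy (see below)
    }
}
```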

Offerings

offerings is the array of provider offerings Nexio evaluates in a placement engine. Each offering represents one provider’s entry for one requirement category. If you have 3 providers each offering 3 categories, that’s 9 offerings.
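The 3 × 3 = 9 arithmetic above can be made concrete. Provider and category names here are hypothetical; the point is that each offering is one (provider, category) pair.

```python
providers = ["provider_a", "provider_b", "provider_c"]
categories = ["property", "liability", "auto"]

# One offering per (provider, category) pair: 3 providers x 3 categories = 9.
offerings = [
    {"provider_name": p, "category": c}
    for p in providers
    for c in categories
]
assert len(offerings) == 9
```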

Solutions

Nexio assembles offerings into ranked solutions — complete packages that satisfy the stated requirements. Solutions may combine offerings from a single provider or mix providers, whichever combinations score best. Each solution has a rank (1 = best) and a cluster_label that categorizes it: recommended, best_value, best_coverage, or simplest.

Engine Types

The engine you call determines the contract and the pipeline that runs.
| Engine type | Typical payload | Typical result |
| --- | --- | --- |
| placement | input plus top-level offerings | Ranked solutions with scorecards |
| entity_analysis | input only, shaped by the engine contract | Requirements, gaps, and profile summary |
Use GET /api/v1/engines/{engine_slug}/contract when you need the exact field-level contract for a specific engine.

Solutions (detail)

A solution is a ranked package. Each includes:
  • offerings — the selected items in the package
  • provider_count — 1 = single provider, 2+ = mixed
  • est_cost_low / est_cost_high — annual cost range
  • scorecard — multi-dimension evaluation
  • rank and cluster_label
Provider identity is on solutions[*].offerings[*].provider_name, not on the solution root.
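A small sketch of reading a solution, showing where provider identity lives. The solution shape follows the field list above; the concrete values are hypothetical.

```python
# A solution's providers live on its offerings, not on the solution root.
solution = {
    "rank": 1,
    "cluster_label": "recommended",
    "provider_count": 2,
    "offerings": [
        {"provider_name": "provider_a", "category": "property"},
        {"provider_name": "provider_b", "category": "liability"},
    ],
}

providers = sorted({o["provider_name"] for o in solution["offerings"]})
assert len(providers) == solution["provider_count"]
```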

Ranking strategy

appetite_bucket controls how Nexio weights the scorecard dimensions when ranking solutions. Set it in input to express the priority:
| appetite_bucket | Emphasis |
| --- | --- |
| coverage_first | Broadest coverage, strongest offerings |
| cost_sensitive | Lowest cost within quality bounds |
| simplicity | Fewest providers, easiest to accept |
| balanced | Even weighting across all dimensions |
When omitted, Nexio infers it from submission signals. The top-ranked solution’s label is returned in output.top_label and on each solution as cluster_label. Labels like recommended, best_value, best_coverage, and simplest reflect where each solution excels — they’re the output side of the ranking strategy.
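One way to picture the bucket-to-weights mapping is below. The actual weights are internal to Nexio and not published; the numbers here are purely illustrative, and the fallback to `balanced` stands in for the inference Nexio performs when the field is omitted.

```python
# Hypothetical weight presets -- the real weights are internal to Nexio.
WEIGHTS = {
    "coverage_first": {"coverage": 0.5, "cost": 0.2, "simplicity": 0.3},
    "cost_sensitive": {"coverage": 0.2, "cost": 0.5, "simplicity": 0.3},
    "simplicity":     {"coverage": 0.2, "cost": 0.2, "simplicity": 0.6},
    "balanced":       {"coverage": 1/3, "cost": 1/3, "simplicity": 1/3},
}

def weights_for(appetite_bucket=None):
    # When the bucket is omitted, Nexio infers one from submission signals;
    # "balanced" stands in for that inference in this sketch.
    return WEIGHTS.get(appetite_bucket, WEIGHTS["balanced"])
```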

Scorecards

Six dimensions, each rated L1–L4 (lower is better):
  • coverage_completeness — how completely the solution addresses requirements
  • pricing_competitiveness — cost relative to alternatives and budget
  • provider_quality — provider ratings and reputation
  • placement_likelihood — likelihood of successful acceptance
  • operational_simplicity — number of providers, complexity of execution
  • risk_alignment — fit between provider specialties and the subject’s profile
The package score is scorecard.overall_level — a weighted average based on the ranking strategy.
Dimension names like coverage_completeness reflect the default engine configuration. Custom engines can define their own scoring dimensions via the Update Config endpoint.
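The weighted-average computation behind `overall_level` can be sketched like this. The dimension names match the defaults listed above; the levels and the even weighting are illustrative assumptions (the real weights depend on the ranking strategy).

```python
# Dimension levels are L1-L4, lower is better; values here are examples.
scorecard = {
    "coverage_completeness": 1,
    "pricing_competitiveness": 2,
    "provider_quality": 1,
    "placement_likelihood": 2,
    "operational_simplicity": 3,
    "risk_alignment": 1,
}

# Even weighting stands in for the strategy-dependent weights.
weights = {dim: 1 / len(scorecard) for dim in scorecard}

overall_level = round(
    sum(level * weights[dim] for dim, level in scorecard.items()), 2
)
```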

The pipeline

Placement

| Stage | What happens |
| --- | --- |
| INTAKE | Parse submission, infer requirements |
| FILTER | Remove offerings that fail hard constraints (region, category, capacity) |
| MATCH | Match remaining offerings to requirements by category |
| ASSEMBLE | Build candidate solutions with bundle discounts |
| EVALUATE | Score across dimensions, rank, and label |
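As a sketch of the FILTER stage from the table above: offerings failing a hard constraint are dropped before matching. The constraint fields (`region`, `capacity`) are assumptions based on the examples the table gives.

```python
# Sketch of FILTER: drop offerings that fail hard constraints.
def filter_offerings(offerings, required_region, required_capacity):
    return [
        o for o in offerings
        if o["region"] == required_region and o["capacity"] >= required_capacity
    ]

offerings = [
    {"provider_name": "a", "region": "us", "capacity": 5_000_000},
    {"provider_name": "b", "region": "eu", "capacity": 5_000_000},  # wrong region
    {"provider_name": "c", "region": "us", "capacity": 100_000},    # too little capacity
]

kept = filter_offerings(offerings, required_region="us", required_capacity=1_000_000)
```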

Entity analysis

| Step | What happens |
| --- | --- |
| Infer Requirements | Generate needs from the subject’s profile and context |
| Punch Card Check | Compare against current portfolio (MET / UNMET) |
| Deterministic Checks | Rule-based gap detection |
| LLM Adequacy | AI analysis of adequacy |
| Merge & Dedupe | Combine gaps, sort by severity (HIGH → MEDIUM → LOW) |
Returns gaps with severity, recommendations, profile summary, and flags. See the Get Run Status response schema for full field details.
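The Merge & Dedupe step can be sketched as follows. Deduplicating by `category` and keeping the first occurrence are assumptions for illustration; only the HIGH → MEDIUM → LOW ordering comes from the table above.

```python
# Sketch of Merge & Dedupe: combine gaps from the deterministic and LLM
# checks, drop duplicates by category, sort HIGH -> MEDIUM -> LOW.
SEVERITY_ORDER = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def merge_gaps(*gap_lists):
    seen, merged = set(), []
    for gap in (g for gaps in gap_lists for g in gaps):
        if gap["category"] not in seen:   # keep first occurrence (assumption)
            seen.add(gap["category"])
            merged.append(gap)
    return sorted(merged, key=lambda g: SEVERITY_ORDER[g["severity"]])

deterministic = [{"category": "cyber", "severity": "LOW"}]
llm = [
    {"category": "flood", "severity": "HIGH"},
    {"category": "cyber", "severity": "MEDIUM"},  # duplicate category, dropped
]
gaps = merge_gaps(deterministic, llm)
```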

Rate limits

Rate limits are enforced on the public API. A 429 Too Many Requests response includes a Retry-After header indicating how long to wait before retrying. Contact support@usenexio.com for higher limits.
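A minimal sketch of honoring Retry-After on a 429, with `do_request` as a hypothetical stand-in for the actual HTTP call:

```python
import time

# `do_request` returns (status_code, headers, body); stubbed here for illustration.
def request_with_retry(do_request, max_attempts=3):
    for _ in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return body
        # Wait the server-advised interval before retrying.
        time.sleep(float(headers.get("Retry-After", 1)))
    raise RuntimeError("rate limited after retries")

# Simulated responses: one 429, then success.
calls = iter([
    (429, {"Retry-After": "0"}, None),
    (200, {}, {"ok": True}),
])
result = request_with_retry(lambda: next(calls))
```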