Your assessment is only as good as the person actually taking it

Active Integrity Probing™ detects AI-assisted responses during the conversation itself — in real time, on every plan.

Live Assessment — Callback Probe
“You mentioned your enterprise discovery framework — how would you apply it here specifically?”
“Well... I think I mentioned something around understanding needs first...”

⚠ Integrity Signal

Response latency: 8.4s (baseline: 2.1s)

Recall specificity: LOW

Confidence markers: absent

Integrity confidence: 0.31
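The latency signal above compares each probe response against the candidate's own conversational baseline. A minimal sketch of that comparison, assuming a simple ratio threshold (the function names and the 2.5x cutoff are illustrative, not Miki's actual implementation):

```python
# Hypothetical sketch of a latency-based integrity signal scored against
# a candidate's own baseline. All names and thresholds are illustrative.

def latency_ratio(probe_latency_s: float, baseline_s: float) -> float:
    """Ratio of probe response latency to the candidate's baseline latency."""
    return probe_latency_s / baseline_s

def flag_latency(probe_latency_s: float, baseline_s: float,
                 ratio_threshold: float = 2.5) -> bool:
    """Flag when a probe response is markedly slower than baseline."""
    return latency_ratio(probe_latency_s, baseline_s) >= ratio_threshold

print(flag_latency(8.4, 2.1))  # 4.0x baseline -> True
print(flag_latency(2.3, 2.1))  # near baseline -> False
```

In this sketch, the 8.4s probe response against a 2.1s baseline trips the flag; a near-baseline response does not.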

Passive guardrails can watch the screen. They can't interrogate the answer.

Where generic AI assessments break
What looks fine on the surface
The answer sounds polished, on-topic, and grammatically clean.
What you still don't know
Whether the candidate generated it themselves or stitched it together with outside AI help.
Why Miki is different
Because Miki controls the conversation, it can challenge the answer inside the moment where the signal matters.
Probe pressure in practice
Candidate gives a polished framework answer.
Miki asks them to restate it in a new context.
Latency spikes, specificity drops, and the confidence language disappears.
Baseline latency: 2.1s
Probe latency: 8.4s
Register shift: detected
Human review: recommended
Integrity signal strength: Actionable

Miki tests ownership of the answer while the conversation is still live

Assessment Timeline — Probe Firing

Turn 1–3: Normal conversation    ●──●──●
Turn 4: Callback probe           ●──⚡──●  [8.4s]
Turn 5–6: Normal conversation    ●──●──●
Turn 7: Pivot probe              ●──⚡──●  [6.2s]
Turn 10: Paraphrase probe        ●──⚡──●  [7.1s]

Integrity status: Review Recommended
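The timeline above interleaves probes with normal conversation turns. One way to sketch that kind of schedule (the scheduling rule and names are assumptions for illustration, not Miki's actual logic):

```python
# Illustrative sketch of interleaving probes with normal turns, as in a
# 10-turn assessment with probes at turns 4, 7, and 10.

def plan_turns(total_turns: int, probe_turns: dict[int, str]) -> list[str]:
    """Label each turn as a probe type or a normal conversation turn."""
    return [probe_turns.get(t, "normal") for t in range(1, total_turns + 1)]

schedule = plan_turns(10, {4: "callback", 7: "pivot", 10: "paraphrase"})
print(schedule)
# ['normal', 'normal', 'normal', 'callback', 'normal', 'normal',
#  'pivot', 'normal', 'normal', 'paraphrase']
```

Spacing probes between stretches of normal conversation keeps the interaction natural while still sampling the candidate under pressure at multiple points.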

Probe library
01 — Callback probe · Live
Asks the candidate to reuse a claim they made earlier and apply it specifically.
02 — Pivot probe · Live
Changes context quickly to test whether the response still sounds owned and coherent.
03 — Paraphrase probe · Live
Requests the idea again in different language to catch generated phrasing and low recall.
04 — Contradiction probe · Live
Applies light tension against an earlier answer to test consistency under pressure.
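The four probe types above can be summarized as a small catalog. A minimal sketch as a data structure (the dataclass and field names are illustrative, not Miki's API):

```python
# Hypothetical catalog of the four probe types described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Probe:
    name: str
    goal: str

PROBE_LIBRARY = [
    Probe("callback", "reuse an earlier claim and apply it specifically"),
    Probe("pivot", "change context quickly to test owned, coherent responses"),
    Probe("paraphrase", "restate the idea in new language to catch generated phrasing"),
    Probe("contradiction", "apply light tension against an earlier answer"),
]

print([p.name for p in PROBE_LIBRARY])
# ['callback', 'pivot', 'paraphrase', 'contradiction']
```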

A confidence readout that still keeps a human in the loop

High confidence

0.94 — Verified ✓

All probe responses stayed within baseline and no anomaly signatures were detected.

Latency stable
Register stable
Recall specificity: strong
Review urgency: low
Review recommended

0.31 — Review ⚠

Latency rose +300% on callback probes (8.4s against a 2.1s baseline), with a clear linguistic shift between turns 7 and 8.

Callback anomaly: high
Register shift: detected
Confidence markers: absent
Review urgency: high
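The two readouts above map a confidence score to a human-facing status. A hedged sketch of that mapping, assuming a single cutoff (the 0.8 value is an assumption for illustration only):

```python
# Illustrative mapping from an integrity confidence score to the two
# statuses shown above (0.94 -> Verified, 0.31 -> Review Recommended).

def readout(confidence: float, verified_cutoff: float = 0.8) -> str:
    """Translate a confidence score into a human-facing status."""
    return "Verified" if confidence >= verified_cutoff else "Review Recommended"

print(readout(0.94))  # Verified
print(readout(0.31))  # Review Recommended
```

Either way, the score is a signal for a reviewer, not an automatic verdict.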
How teams use the signal
  • Timing analysis against each candidate's own baseline, not an arbitrary fixed threshold
  • Linguistic coherence and behavioral consistency checks layered into the same report
  • Human review remains the final decision point when a candidate lands in the gray zone
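Timing analysis against a candidate's own baseline, rather than a fixed global threshold, can be sketched as a z-score over that candidate's earlier turns (function names and the cutoff are illustrative assumptions):

```python
# Minimal sketch of per-candidate baselining: compare a probe latency to
# the mean and spread of that candidate's own earlier turns, not a fixed
# global threshold. Names and the z-score cutoff are illustrative.
from statistics import mean, stdev

def latency_zscore(probe_latency_s: float, baseline_latencies_s: list[float]) -> float:
    """How many standard deviations the probe latency sits from baseline."""
    mu = mean(baseline_latencies_s)
    sigma = stdev(baseline_latencies_s)
    return (probe_latency_s - mu) / sigma

baseline = [2.0, 2.1, 2.3, 2.0]  # this candidate's normal turns
print(latency_zscore(8.4, baseline) > 3.0)  # far outside their own range -> True
```

A naturally slow, deliberate speaker produces a wide baseline and is not penalized for slow probe answers; the anomaly is relative, not absolute.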

All modalities, all plans

Chat

Keystroke timing and linguistic probes catch assisted writing in the channel where text risk is highest.

Voice + Video

Speech onset, prosody, and live contradiction checks add another layer of integrity signal in spoken assessments.

Capability | Chat | Voice | Video
Keystroke timing analysis
Linguistic register monitoring
Prosodic/speech onset analysis
Full probe suite (callback/pivot/paraphrase/contradiction)