Transparency

How confidence scoring works

Your skill score only matters if it's trustworthy. Here's exactly how we measure confidence in your results — no black boxes, no hidden algorithms. Everything is laid out below.

What We Track

During an assessment, the browser silently records six behavioral signals. These are never used to grade your answers — they only feed into the confidence layer that sits alongside your score.

Paste events · −3 pts each · capped at −30

How many times text was pasted into the answer field. A few are fine; pasting every answer is a red flag.

Tab switches · −2 pts each (after the 1st) · capped at −20

Switching away from the assessment tab. The first switch is free — we know you might check the time.

Focus loss duration · −1 pt per 10 s (after 30 s) · capped at −15

Total time the browser window lost focus. Short breaks are fine; extended absence suggests external help.

Answer timing · −3 pts each · capped at −15

Answers submitted in under 20% of the expected time: responses implausibly fast for the question's complexity.

Answer appeared at once · −5 pts each · capped at −20

The full answer materialized instantly rather than being typed character by character — classic paste or autofill.

Typing cadence · −2 pts each · capped at −10

Typing speed with nearly zero variance (robotic uniformity). Natural human typing has rhythm fluctuations.
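Taken together, the signal cards above amount to a simple tally: each event accrues its per-event penalty, and each category's total is clamped at its cap. Here is a minimal TypeScript sketch of that logic — the rates and caps are the ones listed above, but the structure and function names are illustrative, not Vetted's actual code:

```typescript
interface SignalRule {
  perEvent: number;    // penalty per event (negative)
  cap: number;         // most negative total allowed for this category
  freeEvents?: number; // events ignored before penalties start
}

// Rates and caps from the signal cards above (focus loss handled separately).
const SIGNALS: Record<string, SignalRule> = {
  pasteEvents:    { perEvent: -3, cap: -30 },
  tabSwitches:    { perEvent: -2, cap: -20, freeEvents: 1 }, // first switch is free
  answerTiming:   { perEvent: -3, cap: -15 },
  appearedAtOnce: { perEvent: -5, cap: -20 },
  typingCadence:  { perEvent: -2, cap: -10 },
};

function categoryPenalty(name: string, count: number): number {
  const { perEvent, cap, freeEvents = 0 } = SIGNALS[name];
  const billable = Math.max(0, count - freeEvents);
  // Penalties are negative, so the cap acts as a floor.
  return Math.max(perEvent * billable, cap);
}

// Focus loss is duration-based: −1 pt per 10 s beyond the first 30 s, floor −15.
function focusLossPenalty(totalSeconds: number): number {
  const excess = Math.max(0, totalSeconds - 30);
  return Math.max(-Math.floor(excess / 10), -15);
}
```

For example, 20 paste events would nominally cost −60 points, but the category cap limits the damage to −30.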

How Scoring Works

Every assessment starts with a confidence score of 100. Penalties are subtracted based on the signals above. The final score maps to a level:

85–100 · High
Strong evidence the work is genuinely yours.

60–84 · Medium
Some signals flagged — still largely credible.

40–59 · Low
Multiple flags suggest external assistance.

0–39 · Review
Significant evidence of outside help.
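The band thresholds above translate directly into a lookup. A sketch (the function name is mine; the thresholds are from the table):

```typescript
type Level = "High" | "Medium" | "Low" | "Review";

// Maps a 0–100 confidence score to its level band, per the thresholds above.
function confidenceLevel(score: number): Level {
  if (score >= 85) return "High";
  if (score >= 60) return "Medium";
  if (score >= 40) return "Low";
  return "Review";
}
```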

Penalty caps ensure that a single category of behavior can't tank your entire confidence score. The worst possible penalty from any one signal type is capped — the full breakdown is shown in each signal card above.

Mobile Adjustments

Taking an assessment on a phone or tablet is a different experience. Auto-correct triggers paste events, notification banners cause focus loss, and typing cadence is inherently different on a touchscreen. We account for this:

  • Paste penalty reduced from −3 to −1 per event (autocorrect and predictive text often register as pastes).
  • Tab switch penalty reduced from −2 to −1 per event (mobile OS interruptions are more common).
  • Focus loss thresholds remain the same — extended absence is still meaningful on any device.

Mobile detection is based on user-agent and viewport width (<768px).
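A sketch of how that detection and rate adjustment could look in TypeScript. The breakpoint (768px) and the reduced rates are from the list above; the user-agent regex is an assumption, and since the text doesn't specify whether the two conditions are combined with AND or OR, this sketch treats either as sufficient:

```typescript
// Heuristic mobile detection: user-agent sniffing plus a viewport breakpoint.
// The regex here is illustrative, not Vetted's actual pattern.
function isMobile(userAgent: string, viewportWidth: number): boolean {
  const mobileUA = /Android|iPhone|iPad|iPod|Mobile/i.test(userAgent);
  return mobileUA || viewportWidth < 768;
}

// Mobile-adjusted per-event rates (paste −3 → −1, tab switch −2 → −1).
function pasteRate(mobile: boolean): number {
  return mobile ? -1 : -3;
}
function tabSwitchRate(mobile: boolean): number {
  return mobile ? -1 : -2;
}
```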

Server-Side Validation

Client-side signals can be spoofed. Our server independently validates the plausibility of the behavioral data before finalizing the confidence score.

Time consistency

Total client-reported answer time is compared against the server's wall-clock elapsed time since the assessment started. If the client claims more time than actually passed, a warning is flagged.

Impossibly fast text answers

Text answers (short answer, fix-code) longer than 30 characters but completed in under 2 seconds are automatically marked as "appeared at once" regardless of what the client reported.

Identical timing patterns

If all answers in an assessment have exactly the same time_taken_seconds, the server flags it — real humans don't answer every question in precisely the same duration.

Server-side checks can only reduce your confidence score — never increase it. They act as a safety net, not a second chance.
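The three checks above can be sketched as a single plausibility pass over the submitted answers. The thresholds (2 seconds, 30 characters) come from the text; the data shapes and function name are illustrative assumptions:

```typescript
interface Answer {
  text: string;
  timeTakenSeconds: number; // client-reported (time_taken_seconds)
  isTextAnswer: boolean;    // short-answer / fix-code question types
}

interface ServerFlags {
  timeInconsistent: boolean; // client claimed more time than actually elapsed
  appearedAtOnce: number;    // answers re-marked as "appeared at once"
  identicalTimings: boolean; // every answer took exactly the same duration
}

// Mirrors the three server-side checks described above.
function validate(answers: Answer[], serverElapsedSeconds: number): ServerFlags {
  const claimed = answers.reduce((sum, a) => sum + a.timeTakenSeconds, 0);
  const appearedAtOnce = answers.filter(
    (a) => a.isTextAnswer && a.text.length > 30 && a.timeTakenSeconds < 2
  ).length;
  const identicalTimings =
    answers.length > 1 &&
    answers.every((a) => a.timeTakenSeconds === answers[0].timeTakenSeconds);
  return {
    timeInconsistent: claimed > serverElapsedSeconds,
    appearedAtOnce,
    identicalTimings,
  };
}
```

Each flag only ever lowers the confidence score, consistent with the safety-net role described above.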

Known Limitations

No integrity system is perfect. We believe in being upfront about what ours cannot detect:

  • A second device (phone, tablet, laptop) used to look up answers — the browser has no visibility into other screens.
  • Screen mirroring or screen-sharing to a collaborator — these don't trigger tab-switch or focus-loss events.
  • Physical notes or reference material on your desk — no webcam, no proctoring.
  • Someone dictating answers to you verbally while you type naturally.
  • Sophisticated browser extensions that suppress or fake focus/paste events (though server-side timing checks partially mitigate this).

Vetted is designed for honest self-assessment — not high-stakes proctored exams. The confidence layer exists to make scores more meaningful, not to catch cheaters. If you genuinely want to know where you stand, the system works. If you want to game it, you can — but you're only fooling yourself.

Ready to see your real score?

Pick a topic and find out where you actually stand.

Browse topics