How confidence scoring works
Your skill score only matters if it's trustworthy. Here's exactly how we verify that — no black boxes, no hidden algorithms. Everything is laid out below.
What We Track
During an assessment, the browser silently records six behavioral signals. These are never used to grade your answers — they only feed into the confidence layer that sits alongside your score.
- How many times text was pasted into the answer field. A few is fine; pasting every answer is a red flag.
- Switching away from the assessment tab. The first switch is free — we know you might check the time.
- Total time the browser window lost focus. Short breaks are fine; extended absence suggests external help.
- Answers submitted in under 20% of the expected time for a question's complexity. Impossibly fast responses.
- The full answer materializing instantly rather than being typed character by character — classic paste or autofill.
- Typing speed with nearly zero variance (robotic uniformity). Natural human typing has rhythm fluctuations.
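The six signals above can be pictured as a single record the client accumulates per assessment. This is a minimal sketch; the field names and shapes are illustrative, not the actual client identifiers.

```typescript
// Hypothetical shape of the per-assessment signal record.
// Field names are illustrative, not the real client's.
interface BehavioralSignals {
  pasteCount: number;       // times text was pasted into the answer field
  tabSwitches: number;      // times the user switched away from the tab
  focusLossSeconds: number; // total time the window was out of focus
  rapidAnswers: number;     // answers submitted in under 20% of expected time
  instantAnswers: number;   // answers that appeared all at once (paste/autofill)
  roboticTyping: boolean;   // typing cadence with near-zero variance
}

// Example: one paste, a short focus loss, everything else clean.
const example: BehavioralSignals = {
  pasteCount: 1,
  tabSwitches: 0,
  focusLossSeconds: 12,
  rapidAnswers: 0,
  instantAnswers: 0,
  roboticTyping: false,
};
```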
How Scoring Works
Every assessment starts with a confidence score of 100. Penalties are subtracted based on the signals above. The final score maps to a level:
- Strong evidence the work is genuinely yours.
- Some signals flagged — still largely credible.
- Multiple flags suggest external assistance.
- Significant evidence of outside help.
Penalty caps ensure that a single category of behavior can't tank your entire confidence score. The worst possible score from any one signal type is capped — the full breakdown is shown in each signal card above.
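As a rough sketch of the subtract-and-cap model: the per-event values for paste (−3) and tab switch (−2) are the desktop defaults mentioned under Mobile Adjustments, the "first switch is free" rule comes from the signal list above, and the cap value here is purely illustrative (the real caps are shown in the signal cards).

```typescript
const PASTE_PENALTY = 3;      // per paste event (desktop default)
const TAB_SWITCH_PENALTY = 2; // per tab switch after the first (desktop default)
const PER_SIGNAL_CAP = 15;    // hypothetical cap; real caps are in the signal cards

// Start at 100, subtract per-event penalties, and cap the damage
// any single signal category can do.
function confidenceScore(pasteCount: number, tabSwitches: number): number {
  const pastePenalty = Math.min(pasteCount * PASTE_PENALTY, PER_SIGNAL_CAP);
  // The first tab switch is free.
  const billableSwitches = Math.max(0, tabSwitches - 1);
  const tabPenalty = Math.min(billableSwitches * TAB_SWITCH_PENALTY, PER_SIGNAL_CAP);
  return Math.max(0, 100 - pastePenalty - tabPenalty);
}
```

With this cap, even twenty pastes cost the capped amount rather than 60 points, so one bad habit can't zero out the whole score.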
Mobile Adjustments
Taking an assessment on a phone or tablet is a different experience. Auto-correct triggers paste events, notification banners cause focus loss, and typing cadence is inherently different on a touchscreen. We account for this:
- Paste penalty reduced from −3 to −1 per event (autocorrect and predictive text often register as pastes).
- Tab switch penalty reduced from −2 to −1 per event (mobile OS interruptions are more common).
- Focus loss thresholds remain the same — extended absence is still meaningful on any device.
Mobile detection is based on user-agent and viewport width (<768px).
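A minimal sketch of that heuristic, assuming either condition (a mobile user-agent match or a viewport under 768px) is enough to count as mobile; the regex and the either/or logic are assumptions. In the browser you would pass `navigator.userAgent` and `window.innerWidth`.

```typescript
// Treat the session as mobile if the user-agent looks mobile
// or the viewport is narrower than 768px.
function isMobile(userAgent: string, viewportWidth: number): boolean {
  const mobileUA = /Android|iPhone|iPad|Mobile/i.test(userAgent);
  return mobileUA || viewportWidth < 768;
}

// Per-event penalties shrink on mobile, per the adjustments listed above.
function pastePenaltyPerEvent(mobile: boolean): number {
  return mobile ? 1 : 3;
}
function tabSwitchPenaltyPerEvent(mobile: boolean): number {
  return mobile ? 1 : 2;
}
```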
Server-Side Validation
Client-side signals can be spoofed. Our server independently validates the plausibility of the behavioral data before finalizing the confidence score.
Total client-reported answer time is compared against the server's wall-clock elapsed time since the assessment started. If the client claims more time than actually passed, a warning is flagged.
Text answers (short answer, fix-code) longer than 30 characters but completed in under 2 seconds are automatically marked as "appeared at once" regardless of what the client reported.
If all answers in an assessment have exactly the same time_taken_seconds, the server flags it — real humans don't answer every question in precisely the same duration.
Server-side checks can only reduce your confidence score — never increase it. They act as a safety net, not a second chance.
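The three plausibility checks can be sketched as one validation pass. The `Answer` shape and flag names are illustrative; the 30-character and 2-second thresholds come from the description above.

```typescript
interface Answer {
  text: string;
  timeTakenSeconds: number; // client-reported duration for this answer
}

function serverFlags(answers: Answer[], wallClockSeconds: number): string[] {
  const flags: string[] = [];

  // 1. Client-reported total time cannot exceed real elapsed time.
  const reported = answers.reduce((sum, a) => sum + a.timeTakenSeconds, 0);
  if (reported > wallClockSeconds) flags.push("time-mismatch");

  // 2. Long text answers completed implausibly fast are treated as
  //    "appeared at once" regardless of what the client reported.
  if (answers.some(a => a.text.length > 30 && a.timeTakenSeconds < 2)) {
    flags.push("appeared-at-once");
  }

  // 3. Identical timings on every answer are not human.
  //    (Guarding length > 1 so a single answer can't trivially trigger it.)
  if (answers.length > 1 &&
      answers.every(a => a.timeTakenSeconds === answers[0].timeTakenSeconds)) {
    flags.push("uniform-timing");
  }

  return flags;
}
```

Any flag returned here can only subtract from the client-computed score, matching the "safety net, not a second chance" rule.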
Known Limitations
No integrity system is perfect. We believe in being upfront about what ours cannot detect:
- A second device (phone, tablet, laptop) used to look up answers — the browser has no visibility into other screens.
- Screen mirroring or screen-sharing to a collaborator — these don't trigger tab-switch or focus-loss events.
- Physical notes or reference material on your desk — no webcam, no proctoring.
- Someone dictating answers to you verbally while you type naturally.
- Sophisticated browser extensions that suppress or fake focus/paste events (though server-side timing checks partially mitigate this).
Vetted is designed for honest self-assessment — not high-stakes proctored exams. The confidence layer exists to make scores more meaningful, not to catch cheaters. If you genuinely want to know where you stand, the system works. If you want to game it, you can — but you're only fooling yourself.