When people can't check out:
Redesigning Accountability in a Secure Military Environment
Role: UX Designer
Context: Government / Defence (NDA‑safe)
OUTCOME
Check-in/out completion rate +36%
MODEL CHANGE
Escort officers made accountability owners in the system
In secure military environments, escort officers are held accountable for visitor compliance — but the system gave them no way to act on it. When visitors forgot to check out (and they always did), the existing safeguard was a midnight auto-checkout at 2359H. It didn't solve the problem. Visitors doing legitimate late-shift or overnight work got automatically checked out mid-task, generating false audit records. The fix was corrupting the data it was meant to protect.
The root issue wasn't UX friction. It was a broken accountability model: the system was designed around visitors, but the consequences of failure fell on someone else entirely.
I argued for removing the auto-checkout entirely and reframing the system around shared responsibility — giving escort officers real-time visibility and direct check-out authority. Stakeholders raised a fair concern: if visitors forget, what stops escort officers from forgetting too? The answer wasn't more automation. It was giving the right people the right tools, and making accountability explicit. Check-in/out completion rate increased by 36% — sessions that had previously gone unresolved were now being completed accurately. The midnight auto-checkout and the logbooks stayed in the drawer.
Reframed the problem from UX to accountability. The brief focused on improving check-out reminders for visitors. Field research showed the issue was a misassigned accountability model, not a reminder problem. I pushed for a model change, not a nudge.
Argued for removing the auto-checkout entirely. Rather than patching a safeguard that was producing false records, I proposed eliminating it and transferring ownership to escort officers. This met pushback, which I addressed by reframing who was already accountable in the physical world.
Held the line on timestamp integrity. Product owners (POs) requested the ability to edit timestamps for administrative convenience. I pushed back on audit-integrity grounds. The restriction stayed and was not revisited post-launch.
Flagged and formally documented the 10-minute accuracy trade-off. In critical zones, I identified a consistent gap between digital check-in and physical site arrival. I escalated this to POs and compliance stakeholders before launch. We accepted it formally — it raised no audit issues post-launch.
Accountability Without Visibility - and a Safeguard Making Things Worse
Escort officers are responsible for security and audit compliance at each site. Yet the existing system relied almost entirely on visitors remembering to check out — a dependency that consistently failed in practice.
Visitors forgetting to check out — at end-of-shift, when attention shifts, sessions were routinely abandoned.
Browser refreshes resetting session state — a page reload mid-task could wipe progress, forcing visitors to restart or abandon entirely.
Poor connectivity in restricted zones — signal degraded inside controlled spaces, making digital flows unreliable.
Manual fallback to physical logbooks — when digital flows broke, officers reverted to paper, fragmenting the audit record.
To compensate, a midnight auto-checkout at 2359H had been introduced as a safety net. In practice, it created a new class of error: visitors legitimately on-site at midnight — doing overnight or late-shift work — were automatically checked out mid-task. The system was generating false audit records in the same data it was designed to protect.
"I can't possibly chase everyone to make sure they check out."
— Escort Officer, field research session
Escort officers were accountable for outcomes they couldn't influence. The system treated them as passive observers. The result was inaccurate records, audit risk, and operational stress — not from negligence, but from a fundamentally misdesigned model.
Why the System Failed — and Why Reminders Wouldn’t Fix It
I conducted field studies across multiple military compounds and mapped end-to-end journeys for both visitors and escort officers. Three patterns emerged consistently.
Failures occurred at predictable moments: end-of-shift when attention shifts, in low-connectivity zones where digital flows degraded, and when escorts managed multiple contractors simultaneously. The pattern was consistent enough to be treated as a design constraint, not an edge case.
Implication
Any system that relies on visitors to remember check-out will fail in this environment. Responsibility must be distributed.
Escort officers physically controlled who entered, which rooms were accessed, and when work was completed. In the real world, they were already the accountability owners. The system gave them zero visibility or authority to match.
Implication
Escort officers weren't edge-case users. They were the primary users the system had been ignoring.
Camera phones were prohibited in some zones. Devices were shared. Signal degraded inside controlled spaces. Any solution that assumed visitor-owned, camera-enabled, always-connected devices would fail for the same structural reasons as the original system.
Implication
Design for zero-device moments as the baseline, not the exception.
Operational environments were segmented into Safe, Caution, and Critical zones, each with different device and security constraints:
Safe zones
Visitors and escort officers can carry camera-enabled phones.
Impact: Full digital flow possible for all parties
Caution zones
Escort officers may carry camera-enabled phones, while visitors must store theirs in lockers and use non-camera devices.
Impact: Visitor self-service degraded; escort-assisted flows essential
Critical zones
No phones are permitted for either visitors or escort officers.
Impact: All digital check-in must occur at entrance before proceeding
In critical zones, both parties store devices at entrance and proceed on foot to the worksite — creating a consistent gap of roughly 5 to 10 minutes between digital check-in and physical arrival, depending on location. This was flagged early, reviewed with POs and compliance stakeholders, and formally accepted: the gap was predictable, did not affect audit integrity, and eliminating it would introduce greater operational friction than the discrepancy itself.
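The zone model above can be captured as a simple policy table that flows query instead of hard-coding zone rules. This is an illustrative sketch, not the production implementation; the three zone names come from the case study, while every type and field name is an assumption:

```typescript
// Hypothetical policy table for the Safe / Caution / Critical zone model.
// Zone names are from the case study; all other names are illustrative.
type Zone = "safe" | "caution" | "critical";

interface DevicePolicy {
  visitorPhonesAllowed: boolean; // camera-enabled phones for visitors
  escortPhonesAllowed: boolean;  // camera-enabled phones for escort officers
  checkInLocation: "anywhere" | "entranceOnly";
}

const ZONE_POLICY: Record<Zone, DevicePolicy> = {
  safe:     { visitorPhonesAllowed: true,  escortPhonesAllowed: true,  checkInLocation: "anywhere" },
  caution:  { visitorPhonesAllowed: false, escortPhonesAllowed: true,  checkInLocation: "anywhere" },
  critical: { visitorPhonesAllowed: false, escortPhonesAllowed: false, checkInLocation: "entranceOnly" },
};

// Flows consult the table: e.g. visitor self-service only works where
// visitors keep their own phones, so caution/critical zones fall back
// to escort-assisted flows.
function visitorSelfServicePossible(zone: Zone): boolean {
  return ZONE_POLICY[zone].visitorPhonesAllowed;
}
```

Centralizing the constraints this way keeps "design for zero-device moments as the baseline" a data question rather than scattered conditionals.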
Rather than patching the existing model with more reminders or rules, I reframed the system around who actually holds responsibility in the physical world. The midnight auto-checkout was removed. In its place: escort officers gained real-time visibility and direct check-out authority.
Stakeholders raised a legitimate concern: without the auto-checkout safety net, what would prevent escort officers from forgetting too? The response was that this framed the wrong problem. Escort officers weren't passive bystanders prone to forgetting — they were already physically managing site access and visitor presence. The gap wasn't attention or motivation. It was tools. Giving them system visibility to match their real-world authority made the auto-checkout redundant, not just inconvenient.
Escort officers can now view all assigned visitors and their real-time check-in status, assist with individual or bulk check-out, and immediately identify unresolved or abandoned sessions. System visibility now matches real-world responsibility.
Visitor sessions survive browser refreshes, preventing one of the most common causes of silent check-out failure. Duplicate check-ins across locations are blocked. The midnight auto-checkout — which was generating false records — is gone.
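A minimal sketch of those two behaviours, assuming a key-value session store (localStorage in the browser, an in-memory map here for illustration). All names are assumptions, not the production code:

```typescript
// Illustrative sketch: sessions keyed by visitor survive a page refresh
// because they live in persistent storage, and a second check-in at a
// different location is rejected instead of silently creating a duplicate.

interface Session {
  visitorId: string;
  locationId: string;
  checkedInAt: string; // ISO timestamp, written once at check-in
}

// Same logic works against localStorage in the browser or a map in tests.
interface SessionStore {
  get(visitorId: string): Session | undefined;
  set(visitorId: string, session: Session): void;
  delete(visitorId: string): void;
}

class InMemoryStore implements SessionStore {
  private sessions = new Map<string, Session>();
  get(id: string) { return this.sessions.get(id); }
  set(id: string, s: Session) { this.sessions.set(id, s); }
  delete(id: string) { this.sessions.delete(id); }
}

function checkIn(store: SessionStore, visitorId: string, locationId: string): Session {
  const existing = store.get(visitorId);
  if (existing && existing.locationId !== locationId) {
    // Duplicate check-in across locations is blocked outright.
    throw new Error(`Visitor ${visitorId} is already checked in at ${existing.locationId}`);
  }
  // Re-opening the page at the same location restores the existing session
  // rather than starting over — the "survives refresh" behaviour.
  const session = existing ?? { visitorId, locationId, checkedInAt: new Date().toISOString() };
  store.set(visitorId, session);
  return session;
}
```

The design choice is that refresh-recovery and duplicate-blocking come from the same invariant: one persistent session per visitor.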
Actions are scoped to the entrance an escort officer is physically checked into. This prevents accidental cross-location manipulation while reinforcing the same physical boundaries that already exist in the real world.
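In code terms, that scoping rule reduces to a single authorization check. A hypothetical sketch under assumed names, mirroring the physical boundary in data:

```typescript
// Illustrative sketch: an escort officer's check-out authority is scoped
// to the entrance they are themselves checked into. All names assumed.

interface Officer {
  id: string;
  checkedInEntrance: string | null; // null = officer not checked in anywhere
}

function canCheckOutVisitor(officer: Officer, visitorEntrance: string): boolean {
  // An officer who is not checked in holds no authority anywhere.
  if (officer.checkedInEntrance === null) return false;
  // Authority matches physical presence: own entrance only, which rules
  // out accidental cross-location manipulation.
  return officer.checkedInEntrance === visitorEntrance;
}
```

Every escort-assisted action (individual or bulk check-out) would pass through a guard like this before touching a session.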
Task completion rate across all test scenarios
Ease of use ratings out of 10
Escort officers across moderated sessions
The escort-assisted check-out model was universally accepted. No friction was reported around the authority transfer: officers found the expanded control intuitive given their existing real-world role.
Retrospective login without prior check-in
Insight: Officers expected visibility to reflect real-time authority
Response: Only checked-in entrances shown; clear refresh prompt added
Uncertainty about own checked-in state
Insight: "I'm unsure if I'm checked in." — Escort officer
Response: Instructional banners and state messaging redesigned to surface next action clearly
Concerns around timestamp manipulation
Insight: Trust in audit trail required visible integrity signals
Response: Timestamps locked under visitor name; audit trail annotation clarified
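One way to implement locked timestamps is to freeze the record at creation while leaving an annotations list appendable, so corrections become visible audit-trail notes rather than silent edits. This is an illustrative sketch of that idea, not the audited system; all names are assumptions:

```typescript
// Illustrative sketch: the check-in timestamp is written once and frozen;
// later corrections append annotations instead of rewriting history.

interface AuditRecord {
  readonly visitorName: string;
  readonly checkedInAt: string; // immutable once created
  annotations: string[];        // appendable; timestamp never changes
}

function createRecord(visitorName: string, checkedInAt: string): AuditRecord {
  const record: AuditRecord = { visitorName, checkedInAt, annotations: [] };
  // Shallow freeze locks the timestamp and name fields; the annotations
  // array itself stays mutable, so notes can still be appended.
  return Object.freeze(record);
}
```

The visible integrity signal officers asked for falls out naturally: the record shows the original timestamp plus every annotation made against it.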
What I'd do differently:
The reframe from visitor-UX to accountability-model was the right call — but it came later than it should have. I began research within the existing brief frame rather than challenging it first. A single stakeholder conversation in week one, focused on who actually bears the consequence of system failure, would have sharpened the research questions significantly and potentially surfaced the auto-checkout issue earlier in the process.
I'd also push harder for a post-launch review with access to real audit log data. The 36% completion rate figure came from usability testing, not production. Understanding whether the escort-assisted model held under real operational load — shift handovers, high-visitor days, concurrent incidents — would tell me something the test environment couldn't replicate.