Emergency Passports: Discovery
Client: Foreign, Commonwealth & Development Office (FCDO) | Agency: CYB
4 months | 5 people
Overview
As part of its Enabling Emergency Travel group, the FCDO gives British citizens the ability to apply for an Emergency Passport (EP), a large, high-volume service.
A British citizen can apply for this if they are overseas, need to travel urgently and cannot get a full British passport in time.
After successful feature builds in the previous sprint, I persuaded the client to let us conduct a discovery into the entire process, so we could look at the whole picture and identify the most inefficient parts of processing.
A new and improved service would mean people getting the help they need more quickly and easily, while also reducing the workload for government teams.
Objectives
To make the processing of EPs by internal staff more cost-effective and efficient by identifying data-driven, evidence-based changes.
The team + my role
Lead User Researcher (myself), Delivery Manager, Business Analyst, Data Analyst and Developer.
I planned and conducted the research abroad, analysed the insights, quantified time on task, prioritised the most inefficient parts of the process against their business criticality, and communicated the research results back to the client through research artefacts.
I collaborated closely with the Business and Data Analyst to calculate time on task, resource cost, and therefore the cost of the problems.
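To illustrate the mechanics of that calculation (the actual volumes, rates and figures are the client's, so everything below is invented), the cost of a problem reduces to time lost per application × annual volume × staff cost per hour. A minimal sketch, assuming hypothetical numbers:

```python
# A minimal sketch of the cost-of-problems calculation.
# All figures are illustrative assumptions, not FCDO data.

HOURLY_STAFF_COST = 32.0          # assumed fully loaded cost per staff hour (GBP)
APPLICATIONS_PER_YEAR = 40_000    # assumed annual EP application volume

# Hypothetical average minutes lost per application, per problem area
minutes_lost_per_application = {
    "manual re-keying between systems": 9.0,
    "unrelated calls and messages": 6.5,
    "repeat checks by senior management": 12.0,
}

for problem, minutes in minutes_lost_per_application.items():
    hours_per_year = minutes / 60 * APPLICATIONS_PER_YEAR
    annual_cost = hours_per_year * HOURLY_STAFF_COST
    print(f"{problem}: ~{hours_per_year:,.0f} staff-hours, ~£{annual_cost:,.0f}/year")
```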
Planning the Research
Identifying the problem
The government was on a mission to save costs wherever possible, and had identified EP processing as a major candidate for investigation. We were told the internal systems were inefficient (especially during crisis periods), but we did not know which parts, why, or to what extent. Additionally, we did not know how psychologically safe the staff processing the documents felt, how the operational side of the job worked, or how other departments affected their work. What did the whole problem space look like?
Deciding on research methods
To understand the whole process, I flew to Madrid with my team for four days to observe staff using the systems. This let me identify their online and offline actions, visit other departments and probe further through interviews. The methods I used were:
Quantitative research through identifying elapsed time between tasks
Contextual inquiry (observing the entire processing flow)
Interviews with staff, senior management and external consular staff
Identifying major problem areas through data triangulation
I used quantitative data from a performance funnel to pinpoint parts of the process with high elapsed time. Combined with qualitative data from the interviews, this allowed me to create a data-driven process flow to show my team and stakeholders.
It made a complex journey easier to understand, got everyone on the same page, and guided prioritisation, highlighting the time-consuming parts of the process and those most business-critical to improve.
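As an illustration of the elapsed-time analysis (the stage names, timestamps and event log below are invented; the real funnel came from the department's own systems), a sketch of how elapsed time between stages can be computed and compared:

```python
# A minimal sketch of the elapsed-time funnel analysis, assuming a
# hypothetical event log with one row per application per stage.
import pandas as pd

events = pd.DataFrame({
    "application_id": [1, 1, 1, 2, 2, 2],
    "stage": ["received", "checked", "issued"] * 2,
    "timestamp": pd.to_datetime([
        "2023-05-01 09:00", "2023-05-01 11:30", "2023-05-02 10:00",
        "2023-05-01 09:15", "2023-05-01 16:45", "2023-05-03 08:30",
    ]),
})

# Elapsed time between consecutive stages for each application
events = events.sort_values(["application_id", "timestamp"])
events["elapsed"] = events.groupby("application_id")["timestamp"].diff()

# Median time taken to reach each stage highlights the slowest steps
print(events.dropna().groupby("stage")["elapsed"].median())
```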
Preparing for contextual inquiry
Having had some understanding of the existing process from the previous scope of work, I created hypotheses to communicate our assumptions to the client, which gave everyone confidence that we understood the areas of focus. The hypotheses served as a springboard for the discussion guide. To minimise bias, I deliberately did not over-prepare, leaving room for exploration and probing.
As the client was joining us on our visit, one difficulty I foresaw was minimising the discussion of solutions and the influence of their presence. I therefore explained the purpose of our visit and the research goals before we started, and organised independent observations and time alone with staff so they could speak in confidence.
Conducting the Research
Observing application processing
During the first two sessions, I observed without probing, so as not to influence the staff, and captured their actions carefully. I also took note of their system setups and offline behaviours.
As the processing tasks were complicated and non-linear, I found it difficult to understand the logic and legalities behind them straight away. Staff also processed different user groups in different ways, adding to the complexity. By observing six sessions and exploring the whys in mini interviews after each one, the process became clearer.
Since I had the support of a business analyst, we took turns observing, taking notes and drawing a loose service map to better understand the real steps involved. Knowing the service to a basic level beforehand helped us ask better questions.
I also interviewed staff and senior management separately to understand deep-rooted problems and their success metrics. Additionally, I organised interviews with other departments that were indirectly part of the processing, to piece together how they influenced the main journey. This allowed us to paint a fuller picture and identify the interconnections.
At the end of each day, we had a team debrief to double check our understanding, compare notes and relay the information back to our team at home to speed up analysis afterwards.
Analysis + Prioritisation
What we discovered
Contrary to what we thought, staff didn't process EPs in the same way, even though the process was standardised. This was one big operational finding that underpinned the others.
There were multiple instances of wasted time and forgotten steps, caused by manual work across different systems and by answering messages and calls unrelated to staff members' own cases. This led to staff frustration.
Due to safeguarding, each EP application was double-checked, and sometimes triple-checked, by senior management, leading to rework and wasted time.
“It just takes so long to do something so simple and sometimes you just forget.” - ETD Agent
Challenges with analysis
The research findings were difficult to analyse and categorise on two levels:
There was a vast amount of information to unpack in just a few days, from many sources, so I pulled the team together for intensive affinity-mapping workshops.
There were conflicting views on what some things meant, so I put a list of clarifying questions together to ask the EP staff.
There was also pressure from stakeholders to see an initial set of possible solutions. I managed this by speaking to our team and categorising the findings into potential solution buckets (technical, policy-focused, operational and behavioural), so we could prioritise our work and understand the stakeholders' appetite for each bucket.
Matrix and blueprint
I created a prioritisation matrix after the research analysis and our discussion of solution buckets, to show the most crucial areas for improvement and the risks associated with each.
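To show how the matrix positioned problems (the problem names and 1-5 scores below are invented for illustration; real scores came from the research analysis), a minimal sketch:

```python
# A minimal sketch of prioritisation by time cost x business criticality.
# Scores are hypothetical assumptions, not the actual research findings.
problems = [
    # (problem, time cost 1-5, business criticality 1-5)
    ("manual re-keying between systems", 5, 4),
    ("repeat checks by senior management", 4, 5),
    ("unrelated calls and messages", 3, 2),
]

# Rank by combined impact: the highest-scoring problems lead the matrix
for name, time_cost, criticality in sorted(
    problems, key=lambda p: p[1] * p[2], reverse=True
):
    print(f"{name}: time cost={time_cost}, criticality={criticality}")
```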
The matrix was translated into a research report, which contained:
The what, why and where of the identified problems
Who the problems affected and the implications of it
Cost of the problems
Primary and secondary solutions + risks