Post Implementation Review (PIR) Dashboard

Learn more about the Post Implementation Review dashboard.

Written by Cutover Documentation Team

A Post Implementation Review (PIR) is conducted after a project is completed and is used to evaluate whether objectives were met, how effectively the activity was run, and what lessons were learned. In this article, we'll cover how Post Implementation Review dashboards work.

Overview


A Post Implementation Review (PIR) is conducted after a project is completed, and the PIR dashboard becomes available once a runbook is completed.

The PIR dashboard provides you with a clear picture of how a runbook has performed. Teams can use it to evaluate whether objectives were met, how effectively a program was executed, and what actions are required to improve the next program or initiative.

It provides:

  • A runbook summary (what the activity or event was)

  • An execution summary (what went well, what didn't, providing a high-level view of possible improvements)

  • Resource allocation (what kinds of resources you used)

  • An option to add qualitative feedback in text fields (space for details about what happened and any other relevant information, for example, performance increased by 20%)

Who can use it?


Any user who wants to see the associated statistics after runbook completion can use it, although publishing the PIR dashboard is limited to runbook admins.

How do you access it?


When a runbook is completed, the runbook admin receives a notification to view and edit the relevant sections of the dashboard. Once the runbook admin has edited those sections and reviewed the report, they can publish it for others to view.

What problem does it solve?


The PIR dashboard allows you to quickly identify and evaluate the performance and benefits of your program for post-event analysis and future improvements. Examples include:

  • Cost reduction/savings

  • Improved efficiency (doing more with less)

  • Support for new business

  • Risk reduction/mitigation

  • Regulatory compliance

What are the components?


Overall Timing Summary


This shows the actual start and end times of the runbook along with its duration. The component also shows the variance between planned and actual times so you can capture any delays. If the runbook started or finished earlier than planned, the variance displays in green; if it started or finished later than planned, it displays in orange. Likewise, if the duration was shorter than planned the color is green, and if it was longer than planned the color is orange.
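
To make these rules concrete, here is a minimal sketch of how the variance indicator could be derived. This is illustrative only and not the Cutover API; all names and values are hypothetical.

```python
# Minimal sketch, not the Cutover API: deriving the green/orange indicator
# described above from hypothetical planned and actual timings.
from datetime import datetime, timedelta

def variance_colour(planned: datetime, actual: datetime) -> str:
    """Green if the actual time is earlier than (or equal to) planned, orange if later."""
    return "green" if actual <= planned else "orange"

def duration_colour(planned: timedelta, actual: timedelta) -> str:
    """Green if the actual duration is shorter than (or equal to) planned, orange if longer."""
    return "green" if actual <= planned else "orange"

# Hypothetical example values
planned_start = datetime(2024, 5, 1, 9, 0)
actual_start = datetime(2024, 5, 1, 9, 10)   # started 10 minutes late
planned_end = datetime(2024, 5, 1, 12, 0)
actual_end = datetime(2024, 5, 1, 11, 50)    # finished 10 minutes early

print(variance_colour(planned_start, actual_start))  # orange
print(variance_colour(planned_end, actual_end))      # green
print(duration_colour(planned_end - planned_start,   # green: 2h40m actual vs 3h planned
                      actual_end - actual_start))
```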

Runbook Summary


This is an editable text field that allows the runbook admin to capture information related to the runbook, such as what the activity entailed and other important information.

Execution Summary


This is an editable text field that allows the runbook admin to capture information related to the execution of the runbook, including what did and didn't go well, and a high-level summary indicating potential future improvements.

Value Realization


This component allows you to choose from a list of options indicating what the runbook enabled you to realize, such as cost reduction/savings, improved efficiency, risk reduction/mitigation, regulatory compliance, and others.

Performance vs Planned


This graph shows the planned duration of all tasks against the actual time it took to complete them, grouped by stream and team. It is a great way to see how well your teams performed and whether tasks were completed earlier or later than expected.
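
As an illustration of what this graph aggregates, the following minimal sketch (hypothetical task data, not the Cutover data model) sums planned versus actual task durations per stream.

```python
# Minimal sketch, not the Cutover data model: summing planned vs actual task
# durations (in minutes) per stream, which is what this graph visualizes.
from collections import defaultdict

tasks = [
    {"stream": "Networking", "planned": 30, "actual": 25},
    {"stream": "Networking", "planned": 20, "actual": 35},
    {"stream": "Database",   "planned": 60, "actual": 55},
]

totals = defaultdict(lambda: {"planned": 0, "actual": 0})
for task in tasks:
    totals[task["stream"]]["planned"] += task["planned"]
    totals[task["stream"]]["actual"] += task["actual"]

for stream, durations in totals.items():
    print(stream, durations)
# Networking {'planned': 50, 'actual': 60}
# Database {'planned': 60, 'actual': 55}
```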

Lateness


The Lateness component provides a breakdown of all the tasks in the runbook by how late they were against the planned schedule. The lateness threshold is customizable, so you can apply a broader or narrower definition of lateness.
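
The following minimal sketch (hypothetical task data, not the Cutover data model) shows how tasks could be classified as late against a configurable threshold in minutes.

```python
# Minimal sketch, not the Cutover data model: classifying tasks as late
# against a configurable lateness threshold (in minutes).
from datetime import datetime

tasks = [
    {"name": "Task A", "planned_start": datetime(2024, 5, 1, 9, 0),
     "actual_start": datetime(2024, 5, 1, 9, 3)},
    {"name": "Task B", "planned_start": datetime(2024, 5, 1, 9, 30),
     "actual_start": datetime(2024, 5, 1, 10, 15)},
]

def late_tasks(tasks, threshold_minutes=5):
    """Return (name, minutes late) for tasks that started more than the threshold after plan."""
    late = []
    for task in tasks:
        lateness = (task["actual_start"] - task["planned_start"]).total_seconds() / 60
        if lateness > threshold_minutes:
            late.append((task["name"], lateness))
    return late

print(late_tasks(tasks, threshold_minutes=5))  # [('Task B', 45.0)]
```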

User Participation


The User Participation component captures the involvement of teams and participants in a runbook:

  • Total involved: The total number of users who viewed the runbook or performed an action in it.

  • Viewed runbook: Users who viewed the runbook but did not perform an action.

  • Performed an action: Users who performed an action in the runbook, for example, starting the runbook.

Total Wastage Statistics


In Cutover, wastage on a task is defined as the difference between the time the task became startable and the time it actually started. Three statistics build on this definition (a worked sketch follows the list below):

  1. Total Wastage is the wastage summed across all tasks (the total amount of time that could have been saved if every task had been started as soon as it became startable).

  2. Average Wastage is Total Wastage / Number of tasks (the average amount of time a task spent in a startable state before starting).

  3. Wastage as a percentage is (Total Wastage / Total Duration) x 100 (the percentage of the total duration of the runbook that can be attributed to wastage).
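
A minimal sketch of these three calculations, using hypothetical per-task figures in minutes (not the Cutover data model):

```python
# Minimal sketch, using hypothetical per-task figures in minutes
# (not the Cutover data model), of the three wastage statistics.
tasks = [
    {"wastage": 4,  "duration": 30},   # started 4 minutes after becoming startable
    {"wastage": 20, "duration": 45},
    {"wastage": 0,  "duration": 15},
]
total_runbook_duration = 120  # total runbook duration in minutes (example value)

total_wastage = sum(task["wastage"] for task in tasks)                 # 24 minutes
average_wastage = total_wastage / len(tasks)                           # 8.0 minutes per task
wastage_percentage = (total_wastage / total_runbook_duration) * 100    # 20.0%

print(total_wastage, average_wastage, wastage_percentage)
```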

Wastage by Stream


This component shows the wastage on the runbook grouped by stream or team. Hovering over a bar shows the number of tasks that belong to that group.

Total Wastage


This component groups tasks into wastage intervals, showing the number of tasks that wasted that amount of time by not starting as soon as they became startable. The intervals are the same as in the Wastage component: 0-5, 6-30, 31-60, and 60+ minutes.
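
For illustration, this minimal sketch (hypothetical wastage values, not the Cutover data model) buckets per-task wastage into the same intervals:

```python
# Minimal sketch, with hypothetical wastage values in minutes
# (not the Cutover data model): bucketing per-task wastage into the intervals above.
from collections import Counter

wastage_minutes = [2, 4, 12, 27, 45, 75, 0, 61]

def interval(wastage):
    if wastage <= 5:
        return "0-5"
    if wastage <= 30:
        return "6-30"
    if wastage <= 60:
        return "31-60"
    return "60+"

print(Counter(interval(w) for w in wastage_minutes))
# Counter({'0-5': 3, '6-30': 2, '60+': 2, '31-60': 1})
```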

Planned vs Actual duration


This burndown chart shows the duration of time remaining for tasks on the critical path of the runbook, with the remaining critical-path duration on the Y axis and time on the X axis. As the live run progresses, the remaining duration decreases as critical path tasks are completed.

Setup Instructions


A default PIR dashboard is available in all instances. Users with the appropriate permissions can configure existing components and add or delete components. Each account has one PIR dashboard; you cannot set up multiple PIR dashboards for an account.
