KPIs are critical to the success of any organisation. They are a measure of progress against goals. However, not every measure is a KPI. In the words of Albert Einstein, "Not everything that can be counted counts, and not everything that counts can be counted." Good performance measures allow comparisons to be made that enable performance improvements.
By measuring what really matters, management can focus on those attributes and improve performance. Care should be taken that what the organisation calls performance measures are not, in fact, milestones.
Any agency with a policy, delivery, monitoring or sector oversight role needs a robust performance measurement framework to know whether its activities are effective and efficient. By measuring the performance of your agency or sector, you can:
To select and prioritise the outputs that have the greatest impact towards desired outcomes, your agency needs to evaluate what impact its activities are having. An agency uses its resources to provide services or undertakes activities - its outputs - with the objective of delivering specific outcomes for New Zealanders. To assess whether your agency's outputs are contributing to the achievement of desired outcomes, you need to measure the difference that your agency is making - the impact of its interventions.
To assist managers, planners and analysts in State sector agencies to develop robust performance measurement and reporting frameworks, Central Agencies have prepared this new resource: Performance Measurement: Advice and examples on how to develop effective frameworks. The guide focuses on how to make performance measurement output-, impact- and outcome-orientated, so that results are easily understood and visible to senior managers and other stakeholders. The guide explains the key steps in the performance measurement cycle, why each step is important, and what activities are undertaken under each step.
The guide has six modules. Each module covers a key element of the performance measurement cycle. How far your agency needs to delve into each module will depend on how advanced your current performance measurement systems are.
Here is an example of how performance measurement helps agencies track the progress they are making towards the achievement of their outcomes, and how it can inform decision-making. The table below shows various indicators relevant to the land transport sector, and compares them to the investment in safety funding in that sector. The indicators show that increased safety funding in the sector has had a number of positive impacts for New Zealanders.
Road safety performance measures
If you are charged with developing and implementing a performance measurement framework for your agency or sector, here is a checklist of the key aspects to consider:
Performance measurement is a precursor to effective and informed management. Performance measurement is crucial to agencies with policy, delivery, monitoring and/or sector oversight roles. It enables agencies and sectors to chart the progress they are making in improving outputs, outcomes and value-for-money, and to take corrective action if required. Several years after Managing for Outcomes was introduced, and nearly twenty years after the output management regime was put in place, significant progress has been made in measuring results. Nonetheless, information gaps still exist in many areas[1], and different agencies are at very different stages of developing an integrated performance measurement capability. Therefore, Central Agencies, with the support of the Office of the Auditor-General (OAG)[2], have developed this guide to help and encourage agencies to critically assess their progress to date, and to map out a clear path forward. The guide aims to help agencies develop stronger, more robust performance measurement and reporting capabilities. It is intended mainly for Departments and Crown Entities, but other entities in the State sector may also find it useful. The guide is intended to assist planners, managers, analysts and those involved in measuring performance within State sector agencies on how to assess delivery and progress in achieving core outcomes, on an ongoing basis. The guide will help State sector agencies to:
This guide builds on guidance already issued on measuring State sector performance, including:
This guide should be used in conjunction with the Strategy Primer[7], which stresses the role of measurement in managing, developing and monitoring the performance of major strategies.
[1] Most commonly in showing effectiveness, value-for-money, and technical and allocative efficiency.
[2] OAG was consulted in the drafting of this document and acted as peer reviewer.
[3] See Central Agencies' guidance on improving accountability information at: https://psi.govt.nz/iai/default.aspx
[4] See the Managing for Outcomes results guidance at: publicservice.govt.nz/mfr-mfo-guidance
[5] See the detailed Pathfinder guidance at: http://fin.publicservice.govt.nz/pathfinder
[6] See the guidance developed by the OAG at: www.oag.govt.nz/2002/reporting/docs/reporting.pdf. See also The Auditor-General's observations on the quality of performance reporting at: www.oag.govt.nz/2008/performance-reporting/docs/performance-reporting.pdf
[7] See the Strategy Primer at: www.treasury.govt.nz/publications/guidance/strategy/strategyprimer.pdf
A robust performance measurement and reporting system is needed to comply with the Public Finance Act 1989 and the Crown Entities Act 2004, as relevant to different agencies. Both acts require State sector agencies to identify and report on performance[8]. This guidance outlines the key steps required to build and run an integrated performance measurement process that will help agencies comply with the legal requirements of these two acts. Progress in the above areas will also help agencies and sectors to show progress against the Development Goals for the State Services[9]. Two of the six Development Goals, in particular, can only advance through the adoption of good performance measurement practices:
The Value-for-Money Goal: Good performance measurement capabilities and measures allow agencies to show value-for-money. Measures help managers to improve the delivery, efficiency and cost-effectiveness of policy and operational outputs, and show progress against outcome goals. Measures are required for both internal monitoring and statutory reporting.
The Coordinated State Agencies Goal: Enhanced performance measurement improves coordination and collective results by informing decision-making, clarifying shared outcomes and production targets (outputs etc.), providing the feedback needed to adjust strategies and plans, and creating baselines against which progress is tracked.
[8] See sections 19 and 40 of the Public Finance Act 1989 (reprinted 16 September 2005) and sections 139-149 of the Crown Entities Act 2004.
[9] For a full list, see: publicservice.govt.nz/development-goals
This guide focuses on helping you to make performance measurement output and outcome orientated, so that results are easily understood and visible to senior managers and other monitoring agents. Most importantly, good measurement frameworks track achievement against key priorities. Throughout this guide, efficiently-delivered outputs are seen (in your own intervention logic) as precursors of both improved outcomes and value-for-money. Ex ante specification of results and performance measures is the 'foundation stone' of performance monitoring. Ex post reporting focuses on areas where expenditure, effort and expectations are significant. In particular, this guide focuses on the following areas:
This guide should be used on an ongoing basis. Leaders learn in an iterative way, by reviewing performance across successive planning, delivery and monitoring cycles. The guide should be used at the beginning of each planning cycle, and as your performance measurement systems develop, to help you define ministerial and management needs, and to improve on past reports. External reporting focuses only on outcome objectives and outputs, at an aggregate level. This guide will help you define outcomes, impacts and outputs. These will help you to identify, specify and report on key measures such as the quantity, quality and coverage of major outputs, efficiency, impact and cost-effectiveness. While external accountability documents are important, the most intensive demand for performance information should come from Ministers or managers responsible for a Vote or sector. The guide should be used to ensure they get the 'rich' information needed to make good decisions. This information is likely to be more detailed, and disaggregated, than is reported externally. However, it is critical that the same body of data that is used for internal decision-making be used for any external reporting.
This guide is broken down into six modules. They are not prescriptive. Agencies can delve into the modules to suit their needs and operating environment. The content of each module is outlined below.
Real examples from across the State sector are used to illustrate aspects of good practice. Many of these illustrations show a work in progress, not the 'end state' that is sought. If you wish to discuss the details of the illustrations, please contact the relevant agency. This guide also contains three appendices to provide further information to readers, which can be used as part of the document or taken as useful stand-alone references.
This module explains why performance measurement is critical within the State sector, and what can be gained from effective measurement. It helps you define 'the results that matter most' to performance, and scope out a monitoring framework and sets of measures to track progress. Measurement is a core activity for any agency or sector that is focused on delivering results for New Zealanders. Performance is measured primarily to allow you to maximise the results that are meaningful to your organisation by adjusting what you produce, using the capabilities and funding available. Transparent reporting to Parliament is a statutory obligation. You also have obligations to report transparently to Ministers and stakeholder groups. To fulfil these obligations, agencies usually use a sub-set of the detailed information they need to run their business well. It is good practice, therefore, to build performance measures into regular planning and decision-making as well as using them to report to Ministers and Parliament.
Performance measurement is an iterative process that must be repeated on an ongoing basis. Figure 1 outlines the iterative steps in the performance measurement cycle. As understanding grows, measures and reporting improve, and are used to improve the quality and reach of the services provided. Improved services, in turn, will provide greater impact and value-for-money. It is important to remember that all steps in the iterative process outlined in Figure 1 must be completed to measure performance well.
Delivered well, performance measurement supports Ministers and State sector leaders in at least three ways:
State sector performance should be assessed and measured at three levels as illustrated in Figure 2. The three levels are outcomes, intermediate outcomes or impacts, and outputs. Information is typically required to be collected at each level on an ongoing basis in order to track progress and monitor trends over time. At all three levels, results must be linked back to resources (funding and capabilities) so that internal and external decision-makers can assess value-for-money. Figure 2: The three levels of measurement
Outcomes
Outcomes set out the broad goals your agency is seeking to achieve. They are measured to confirm aggregate results and enhance decision-making. Outcomes flow directly from ministerial requirements and priorities. The first step in developing an outcome-based decision-making system is to identify the 'vital few' outcomes that are priorities for your agency or sector. These should be the outcomes that your agency has the most direct influence in achieving. Outcomes are firmed up through strategic planning and direction setting. Outcomes set out your long run priorities by addressing the question: "What are our goals for New Zealand and New Zealanders?" Outcomes are typically tracked using outcome indicators. These are used as proxy measures of your agency's performance because high level outcomes are seldom directly attributable to the activities of one specific agency alone. External factors[10] frequently drive changes in outcome indicators. Multiple agencies may also affect the same outcome. For instance, border security is shared between Customs, Labour (immigration), and Agriculture and Forestry.
Intermediate outcomes or impacts
Your agency must focus on this crucial middle layer of the performance measurement framework, because it allows you to articulate the effect that your agency's services and interventions are having on New Zealanders. This level is called 'intermediate outcomes' or 'impacts'. Understanding it will allow your agency to determine what difference it is making through the services it is providing with its outputs, and to discern progress towards the achievement of its outcomes. Ultimately, it will enable your agency to interpret what impact its policies are having. In other words it will help you answer: "What difference are we making for New Zealanders?" Intermediate outcomes/impacts can be measured through the use of impact measures or indicators.
These measures of intermediate outcomes are crucial to the performance measurement process because they underpin performance-based management. Specifically, they:
Outputs
In the State sector, boards and chief executives (and their managers and staff) control the means of production and are accountable for producing outputs. Outputs represent the means your agency (or sector) uses to create impact. Outputs are the services your agency or sector delivers through their interventions, such as implementing policy, running regulatory or control systems, and delivering core services in defence, education, health, housing, justice, welfare and so on. Output measures address questions such as: "What service was provided?"; "Who got it?" and "Was delivery efficient and effective?"
Linking measurement to resources, planning and delivery
Figure 3 shows how the resources your agency has will be used to deliver outputs, which will flow into impacts and outcomes. It also shows how outcomes form the basis of your agency's planning, and how impacts and outputs flow from defining outcomes. In terms of delivery, the process works from the bottom up: your agency delivers outputs to New Zealanders; these outputs will have impact; over time, these impacts will contribute towards the achievement of outcomes. Figure 3 illustrates how resources link to the three other levels by using a simple land transport sector example.
[10] Such as public attitudes, social and behavioural evolution, alternative providers (such as NGOs), and markets.
Progress achieved at the three levels of the framework can be gauged through assessing results. 'Results' refer to what has been achieved at the intermediate outcome/impact and outcome levels. However, they can also cover what has been achieved through the delivery of outputs. Results should be directly attributable to what the agency has undertaken. This is in contrast with outcome indicators, which may not be so directly attributable to what your agency does. Table 1 presents results, impacts and indicators for outputs, intermediate outcomes and outcomes using the same land transport sector example as in Figure 3. Table 1: Outputs, intermediate outcomes, outcomes, results and relevant indicators
There are many ways of showing impact. All try to reduce the chance that changes in outcome indicators are due to something other than the outputs you delivered. Common ways of showing attribution to your outputs[11] include:
[11] A range of techniques is presented in: www.treasury.govt.nz/publications/guidance/performance/demonstrating/13.htm#_toc3
Focusing on developing measures at the three levels (outcome, intermediate outcome/impact and output) permits a building block approach to performance measurement. At each level, your agency needs to measure its performance in a constructive and auditable fashion, and to map progress back to the capability and funding you have invested in the area. Having made some progress, your agency can then link its resources and interventions to the impact they are having, and then to changes in core outcomes. This allows management to make decisions based on an assessment of the value-for-money provided. One word of caution: multiple outputs often contribute to a given intermediate outcome or outcome, as represented by the multiple arrows in Figure 3. If this is the case, the timing of funding increases can still be contrasted with changes in intermediate outcomes or outcomes. If real[12] increases in funding do not improve outcomes, value-for-money should be questioned. Every agency operates in a unique environment with differing services, priorities, budgets and stakeholder relationships. Therefore, each agency needs to adapt this framework to suit its own context, but retain the overall approach of demonstrating the links between resources, outputs, intermediate outcomes and outcomes. The ultimate product sought through your measurement framework is a clear, evidence-based 'performance story' that links resources and outputs to positive results.
[12] i.e. after allowing for inflation, by adjusting nominal prices using an appropriate price index.
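The inflation adjustment described in footnote 12 can be sketched in a few lines: before contrasting funding increases with changes in outcomes, deflate nominal funding to a common base year using a price index. The funding figures and index values below are illustrative assumptions, not drawn from this guide.

```python
# Sketch: converting nominal funding to real (base-year) terms using a
# price index, so funding changes can be compared with outcome trends.
# All figures are illustrative only.

nominal_funding = {2005: 100.0, 2006: 110.0, 2007: 121.0}  # $m per year
price_index = {2005: 1000, 2006: 1030, 2007: 1061}         # e.g. a CPI series

base_year = 2005

def real_funding(year):
    """Deflate nominal funding in `year` to base-year prices."""
    deflator = price_index[year] / price_index[base_year]
    return nominal_funding[year] / deflator

for year in sorted(nominal_funding):
    print(year, round(real_funding(year), 1))
```

In this made-up series, an apparent 21% nominal increase over two years is roughly a 14% increase in real terms; it is the real series that should be contrasted with outcome movements.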
Before performance measurement can begin, your agency must establish a process by which it will measure performance, and agree resourcing and deliverables. To do this, key issues to resolve include:
Figure 4: MAF's performance measurement roadmap (landscape diagram, available as a separate PDF file)
Figure 5: Linking MAF's objectives with its expected outcomes (landscape diagram, available as a separate PDF file)
[13] You may also need to consider the coverage (or targeting) of outputs.
This module explores how your agency can look critically at current measurement frameworks, establish what it knows or does not know about its performance, make the enhancements needed to fill information gaps, and manage expectations about what the measurement framework will achieve and when. Agencies typically have an outcomes framework and performance measurement processes. In the experience of Central Agencies, however, the information required to show prudent use of resources and the effectiveness of interventions is often dispersed across different parts of the agency or sector. Thus an early step in building a measurement framework is to assemble existing information into a 'fact-based story' of how your resources and major outputs have contributed to improving outcomes. This story should be based on evidence, rather than mere assumptions, about the cause and effect of what your agency is trying to achieve. This picture should be built at a strategic level and bring together the performance story for the whole agency. In building the picture, you will better understand how well performance metrics are being produced and used, and will identify gaps and development needs. The following questions may help you to identify information gaps and development needs:
Understanding the responses to these questions will help you identify and concentrate on the priority areas where performance measurement activity needs to be focused.
Durable approaches to performance measurement focus on major purchase decisions, issues and performance questions that your agency, sector and/or stakeholders face on a regular basis. It takes time to develop a fully functional measurement approach, test it, and integrate it into an agency's strategic, planning and business processes. A step-by-step approach will allow you to focus on ministerial and operational priorities now, while working in the long run towards a system that provides information on all key aspects of performance. In the near future, your measurement framework must meet the needs of Parliament, Ministers, managers and other stakeholders.
Communicate and manage expectations
Early in the performance measurement process, a key step is to clearly set out what can reasonably be expected from the process. It is unrealistic to expect that every aspect of your agency's activities will be measured. Once your agency has a tangible work plan, it is imperative that senior managers, Ministers and stakeholders understand what will and will not be delivered in different timeframes. You also need their confirmation that measurement effort is focused on their priority areas.
Build data
Data availability constrains measurement. In the short term, develop measurement frameworks that are 'fit for purpose', using existing data to good effect. In the medium term, gather new data to fill critical information gaps, where it is cost-effective to do so.
Performance measurement is about tracking and understanding relative progress, in order to make more effective progress in the future. Links and correlations are often established by looking at trends over time, e.g. when policies, outputs or resources changed, or by establishing comparison groups, e.g. using disaggregation or international benchmarks. It is common to spend as much time setting up comparators as producing measures.
Quantitative vs. qualitative
Quantitative measurement of every aspect of an agency's function is not feasible, nor is it necessary. Many outcome, impact and output measures are qualitative. Whether measures are qualitative or quantitative, they must give users a useful indication of what has been achieved, and be comparable to other measures.
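One simple comparator of the kind described above is a before/after comparison of an indicator around a known change in policy or funding. A minimal sketch, using made-up indicator values and a hypothetical intervention year:

```python
# Sketch: before/after comparison of an outcome indicator around an
# intervention date. All figures are illustrative, not real data.

indicator = {2000: 462, 2001: 455, 2002: 404, 2003: 461,
             2004: 435, 2005: 405, 2006: 393, 2007: 422}
intervention_year = 2004  # hypothetical year of a funding increase

# Split the series into pre- and post-intervention periods.
before = [v for y, v in indicator.items() if y < intervention_year]
after = [v for y, v in indicator.items() if y >= intervention_year]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
change_pct = 100 * (mean_after - mean_before) / mean_before
print(f"mean before: {mean_before:.1f}, after: {mean_after:.1f} "
      f"({change_pct:+.1f}%)")
```

Such a comparison only suggests a correlation. As noted elsewhere in this guide, external factors can also drive indicator movements, so disaggregation or comparison groups are still needed before claiming attribution.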
It is important that the results achieved through the delivery of services be linked back to the resources used to deliver them, because this establishes the cost-effectiveness of those services. You can do this by costing the outputs that you believe are primarily responsible for delivering results. Managers manage the resources, capabilities and processes used to produce outputs. Well-run agencies must know how their resources generate results, and build an understanding of how the reallocation of resources could generate even better results. To help with your thinking in developing your overall framework, Table 2 below gives some common measures of agency or sector performance that may be used at input, output, impact or outcome level.
Table 2: Common measures
This module explains the importance of developing a stakeholder engagement plan to further your measurement process, build support for your measurement approach, and ensure that your process delivers the information needed by internal and external stakeholders. This module also provides information on how to undertake stakeholder analysis and how to effectively analyse stakeholder relationships, in order to help you develop an engagement plan. A 'stakeholder' is an individual or a group who can affect, or is affected by, the achievement of a shared goal. This shared goal could be a particular outcome, or outcomes, or the delivery of specific outputs or services. Key stakeholders typically include other agencies, external organisations or other parts of the same agency. Each stakeholder will have a different kind of relationship with your agency. Hence, it is worthwhile investing time in understanding how to manage these relationships to good effect.
As part of your performance measurement approach, you will need an engagement plan that states who you need to talk to, about what, and when. Having such a plan will clearly set out for you when and how you will engage with your key stakeholders in order to advance your performance measurement. The plan will aid the allocation of your resources and will ensure that you are able to advance your performance measurement in a collaborative fashion. You need an engagement plan in order to:
The plan should be shared with all the relevant stakeholders and be revised as events progress and dynamics change.
Stakeholder analysis is a tool that is useful in several contexts. It is a crucial aspect of agency performance measurement, as it helps agencies to understand who their outputs, impacts and outcomes are shared with, and how. It therefore underpins collaborative approaches to measuring performance. Stakeholder analysis can be conducted at any level in an agency and can be used internally and externally. It allows agencies to understand who their stakeholders are and the nature of the relationship with them. It also enables agencies to see how they need to work with their respective stakeholders, thus facilitating the development of strategies for engaging with stakeholders. Stakeholder analysis can provide the user with the following benefits:
What is involved?
Stakeholder analysis clarifies the nature of the relationships a specific group within your agency has with stakeholders by assessing the power and interest those stakeholders have over the common goal you share. All stakeholders will have varying degrees of power and interest. Figure 6 outlines the basic dimensions of the power-interest relationship that the group will have with its stakeholders. It broadly demonstrates the level of effort and the nature of the relationship that will need to be maintained with stakeholders of each relationship type.
How to conduct stakeholder analysis
Figure 7 shows the stakeholder analysis process. This process allows you to identify how you should coordinate with each stakeholder, and which stakeholders should be given priority in the engagement plan.
There are four broad steps in the process:
1. Define the shared goal: clearly define the shared outcome or shared output for the stakeholder group: what are you trying to achieve with the service you are delivering, the policy you are developing or the project you are planning?
2. Define the stakeholder group: list the broad stakeholder group for the goal(s) in question. Look internally and externally and cast the net as widely as possible. Consider only stakeholders who can influence progress towards the shared goal, both within the State sector and outside it.
3. Analyse the relationships: the analysis should cover a number of steps in order to provide a basis for developing the engagement plan. However, there are different approaches to the analysis, depending on the resources available. The key aim of the analysis is to understand the nature of the relationships with each stakeholder, and to prioritise the engagement plan accordingly. The next section outlines the detailed steps, highlights which steps are essential, and provides some examples.
4. Develop the engagement plan: the engagement plan should build on the information developed during step three. It should promote coordination by setting out how the stakeholders can work together. It should take into account the nature of the relationships between the user and the different stakeholders. The plan should:
This section details how to analyse stakeholder relationships, a key part of the stakeholder analysis process described above. The aim of the analysis is to understand the nature of the relationships with each stakeholder, and to prioritise the engagement accordingly. Best results will be gained from following all of the steps; however, for those with limited resources, the essential steps are marked.
Step 1 [essential]: map stakeholders' level of interest and power
Map the stakeholders onto a matrix that distinguishes the levels of power and interest they have over the goal. To assess the level of interest, consider how high the goal in question is in the stakeholder's priorities. In terms of power, consider the ability of each stakeholder to influence or hinder progress towards the goal in question. For example, your agency may undertake a stakeholder analysis and identify Agency A as a major stakeholder in achieving a particular outcome. It may also identify private organisation B as key to achieving the outcome, along with two Crown Entities, C and D, who are also important in achieving the outcome. However, A, B, C and D all have different degrees of power and interest in achieving the outcome from their own perspectives. Figure 8 below illustrates the power/interest relationships for the four stakeholders in this particular example.
Step 2 [essential]: determine types of stakeholders
The types of relationships with each stakeholder should be analysed based on their power and interest levels. Figure 8 above illustrates the relationships that will exist, based on these criteria. Those who are high on both axes should be a high priority for the engagement; those who are low on both axes should be a low priority.
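The power/interest mapping in Steps 1 and 2 can be expressed as a small classification routine. The scores given to A-D and the quadrant labels below are illustrative assumptions based on the example in the text, not values prescribed by this guide.

```python
# Sketch: classifying stakeholders into power/interest quadrants.
# Scores (0-10) and the threshold are illustrative assumptions.

stakeholders = {
    "A": {"power": 7, "interest": 8},   # another agency
    "B": {"power": 3, "interest": 9},   # private organisation
    "C": {"power": 8, "interest": 4},   # Crown Entity
    "D": {"power": 9, "interest": 9},   # Crown Entity
}

def quadrant(power, interest, threshold=5):
    """Return the engagement approach for a power/interest pair."""
    if power >= threshold and interest >= threshold:
        return "high priority: engage closely"
    if power >= threshold:
        return "keep satisfied"
    if interest >= threshold:
        return "keep informed"
    return "low priority: monitor"

for name, s in stakeholders.items():
    print(name, "->", quadrant(s["power"], s["interest"]))
```

With these assumed scores, D and A fall in the engage-closely quadrant, C should be kept satisfied, and B kept informed, mirroring the narrative example.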
Stakeholders with a high level of interest but a low level of power will need to be kept informed of progress, whilst those with high power but a low level of interest should be kept satisfied with the overall progress, through ongoing information and involvement in planning. Following the example above, the Crown Entity D would be the top priority for engagement in this case. Agency A and the other Crown Entity, C, would be medium priorities for engagement. The private company B would need to be kept informed but would not be the focus of the engagement plan due to its low level of power.
Step 3 [optional]: consider the stakeholder's perspective
A more in-depth analysis will provide more detailed consideration of the nature of the relationships, by breaking the level of interest down into two further criteria:
Note that mandate should not be confused with level of power. Power refers to the overall influence (direct or indirect) the stakeholder has over the goal. Mandate refers to structural, official claims, or the level of legitimacy the stakeholder may have over the shared outcome. Those stakeholders that have a strong mandate to act, combined with a high degree of urgency and a high degree of power will need to be given priority in the subsequent engagement plan. Figure 9 below illustrates how this analysis can be laid out and labels the key stakeholder types.
Step 4 [optional]: consider characteristics of each stakeholder
For the higher priority stakeholders, it is also useful to consider what relationships they have with other stakeholders and what kinds of relationships these are. For example, are there strong agreements (alliances) in operation between any of the stakeholders? Or do any of the stakeholders have conflicting interests with regard to the goal? It is also useful to consider what resources and capabilities these stakeholders have at their disposal, as this constrains what they can do. The ease of access the user has to each stakeholder can also support or inhibit the engagement process. For example, can the stakeholder easily be met on a regular basis? Can the senior leaders of the stakeholder group be contacted? Is face-to-face engagement impractical due to geographic constraints? Using the previous example, Table 3 below illustrates the characteristics of the stakeholders. Overall, access to stakeholders A and D is good, but access to B and C is not, due to those organisations being geographically separate from the user. However, B and C have more resources to contribute than A and D. There are also political issues at play in this example, as B and C have been in conflict with each other on this issue for some time. All of these dynamics must be accounted for.
Table 3: Characteristics of stakeholders
Step 5 [essential]: draw together the overall picture and prioritise stakeholders
Once the above steps have been completed, a single picture of the overall stakeholder environment can be developed. This can be done in a table that lists the stakeholders and their characteristics, and prioritises which stakeholders should be the focus of the engagement plan. The columns given in Table 3 above will depend on which steps of the above analysis were conducted. Following the example used previously, Table 3 illustrates how D and A are the highest priority stakeholders who will need the most careful engagement. This is due to their own characteristics and the fact that they are allied with respect to this outcome. The conflict between B and C should also be considered. B should be given higher priority than C in the engagement as they have a stronger mandate over the outcome and C cannot be accessed easily for engagement, although they have a strong resource pool available. The user would need to focus on how to bring D into some kind of formal arrangement so they can be a key contributor towards achieving the overall goal. A will also need to be closely engaged but account should be taken of their close relationship with D. B and C will also need to be engaged with, but will need to be carefully managed due to the conflict between them. However, this aspect of the engagement should focus on C as the priority given their larger degree of power over the issue.
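The 'single picture' in Step 5 can be sketched as a weighted ranking that combines the criteria from the earlier steps. The weights and scores below are purely illustrative assumptions; this guide does not prescribe a scoring formula, and a real exercise would weigh qualitative factors (alliances, conflicts, access) alongside any scores.

```python
# Sketch: drawing the stakeholder analysis together into one prioritised
# list. All scores (0-10) and weights are illustrative assumptions.

stakeholders = [
    # (name, power, interest, mandate, urgency)
    ("A", 7, 8, 6, 5),
    ("B", 3, 9, 7, 6),
    ("C", 8, 4, 4, 3),
    ("D", 9, 9, 8, 8),
]

def priority(power, interest, mandate, urgency):
    # Simple weighted sum; power and interest dominate, echoing the
    # power/interest matrix, with mandate and urgency as refinements.
    return 0.35 * power + 0.35 * interest + 0.15 * mandate + 0.15 * urgency

ranked = sorted(stakeholders, key=lambda s: priority(*s[1:]), reverse=True)
print([name for name, *_ in ranked])
```

With these assumed scores the ranking places D first and A second, consistent with the worked example, with B and C following.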
Module 1 outlined the three levels of performance:
The key to successful performance measurement is the effective articulation of outcomes, intermediate outcomes and outputs, against which progress and costs are measured. This module describes how to define outcomes, intermediate outcomes and outputs in a way that enables effective performance measurement. To do so, the module looks briefly at the characteristics of a good outcome, intermediate outcome and output, and at how these are measured. The module also refers to tools that can help define these aspects of your measurement framework, and concludes with some good practice examples. The next module will then look in detail at constructing particular measures at each of the three levels.
Outcomes are specific characterisations of what an agency or sector is working to achieve, rather than visionary or aspirational statements that are difficult to measure. Outcomes are defined as: "a condition or state of society, the economy or the environment, and include changes to that condition or state. In effect, outcomes are the end result we [want] to achieve for New Zealanders. Outcomes describe 'why' we are delivering certain interventions on behalf of New Zealanders 14". Outcomes must be clearly defined before you try to define what outputs and impacts will help achieve them. Outcomes and intermediate outcomes differ from outputs in that they do not specify what is being provided (goods and services), but rather the changes expected in users' lives after outputs are delivered. By definition, intermediate outcomes and outcomes are future-looking, generally reflect how much value is delivered to users, and can be measured only after outputs are delivered. One key to performance measurement is to define clearly both the impacts at the intermediate outcome level and the long-term outcomes you are working towards, in a way that reasonably represents the major results expected from your outputs. If your outcomes and impacts can only be described as a 'grand vision', they are not clearly defined and measuring progress against them will be difficult. If this is the case, it may be visible as a 'disconnect' between outputs and proposed intermediate outcome impact measures.
Measuring outcomes
Outcomes and intermediate outcomes measure or track our achievement of strategic (rather than delivery) goals. Results at these two levels thus shape policy and service development in the medium term. Because an agency wants to know where outcomes are being achieved, as well as how interventions improve poor outcomes, multiple measures will typically be needed for each of the 'vital few' outcomes. These include both outcome indicators and impact measures.
It may seem counter-intuitive at first, but outputs can have a positive impact without contributing to the outcome sought. The disconnect may exist due to poor targeting, which happens when outputs fail to reach the target population. Reasons can include capture by other populations, poor resource allocation processes, or insufficient outputs for the size of the target group. Here is a theoretical example: the education sector seeks to lift the tail of under-achievement. Research shows that under-achievement is more prevalent in low-decile schools. Research also shows that intervention X can improve the educational achievement of individual children. However, measures show that intervention X does not contribute to improving the outcome sought, which is reduced rates of under-achievement. This may be because, while intervention X helps average children become excellent in high-decile schools, poor targeting meant it was not delivered in sufficient volumes in low-decile schools.
14 See the guidance developed by The Treasury on improving accountability information at: https://psi.govt.nz/iai/default.aspx
Intermediate outcomes or impacts are the critical middle layer of any measurement framework. Impacts are described as: "the contribution made to an outcome by a specified mix of interventions. It normally describes results that are directly attributable to the interventions of a particular agency. Measures of impact at the intermediate outcome level are the most compelling performance indicators for the State sector, as they demonstrate the change in outcome attributable to the specific interventions of the agency. Performance information around impacts enables Ministers and the public to determine the effectiveness of agency performance 15". The intermediate outcome level is important as it allows leaders to track progress towards outcomes, assess what difference they are making in the short/medium term, check that the right mix of outputs is in place, and assess cost-effectiveness by direct or indirect means. Hence, the middle layer is crucial to an agency's performance measurement process in multiple ways.
Measuring intermediate outcomes
A key challenge in developing any performance monitoring approach is to link outputs to the impacts at the intermediate outcome level, to the outcomes expected, and (when feasible) to costs. At this stage, the goal is simply to propose what the linkage is; Module 6 deals with proving that the linkage exists. This stage consists of considering how you will do this, and identifying the data you will need. See the section 'Definition tips' for specific tips on identifying linkages.
15 Ibid.
Outputs are "those final goods and services that are produced by one organisation for use by another organisation or individual 16". Outputs define your major products or services, the timeframe in which they are delivered and the cost to deliver them. Outputs are the 'building blocks' we use to achieve impacts and outcomes. A clear picture of outputs must therefore be provided early in the development process. This picture should draw heavily on accountability documents, output plans, business plans and other documents that articulate the services that agencies provide. Output statements describe key goods and services in reasonable detail. Output statements and measures need not specify everything that an agency produces, but must define key goods and services in a manner that both complies with the relevant statute 17 and is useful within the measurement framework. Outputs should be specific, homogeneous and clearly articulated in terms of their nature and their performance dimensions.
Measuring outputs
Key measures of outputs typically assess: quality, quantity, targeting, timeliness, location, cost and coverage 18. Coverage measures confirm that services are reaching key groups. Trend information can also allow your agency to assess how production and efficiency have changed, and to test whether impacts or outcomes changed as predicted by the intervention logic. Output measures are crucial when attribution of impact is poor or disputed. In such cases, output measures may be used as important proxy measures: they show whether, when impact is unknown, services reached the intended group at the right time and price. They also show that you managed your resources prudently, i.e. in an economical and efficient manner.
16 Ibid.
17 The Public Finance Act 1989 or the Crown Entities Act 2004.
18 See also the guidance developed by the OAG at: www.oag.govt.nz/2002/reporting/docs/reporting.pdf
In developing your definitions, remember that you will need to assess what can be delivered in different timeframes, given current data and the data improvement plans you will put in place. Be realistic about what can be achieved in the time you have available. A key purpose of performance information is to enable sensible reallocation of resources 19, that is, to reprioritise spending toward the activities that achieve the best outcomes. This requires open and engaged leaders. The governance structures you put in place will be critical to achieving this. At a working level, these structures may focus on delivering good information. At executive level, you will need to engage the leaders with the right authority to make resource decisions about your agency or sector priorities.
Reflect ministerial priorities
Intermediate outcomes and outcomes should reflect ministerial priorities and both your ongoing operational and statutory responsibilities. They should also be coherent across your sector. Hence, the engagement strategy defined in Module 3 should be drawn upon at this step, to create effective engagements upon which shared impacts and outcomes can be constructed. Other strategy documents produced by agencies should also be coherent and clearly linked with the definitions of the outcomes and impacts used within the performance measurement process.
Confirm your framework is fit-for-purpose
Before initiating the more labour- and cost-intensive processes of building data acquisition and measurement systems, confirm that your framework is coherent by using simple tests. You will have your own ideas on how to do this against the needs of your sector. Two generic sets of tests are, however, laid out below to help ensure you have 'checked all the angles'. The first test consists of answering some principal questions. Ask yourself if your measures:
The second test uses the FABRIC touchstones 20 , which are also qualities your leaders will typically look for from the measurement process as a whole:
19 Including funding, infrastructure and capabilities more generally.
20 After the United Kingdom's FABRIC principles. See www.hm-treasury.gov.uk/media/3/7/229.pdf
Some of the techniques that can be used to specify outcomes, intermediate outcomes and outputs are outlined below. Other methods for specifying outcomes and impacts are described on the Pathfinder site 21.
Scenario-based planning
Scenario-based planning can be a useful tool for identifying and testing the outcomes, impacts and outputs you need to measure. It may also help identify risks that need monitoring. Scenario-based planning can be especially useful for defining outcomes, as it can provide the user with a detailed view of a range of possible futures. These preferred futures can then be linked back to the present day to assess how feasible it is to achieve them. A detailed articulation of outcomes can then be generated, based upon those preferred futures deemed achievable. The scenario-based planning process will thus help the user see how these futures might be achieved, starting from a present-day context, and what may threaten or hinder progress towards them. For example, there are a number of trends currently in play in the sustainability sector that will shape the way the global and local environments play out in the long term. By considering these trends and drivers, and the range of both positive and negative 'worlds' that may be realised, you can begin to identify what will shape a positive future for New Zealand. You can then characterise this future as an outcome with specific definitions of what it will look like. At the same time, knowledge will have been built around what risks threaten the realisation of that future and what intermediate outcomes need to occur in order to achieve the overarching, long-term outcome.
Visioning to generate outcomes
Scenario-based planning can be time intensive and may not be appropriate for all agencies. A more rapid but less robust way of looking to the future is visioning. The basic approach is to hold participative, facilitated sessions with relevant stakeholders.
The sessions encourage creative thinking about desirable futures and focus on building convergence among the group around these futures. The process also promotes more detailed generation of outcomes and impacts by focusing on analysis of future contexts, based on what is known about the present and any prominent trends shaping the future.
21 For example, see http://fin.publicservice.govt.nz/pathfinder/Links.asp
The following example is a shared outcome of the border sector group of government agencies led by New Zealand Customs. It provides a description of the rationale for having this outcome, along with details of what realising this outcome will look like. It should be noted that this outcome is a draft and a work in progress.
Source: New Zealand Customs Service
Outcome Example 2: reduced risk from insecurity
Figure 10 illustrates the characteristics of a particular intermediate outcome for the New Zealand Defence Force (NZDF), related to managing regional and global risks. The diagram articulates how the services NZDF provides contribute to the intermediate outcome, and what activities it needs to undertake in order to achieve it. The activities have been articulated within a range of 'Employment Contexts' (ECs), given in the table to the right of, and below, the diagram. In summary, the figure gives an overall picture of what will need to be achieved to realise the intermediate outcome. By doing so, it gives the intermediate outcome definition, context and a logical basis.
Figure 10: New Zealand Defence Force's articulation of an intermediate outcome (landscape diagram, available as a PDF file)
This module explains how to develop measures and indicators, and how to ensure your agency is collecting the data it needs to chart progress and report to leaders and key stakeholders. This module also contains a useful set of graphical illustrations of various types of performance measures from different sectors and agencies in the State sector. These aim to provide you with a useful 'aide-memoire' for certain aspects of measurement discussed in this guide.
Once your framework articulates performance in terms of outputs, intermediate outcomes and outcomes, you can develop measures to chart progress. There are six steps in this process:
Because each component depends on the others, you need to undertake these activities in parallel and, where possible, iterate what has been produced as progress is made. If major problems are identified, you should plan how to address them in the future.
What good measures look like
The measures you use should be fit for purpose in terms of representing progress at each level of the performance framework. Measures may cover any aspect of the output, impact or outcome that is relevant to the overall performance story. Where possible, a relative scale should be used to represent progress against specific measures. Remember that impact measures are among the most important measures in your framework. They provide feedback on performance by linking the output and outcome levels. They will help your departments make good decisions on where, when and how to intervene. They help you fund the interventions that deliver the most benefit to New Zealanders from their tax dollar. They help you defend funding, by showing results to Parliament, your managers, your stakeholders and the public.
Using the FABRIC principles
Applying the FABRIC principles may help you to prioritise and refine measures. Further, the FABRIC framework outlines quality criteria for performance measures. Table 4 outlines the relevant questions to ask yourself to test the robustness of a specific measure against these criteria.
Table 4: Testing a measure using the FABRIC criteria
Don't forget that external factors beyond your control can affect intermediate outcomes and outcomes. In terms of reporting your measures, beyond the principles listed above you should also ensure that your measures are:
Identifying meaningful comparisons
While you may initially want to put your efforts into developing basic measures, you may soon need to focus on refining comparison groups. Performance measurement is most useful when you can make meaningful comparisons. Year-on-year comparisons can be made using the basic data needed to produce measures. Other comparisons may require comparative statistics or extra data. New data may also be needed to allow for the effects of external factors, e.g. market conditions. Usual types of comparisons include:
Gathering data
Data needs to be gathered to provide an evidence base for judging progress against each level of the framework. The key requirements for any data set are that it covers key aspects of performance, and that measures are valid and verifiable representations of each key aspect of performance. For measures to be valid, the underlying data must be valid. Data may be quantitative or qualitative and may express different aspects of the output, intermediate outcomes or outcome, as long as the measures themselves capture changes in performance. Broadly, there are two kinds of data that may be used: primary or secondary.
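As a hypothetical sketch of the year-on-year comparisons mentioned earlier, the fragment below computes annual percentage changes from a basic indicator series. The indicator and all figures are invented for illustration.

```python
# Hypothetical year-on-year comparison of a single performance measure.
# The series below (e.g. services delivered per year) is invented.
yearly_values = {2004: 1180, 2005: 1240, 2006: 1325, 2007: 1310}

def year_on_year_change(series):
    """Percentage change of each year's value against the previous year."""
    years = sorted(series)
    return {
        year: round(100 * (series[year] - series[prev]) / series[prev], 1)
        for prev, year in zip(years, years[1:])
    }

print(year_on_year_change(yearly_values))
```

A dip such as the final year's value would prompt a check for external factors (e.g. market conditions) before attributing the change to agency performance.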
Table 5 below summarises examples of the different kinds of metrics developed to measure outputs, impacts and outcomes.
Table 5: Examples of performance measures by data type
Trial production
Trial production may occur initially from limited data sets, or using labour-intensive methods. In the early days this can help you make quick progress. If you are sure you have good definitions of measures, and the data is in operational data sets, you may write dedicated software instead. In most cases, you will produce measures using a mix of existing systems and ad hoc analysis, and adding new measures to your framework production runs will result in learning and change. It is important not to over-invest in software development until you are fairly sure you have a durable solution, or are sure that the software can be adapted to meet emerging needs.
Refining the framework
Irrespective of which development path you follow, it is important to:
This section should help you develop your own sets of measures to track progress in different ways. It sets out what each type of measure involves, why it is useful, the type of management questions these measures can inform, and some simple approaches to constructing the measure. Examples and illustrations are included alongside the measures. More information about how managers can use measures to make informed management decisions is provided in Module 6.
Multi-level measurement
Figure 11 illustrates indicators under development by the Workplace Group of the Department of Labour for measuring progress towards their outcomes. Their approach has been to evolve a multi-level outcomes hierarchy, mapping Output Sub-Classes through various levels of intermediate outcomes towards their overall outcome: "Productive work and high quality working lives". The figure illustrates the indicators for the intermediate outcomes of the Workplace Group section of this outcome hierarchy. Please note that this work is still in progress.
Figure 11: Department of Labour's performance indicators for the Workplace Group (landscape diagram, available as a PDF file)
Measuring capacity growth
The example below shows the type of picture that can be built using this type of data.
Example: measuring the capacity growth of the New Zealand Defence Force (NZDF)
When measuring its capacity to deliver outputs and outcomes to stakeholders, the largest determinant for NZDF is the availability, skills and experience levels of its personnel. One of NZDF's highest priority strategic objectives focuses on ensuring that NZDF has the right personnel to deliver outputs and outcomes, as shown in red in Figure 12 below.
Figure 12: NZDF's balanced scorecard - link between personnel and output delivery (landscape diagram, available as a PDF file)
NZDF uses three key measures for this strategic objective, as shown in Figures 13, 14 and 15:
Please note that efficiency measures for personnel are still being developed. By measuring total regular force, non-regular force and civilian numbers, as shown in Figure 13 below, NZDF tracks whether overall capacity is increasing or decreasing. This is segmented by Navy, Army and Air Force and compared to planned personnel numbers. Composition of the workforce, including the availability of personnel in each trade and experience level, is a key determinant of NZDF's ability to deliver outputs. Tracking rank and trade shortfalls, as shown in Figure 14, shows whether the composition of the workforce has changed in each rank and trade, and whether strategic initiatives have been effective. Rank and trade shortfall information is complemented by several personnel Key Performance Indicators in the NZDF Operational Preparedness and Reporting System (OPRES), which shows a clear linkage to capability delivery. Due to the sensitive nature of this information, rank and trade data is not currently reported externally. Attrition data, which is a leading indicator of capacity and future output delivery, is tracked for each Service. Figure 15 charts some of this data over time. Data is compared to planned attrition levels, then segmented further into ranks and trades for each Service. This shows whether the current NZDF strategic initiatives are having the intended effect in the right areas, or whether alternative initiatives need to be developed. The hypothetical example in Figure 16 below shows the type of comparison that can be made when cost measures are included.
Tracking funding flows
Example: tracking funding flows in a sector
Figure 17 below presents an example of how to analyse funding flows. The figure illustrates how $1 billion of funding invested in the sector has been tracked through each of the agencies that make up that sector, in order to determine which areas have grown most, and by what proportion. The bolded lines show which outputs have been allocated the most funding.
Demonstrating value for money
Example: demonstrating value for money in the transport sector
Table 6 shows various indicators relevant to the land transport sector, and compares them to the investment in safety funding in that sector. The indicators show that increased safety funding in the sector has had a number of positive impacts for New Zealanders. There is no indication, at this time, that diminishing returns have set in.
Table 6: Measures of impact in the transport sector
Assessing reach and coverage
Example: assessing housing allocation to those most in need
One of the guiding principles of the Australian Commonwealth State Housing Agreement is that those in greatest need have first access to government-supported housing. The proportion of new public housing allocations made to those in greatest need in 2005-06 is presented in Table 7 below. A high value for this indicator, particularly for short time frames, indicates that those in greatest need are gaining access without waiting long periods of time.
Table 7: Public housing - proportion of new allocations to those in greatest need, 2005-06
Source: Australian Productivity Commission 22
Measuring intermediate outcomes
Example: tracking student retention in the Australian education sector
Figure 18 below outlines the performance indicators for the Australian Government's national goals for schooling in the 21st century. It shows the outcome indicators for the overall goals grouped by equity, effectiveness and efficiency. One of the goals is that schooling should develop fully the talents and capacities of all students. Under this goal is the objective to develop fully the talents and capacities of young people through increased participation in higher levels of schooling. A measure for this goal is retention of students between years 10 and 12, contributing to the equity and efficiency indicators. Figure 19 compares rates of retention by state over time.
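As an illustrative aside, a retention measure of this kind can be computed as a cohort's year-12 enrolments divided by the same cohort's year-10 enrolments two years earlier. That definition is an assumption for this sketch, and all enrolment figures below are invented.

```python
# Hypothetical sketch of a years 10-12 retention indicator.
# Assumed definition: year-12 enrolments in year t as a percentage of
# year-10 enrolments in year t-2 (the same cohort). Figures are invented.
year10_enrolments = {2003: 10000, 2004: 10400}
year12_enrolments = {2005: 7600, 2006: 8100}

def retention_rate(y12, y10, year):
    """Percentage of the year-10 cohort still enrolled in year 12, two years on."""
    return round(100 * y12[year] / y10[year - 2], 1)

for year in sorted(year12_enrolments):
    print(year, retention_rate(year12_enrolments, year10_enrolments, year))
```

Comparing these rates across states and over time, as in Figure 19, is what makes the measure informative; a single year's figure says little on its own.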
22 Steering Committee for the Review of Government Service Provision 2007, Report on Government Services 2007, Productivity Commission, Canberra, www.pc.gov.au/gsp/reports/rogs/2007/housing
The first part of this module explains how to identify and test the causal linkages between the different levels of the framework, so that you can link results back to resources. Critical links include:
The second part discusses how the performance story should inform managerial decision-making. This part underpins improvements in strategy, policy design and service delivery. In each major area of activity, your agency needs to know it has used resources prudently (maximised production), and that outputs are impacting on outcomes to the greatest extent possible.
There are multiple means of exploring linkages; some simple ones are outlined below.
Input-output linkages
Input-output linkages are relatively easy to demonstrate, provided that you have homogeneous outputs (or at least a relatively homogeneous output mix) and good price information. Given that both must be reported under either the Public Finance Act or the Crown Entities Act, most agencies should be able to show how the real prices 23 of major outputs have changed over time. Increased prices are not justified unless intermediate and end outcomes improve. Similarly, improved quality or capability may drive up costs, but to represent value-for-money they must also improve results. So even when output quantity is uncertain, growth in real prices is expected to have a commensurate, attributable effect on intermediate and end outcomes.
Output-intermediate outcome and output-outcome linkages
These linkages can be examined most simply by plotting against time the quantity and quality of major outputs, and the intermediate outcomes and outcomes they are supposed to create. Price per unit can be a useful proxy if quality is hard to measure. Allowing for lagged outcomes, you can then see if there is any pattern. When positive change is occurring, correlation scores, regression or other multivariate methods help you gauge the strength of linkages between outputs and intermediate and end outcomes. Some methods also test the strength of non-output factors on impacts and outcomes. For instance, do market conditions or improved services best explain falling numbers of beneficiaries?
Focusing on major impacts and linkages
Linkages can be complex and confusing. It is therefore important that you keep reporting simple and comprehensible by focusing on the most important linkages within your intervention logic. One way of identifying the linkages to focus on is to ask simple questions like: "Are we investing significant resources to get this result?" and "If this linkage was not strong, would I still invest those resources?" In many cases, services have one or two first order (direct) impacts, which were used to justify funding. Significant measurement effort is often spent on ensuring delivery of major outputs, and confirming their direct impacts. One or more services may also work together to deliver key impacts. In such cases your analytical focus may be on linking aggregate costs to impacts. Subsidiary impacts and second order (enabling or indirect) impacts also exist, but either the lesser importance or scale of these impacts, or attribution challenges (especially with second order contributions), may mean that less effort is invested in reporting them.
Determining the nature of links
The key to discerning where the links between levels sit is to analyse the performance data and indicators. The data gathered at each level of the measurement framework should give indications as to where the major linkages lie between outputs and intermediate outcomes, and between intermediate and end outcomes. The indications will normally come in the form of correlations. In many areas it may be difficult to judge a link precisely, but it is only necessary to judge the relative importance of the links between the levels so that progress can be monitored. For example, the Department of Labour has been able to discern the links between its outputs in the area of building international links and the impacts it is having on the development of broader labour policy and the enhancement of New Zealand's international links, by conducting structured interviews with expert agency staff in this area.
Determining cost-effectiveness
Measures of cost-effectiveness assess the value-for-money of services, relative to alternative services or previous time periods. Cost-effectiveness is not a measure of how much money was spent per unit of output - that is efficiency.
Links between outputs and impacts must be established before cost-effectiveness can be measured. This is because cost-effectiveness relies on knowing the impact (i.e. effectiveness) of different outputs, and outputs are what we hold cost information for. Cost-effectiveness is determined by following these steps:
1. Ascertain the costs apportioned to each output. This is a matter of determining how much money has been spent on each activity within an output. It should be relatively straightforward to ascertain the level of expenditure for each output; moreover, this information should be available within the annual Estimates produced by your agency.
2. Define the logic of how these outputs are linked to impacts. This step is about ensuring that causal links are logically defined between the outputs and impacts, as discussed above.
3. Aggregate output costs to the impact level. This step uses the links established above to establish how much cost is being pulled through into each impact. Each impact can then be given an approximate costing. The obvious complexity here is that multiple outputs may contribute to multiple impacts. It may be possible to weight relative costing contributions in some cases; use of primary and secondary links (described above) should help with this process. If this is not possible, it is best to look at the overall picture and try to discern what overall cost-effectiveness was achieved through multiple impacts.
4. Define the overall cost-effectiveness. In this step a cost-effectiveness measure is established. The measure will be a summary of how much has been spent in order to achieve a particular level of impact, combining the aggregated costs and the impact measure.
5. Iterate the process over time in order to track changes in cost-effectiveness. To track effectiveness over time, the above process needs to be repeated in order to identify trends and changes.
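The steps above can be sketched in a minimal, hypothetical form. The output costs, link weightings and impact results below are all invented, and the simple proportional weighting is just one possible way to apportion costs shared across impacts.

```python
# Step 1: hypothetical annual cost of each output ($m). Figures are invented.
output_costs = {"output_A": 4.0, "output_B": 6.0}

# Steps 2-3: assumed causal links, expressed as the share of each output's
# cost attributed to each impact (an illustrative assumption only).
links = {
    "output_A": {"impact_1": 1.0},
    "output_B": {"impact_1": 0.5, "impact_2": 0.5},
}

# Invented impact measures (units of impact achieved in the period).
impact_results = {"impact_1": 350, "impact_2": 120}

def cost_per_unit_of_impact(costs, links, results):
    """Step 4: aggregate output costs to each impact, then express
    cost-effectiveness as cost per unit of impact achieved."""
    impact_costs = {}
    for output, shares in links.items():
        for impact, share in shares.items():
            impact_costs[impact] = impact_costs.get(impact, 0.0) + share * costs[output]
    return {impact: round(cost / results[impact], 4) for impact, cost in impact_costs.items()}

print(cost_per_unit_of_impact(output_costs, links, impact_results))
```

Repeating the calculation each period (step 5) and comparing the ratios over time, or against alternative outputs, is what turns this into a cost-effectiveness trend rather than a one-off snapshot.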
23 Real prices allow for inflation using the most appropriate price index (e.g. CPI, PPI, LPI, etc)
To get value, performance measurement and management processes must be embedded into your institutional planning and management frameworks. Only by doing this can you ensure that performance information gets used in decision making. Figure 1 shows how this is done in a generic sense. You need to consider how information will feed the strategy, policy and operational decision-making processes in your agency and sector. Managing the performance cycle shown in Figure 1 should result in improvements in four key areas:
In summary, performance measurement systems simply allow managers to focus more on improving results.
Good measurement empowers good leaders
Measurement and reporting systems are a means to an end, not an end in their own right. Having invested in getting good information, you now need to ensure that information is used well by decision-makers. This means identifying when key decisions get made, e.g. in the annual or strategic planning process. You can then work backwards from these decision points to decide who needs to receive what information, and when. Information packs are typically required to inform:
As your performance measurement framework becomes well developed, you will generally find that the information base is being updated and reformatted to meet the needs of different users.
Preparing a development plan
You need a development plan to ensure that you have a systematic approach to developing your monitoring system and capabilities, and to establish a baseline against which you can track progress. In producing information packs and measures, you will learn what decision-makers need and where data and measures need improvement. With experience, you will build confidence that your measurement framework is robust, and work to bring down the costs of data gathering and measurement, for instance via case management and reporting systems. Common development objectives are to provide more (or better) comparative measures, and better linkage and attribution of intermediate and end outcomes back to your outputs and resources. Reporting expectations also tend to change over time. For instance, your agency may have to respond to new requirements; recent examples include Cabinet-endorsed processes such as the Review of Accountability Documents (RoADs) and the Capital Asset Management (CAM) system changes. This learning should be captured in a development plan for your performance management system, which identifies the tasks, timelines and resources required to improve over time. This plan provides an agreed path forward, against which progress can be monitored.
Developing capability to maximise measurement benefits
Three capabilities are needed to run and get benefit from performance monitoring systems. First and foremost, leaders and senior managers need to understand and be able to apply the results in their decision-making. Second, technical expertise is required to develop useful measurement and monitoring frameworks, produce attributable measures, and report in a simple, comprehensible manner.
Third, the organisation must gather and collate the data required to produce those measures, and acquire comparative statistics where needed. In all of these areas, systems must be developed to ensure reports are useful, standardise methods, acquire data at least cost, and reduce the costs of analysis as far as possible. Above all, governance processes must ensure that performance information is used well. Once your agency has established some performance measurement capability, the knowledge and understanding gained from that process need to be used to enhance the direction of the agency and to indicate where internal changes may be needed to improve performance. Such changes may come through recruitment, reorganisation, changing processes, strategy development, changes of governance, changing systems, staff performance, or re-aligning stakeholder relationships. These key areas of capability are outlined in the Central Agencies' Capability Toolkit. The toolkit also explains how the planning processes used within agencies should focus on outcomes.[24]

Iterative development of strategies, policies and plans

The performance measurement process confirms, on an ongoing basis, that major outputs are delivered well and are having the expected impact. Aggregate, iterative measurement tracks how well you are progressing towards your performance goals. Ex-ante performance expectations (e.g. targets, or improvement on historical results) allow you to build a clear picture of performance in key areas and as a whole. Iterative development relies on leaders having the courage to adapt their strategies and plans as the evidence builds that needs, delivery and results may be different from those first envisaged.

Decision-makers

Progress can only be made with the support of your key decision-makers.
The best situation is one in which strong demand exists from Ministers, Boards and/or Chief Executives, because they realise that they need good information to manage well and to meet the expectations of those around them. Both RoADs and CAM are designed to lift these expectations, but the main beneficiaries of improved performance information are your managers and your clients. In the end, any State sector agency needs to be results-oriented so that it can deliver the best results for New Zealanders. Your agency's primary focus should always be on how its activities benefit its clients. But because budgets are always limited, some form of cost-benefit analysis is always required.

[24] See publicservice.govt.nz/capability-toolkit
The following bullet-point list serves as a checklist of the key steps in the performance measurement process:
The following is a reference list of useful sources on particular aspects of performance measurement for State sector agencies.

On performance measurement and outcome frameworks

On stakeholder analysis
On qualitative measurement
On futures techniques
On capability
Attribution - The extent to which an impact or outcome can be directly assigned to the activities undertaken by an agency or agencies.

Capability - The mix of powers, systems, skills, infrastructure and information resources needed, now and in the future, to produce and manage the interventions which best contribute to the outcomes that the government is seeking from an agency or sector.

Disaggregation - The process of breaking an output or outcome down into its component parts, or reporting performance separately for different groups, e.g. population groups or product lines.

Efficiency - The price of producing a unit of output ('technical efficiency'). Alternatively, it can mean the proportion of output reaching target groups ('allocative efficiency'). To make valid comparisons, the outputs or output mixes being compared must be homogeneous.

Effectiveness - The difference agencies make through the services they provide. Effectiveness focuses on the impact achieved through the delivery of one or more outputs.

Impact - What has been achieved at the intermediate outcome and outcome levels of the performance measurement framework. Impact measures are attributed to your agency's (or sector's) outputs in a credible way.

Intermediate outcome - An articulation of the effect that your agency's services and interventions are having on New Zealanders. Intermediate outcomes allow your agency or sector to determine what difference it is making through the services provided with its outputs, and to discern progress towards the achievement of outcomes.

Intervention - A term used to describe a range of actions an agency or agencies will undertake in order to deliver positive change for New Zealanders.

Intervention logic - The strategic and/or operational articulation of how one or more interventions will produce desirable outcomes for New Zealanders, including valid measures of success at the output, intermediate outcome and outcome levels of performance.
Output - The goods and services agencies deliver as part of their interventions, e.g. implementing policy, running regulatory or control systems, and delivering core services.

Outcome - The broad goals your agency or sector must achieve in order to create long-term, positive change for New Zealanders. Outcomes flow directly from ministerial requirements and priorities. An outcome can also be referred to as a 'final outcome' or 'end outcome'.

Performance story - The overall account an agency or sector is able to give about what outcomes it is striving to achieve, how it intends to achieve them, how it will measure progress, and how much progress has been made. Performance stories are articulated in the annual Statements of Intent and Annual Reports produced by agencies.

Results - A tangible statement of what has been achieved by an agency or sector at the output, intermediate outcome or outcome level.

Stakeholder - An individual or group that can affect, or is affected by, the achievement of a particular outcome or outcomes.
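The efficiency and disaggregation definitions above can be made concrete with a small worked calculation. The following Python sketch is purely illustrative; the record fields, region names and dollar figures are invented for the example and do not come from the guide. It computes a technical-efficiency measure (cost per unit of output) overall, and then disaggregated by group:

```python
# Illustrative sketch: technical efficiency (cost per unit of output)
# computed overall and disaggregated by group. All data is hypothetical.
from collections import defaultdict

def cost_per_unit(total_cost, units_delivered):
    """Technical efficiency: the price of producing one unit of output."""
    if units_delivered <= 0:
        raise ValueError("units_delivered must be positive")
    return total_cost / units_delivered

def disaggregate(records, group_key):
    """Report cost per unit separately for each group (disaggregation)."""
    totals = defaultdict(lambda: {"cost": 0.0, "units": 0})
    for record in records:
        group = record[group_key]
        totals[group]["cost"] += record["cost"]
        totals[group]["units"] += record["units"]
    return {g: cost_per_unit(t["cost"], t["units"]) for g, t in totals.items()}

# Hypothetical service-delivery records
records = [
    {"region": "North", "cost": 12000.0, "units": 300},
    {"region": "North", "cost": 8000.0, "units": 200},
    {"region": "South", "cost": 15000.0, "units": 250},
]

overall = cost_per_unit(sum(r["cost"] for r in records),
                        sum(r["units"] for r in records))
by_region = disaggregate(records, "region")
```

Comparing the disaggregated figures (here, each region's cost per unit against the overall figure) is only valid where the outputs being compared are homogeneous, as the efficiency definition above notes.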