Research Article | Peer-Reviewed

Unpacking the Technical Determinants of Performance Information Use in Nonprofits: The Role of Training, Employee Involvement, and System Quality

Received: 3 July 2025     Accepted: 18 July 2025     Published: 30 July 2025
Abstract

In response to growing demands for accountability and increasing competition within and across sectors, nonprofit organizations have adopted a variety of performance measurement approaches. However, measuring performance alone does not guarantee the success of measurement initiatives, and their intended benefits depend on the effective use of performance information. Given the variation in performance information use across nonprofits, it is essential to understand the factors that facilitate its use. Drawing on survey data from 134 California-based nonprofits (a 16.7% response rate from a randomly selected sample of 802 organizations), this study examines how technical aspects of performance measurement systems influence performance information use, both directly and indirectly. The findings underscore the critical role of targeted training programs that build staff capacity to collect, analyze, and apply performance information effectively. Equipping employees with these skills fosters data-informed decision-making. Additionally, the study highlights the importance of involving staff in the design and implementation of the performance measurement system. When employees help ensure that the system remains relevant, up to date, and integrated into daily operations, its overall quality improves. Ultimately, the research identifies high-quality performance measurement systems as a key driver of performance information use. When performance data are generated through well-designed systems, nonprofit managers are more likely to trust and use the information effectively. These insights offer practical guidance for nonprofit leaders aiming to strengthen their performance measurement efforts.

Published in Journal of Public Policy and Administration (Volume 9, Issue 3)
DOI 10.11648/j.jppa.20250903.14
Page(s) 153-162
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Performance Measurement, Training, Employee Involvement, System Quality, Nonprofits

1. Introduction
Due to the growing emphasis on accountability in nonprofit funding and the increasing competition both within and across sectors, Performance Measurement (PM) has received heightened attention in the nonprofit sector. Since demonstrating results is essential for sustaining funding and maintaining public trust, many nonprofits have developed diverse PM methods. As Thomson notes, while adopting a PM system is commendable, the more critical objective is the effective use of Performance Information (PI) to guide decision-making. Nevertheless, the use of PI in the nonprofit sector varies significantly.
In this context, researchers have examined the extent to which PI is utilized and the factors that influence its use. These include individual-level factors such as leadership style, governance preference, public service motivation, and job experience, organizational-level factors such as culture, reform, training, professionalization, and capacity, and external factors such as stakeholder engagement, political support, institutional pressure, and organizational relations.
Despite growing scholarly interest in the use of PI, there remains limited understanding of how technical aspects of the PM process influence the use of PI. In addition, much of the existing research has centered on government organizations. Given the distinct operational contexts and characteristics of nonprofit organizations, findings from public sector studies may not be fully applicable to the nonprofit sector. To address this gap, the present study adopts a nonprofit-specific perspective, examining how technical factors related to PM processes shape the use of PI. Ultimately, this study aims to enhance practical knowledge of how nonprofits can more effectively leverage performance data to inform decision-making, strengthen accountability, and realize the intended benefits of PM.
This article is organized as follows. It begins by presenting a series of hypotheses concerning the technical factors that affect PI use. Next, it provides a detailed explanation of the data collection methods. This is followed by a discussion of the empirical findings, along with the study’s theoretical and practical implications and its limitations. The article concludes with a summary of its key contributions.
2. Hypotheses
This study focuses on three technical factors directly related to the PM process that may influence PI use: 1) PM training, 2) PM system quality, and 3) employee involvement in the PM system. These factors represent aspects that organizations can directly control and improve, making them valuable targets for intervention. By concentrating on these elements, this study aims to provide actionable insights for nonprofit organizations seeking to enhance their PM practices. Additionally, these factors are firmly grounded in established scholarly foundations and supported by empirical evidence.
2.1. PM Training
Using PI requires specialized skills and knowledge for accurate interpretation, integration into daily operations, and communication with stakeholders. However, the literature identifies a lack of these professional skills and knowledge as a significant barrier to effective PM. For instance, Yang and Hsieh note that many government organizations do not have the capacity to analyze performance data. This issue might be even more pronounced in the nonprofit sector, traditionally characterized as a world of amateurs. Specifically, California nonprofits have reported a lack of evaluation capacity as a major challenge to effective PM.
Training can address these issues by educating nonprofit staff and leadership about the importance of PM, providing guidance on the effective implementation of measurement systems, and improving evaluation capacity. Furthermore, training provides organizational members with insights into the benefits and values of PM, thereby alleviating the uncertainty, fear, and cynicism associated with its use. When organizational members comprehend the values, principles, and tools of PM through adequate training, they are more likely to commit to it, thereby increasing the likelihood of using PI. Thus, nonprofit managers and employees who are professionally trained in PM are more likely to use PI than those who are less trained. Conversely, without such training, they may not recognize its benefits, leading to a lack of attention to the value of PI. Hence, this study hypothesizes that:
Hypothesis 1: PM training is positively related to PI use.
2.2. PM System Quality
This study also proposes that PM training influences PI use indirectly by enhancing the quality of the PM system. When managers and employees lack adequate PM training, they may fail to recognize its importance, benefits, and criteria. Such a gap in understanding can lead to insufficient attention to essential components of a well-designed PM system, such as specific and measurable indicators and a clear linkage between progress and outcomes. Conversely, providing comprehensive PM training equips organizational members with the knowledge and skills needed to develop high-quality measurement systems. Through training, staff become better able to select appropriate performance indicators, align performance metrics with strategic goals, and iteratively refine measurement processes. This informed engagement significantly improves the quality of the PM system.
Empirical studies support this view. For instance, Yang and Hsieh observed that training positively impacted the range of performance measures, a critical component of a well-designed PM system. Similarly, Cavalluzzo and Ittner found a positive link between the provision of PM training and the effective implementation of PM systems. Lastly, Teeroovengadum et al. emphasized that organizations that invest more in staff training are more likely to possess strong and comprehensive PM systems. Therefore, it is reasonable to expect that:
Hypothesis 2: PM training is positively related to PM system quality.
In the literature, the quality of the PM system has frequently been highlighted as a critical factor. When the PM system is inadequately designed, decision-makers are unable to trust the information it produces, diminishing its value and usefulness. Joyce and Tompkins also note that a primary reason public managers do not use PI is skepticism toward its quality. Conversely, a high-quality PM system enhances the credibility of PI, enabling decision-makers to utilize it for diverse purposes. A rigorous PM system can address essential questions, such as the efficiency of organizational activities, the outcomes achieved, and the extent to which client or citizen expectations are met. Consequently, a robust measurement system is likely to produce reliable and valuable PI, thus increasing the likelihood of its use.
Empirical findings lend support to this argument. For instance, Yang and Hsieh observed that the adoption of a wide range of performance indicators is significantly related to the managerial effectiveness of PM. Similarly, LeRoux and Wright found that a comprehensive set of performance measures (such as workload and output measures, unit cost and efficiency measures, outcome measures, client satisfaction, external audits, and industry standards) positively impacts the effectiveness of strategic decision-making within nonprofit organizations. Additionally, Julnes and Holzer found that extensive development of performance measures has the greatest influence on implementation, that is, the actual use of performance measures in strategic planning, resource allocation, program management, monitoring, evaluation, and reporting to stakeholders. Eliuz et al. further demonstrated that the adoption of high-quality performance measures is a crucial prerequisite for achieving the intended outcomes of PM. Following this line of logic, I expect that:
Hypothesis 3: PM system quality is positively related to PI use.
2.3. Employee Involvement in PM System
The final technical factor examined in this study is employee involvement in the PM system. I propose that employee involvement has a positive effect on the quality of the PM system (Hypothesis 4), which subsequently influences PI use (Hypothesis 3). As Hypothesis 3 was addressed in the previous section, this section focuses on the rationale for Hypothesis 4.
Several researchers have emphasized the importance of a participatory approach in PM. For example, Kaplan and Norton argued that successful implementation of the Balanced Scorecard requires the active participation of lower-level employees in developing performance metrics. This bottom-up approach enables employees to identify relevant performance indicators and understand their responsibilities, fostering alignment between individual roles and organizational goals, which ultimately enhances the quality of the PM system. Similarly, Goh highlighted stakeholder engagement as a prerequisite for building an effective PM system. Lee further contended that without meaningful communication with key stakeholders, organizations may struggle to identify appropriate metrics, establish performance goals, or collect accurate data.
Involving employees in the design and ongoing refinement of PM systems offers multiple advantages that contribute to system quality. First, employees bring contextual knowledge and practical insights from their day-to-day roles, which can help ensure that selected indicators are realistic, relevant, and aligned with organizational goals and mission. Their contributions can enhance the precision of performance targets and ensure that measurement efforts reflect actual workflows and organizational priorities. Additionally, employee feedback facilitates continuous improvement of the PM system. Regular input helps organizations identify gaps, adjust outdated metrics, and adapt to changing internal or external conditions. This iterative process improves the quality of the PM system. Based on this reasoning, the present study predicts that:
Hypothesis 4: Employee involvement in the PM system is positively related to PM system quality.
3. Methods
3.1. Data Collection
This research investigates the proposed hypotheses using survey data obtained from California-based nonprofit organizations. To construct the survey sample, data were extracted from the 2019 Core Data file provided by the National Center for Charitable Statistics (NCCS). To capture organizations likely to have formalized PM processes, small nonprofits with total expenses below $250,000 were excluded. The analysis concentrated on the four most commonly represented nonprofit subsectors: arts (NTEE code A), education (B), health (E), and human services (P). Based on these parameters, 9,128 nonprofits met the inclusion criteria: 1,726 in arts, 3,277 in education, 1,576 in health, and 2,549 in human services. A power analysis was conducted to determine the appropriate sample size. Following Cohen’s guidelines, the study adopted a medium effect size (f² = .15), a significance level of .05, and a statistical power threshold of .80. Accordingly, a total of 802 organizations were randomly selected, and survey invitations were distributed via the Qualtrics platform to executive directors or presidents. Data collection occurred from September to December 2023, yielding 134 completed responses, a 16.7% response rate.
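For readers who wish to reproduce the sample-size calculation, a short Python sketch follows. The article does not name the power-analysis software used, and the assumed number of predictors (three, mirroring the three hypothesized determinants) is an illustrative choice, not a figure from the study:

```python
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.15, predictors=3, alpha=0.05, power=0.80):
    """Smallest n whose multiple-regression F-test reaches the target power.

    Noncentrality parameter: lambda = f2 * n, with numerator df = predictors
    and denominator df = n - predictors - 1 (the G*Power convention).
    """
    n = predictors + 2
    while True:
        df1, df2 = predictors, n - predictors - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)          # critical F under H0
        achieved = 1 - ncf.cdf(crit, df1, df2, f2 * n)  # power under H1
        if achieved >= power:
            return n
        n += 1

print(required_n())  # medium effect (f^2 = .15), alpha = .05, power = .80
```

With these inputs the routine returns a required sample on the order of 75-80 complete observations, consistent with inviting a much larger pool (802 organizations) in anticipation of a modest response rate.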
Given the use of a single data source, the potential for Common Method Bias (CMB) was considered. To assess this risk, I performed Harman’s one-factor test following the guidelines outlined by Podsakoff et al. Results from the exploratory factor analysis showed no evidence of a single dominant factor explaining the majority of the variance, indicating that CMB does not pose a substantial threat in this study.
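Harman’s one-factor test checks whether a single unrotated factor accounts for the majority (conventionally more than 50%) of the variance across all survey items. A minimal sketch, using the first principal component as the usual practical stand-in for an unrotated factor extraction and random data in place of the study’s (non-public) responses:

```python
import numpy as np

def harman_first_factor_share(items):
    """Proportion of total variance captured by the first principal component."""
    X = np.asarray(items, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the items
    eigvals = np.linalg.eigvalsh(np.cov(X.T))     # eigenvalues of the (near-)correlation matrix
    return eigvals.max() / eigvals.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(134, 12))                    # 12 unrelated items: no common factor
share = harman_first_factor_share(X)
print(f"first factor explains {share:.1%} of variance")  # well under the 50% red flag
```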
While online surveys often encounter low response rates, the 16.7% participation rate observed in this research warrants careful consideration. A low response rate may lead to nonresponse bias, wherein survey participants differ systematically from non-respondents in ways that could affect the validity and generalizability of the study’s findings. To assess the potential for nonresponse bias, comparisons were made between respondents, non-respondents, and the broader population. Specifically, a one-sample t-test comparing organizational age and total expenses revealed no significant differences. Furthermore, a chi-square analysis assessing the distribution across NTEE subsectors also showed no significant differences. These results support the conclusion that the respondent sample is representative of the population in terms of organizational age, total expenses, and subsector distribution.
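The two representativeness checks described above can be sketched with SciPy. All numbers below are simulated stand-ins, not the study’s data; only the subsector counts are taken from the sampling-frame figures reported earlier:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# One-sample t-test: respondents' organizational age against the population mean
population_mean_age = 38.0                       # hypothetical population value
respondent_ages = rng.normal(38.0, 20.0, 134)    # simulated respondent ages
t, p_age = stats.ttest_1samp(respondent_ages, population_mean_age)

# Chi-square test: respondent NTEE subsector counts against population proportions
pop_props = np.array([1726, 3277, 1576, 2549]) / 9128  # arts, education, health, human services
observed = rng.multinomial(134, pop_props)             # simulated respondent counts
chi2, p_sub = stats.chisquare(observed, f_exp=134 * pop_props)

print(f"age: p = {p_age:.3f}; subsector: p = {p_sub:.3f}")  # p > .05 suggests no bias
```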
Nonetheless, the possibility of unobserved differences cannot be entirely ruled out. While the present study employed a random sampling method, future research could consider using stratified sampling to better mitigate the risk of nonresponse bias. Stratified sampling ensures proportional representation across key organizational characteristics (e.g., size, subsector, geographic region). This approach would allow for more precise comparisons and enable the application of post-stratification weights to enhance the validity and generalizability of the findings.
3.2. Variable Measurement
As clearly defined constructs require a robust theoretical basis, the constructs used in this study were formulated using survey items that have been tested and validated in prior studies.
3.2.1. Dependent Variable: PI Use
PI use was assessed by asking survey participants to indicate the extent to which their organization utilized performance data for nine specific purposes:
(1) Improving the efficiency or effectiveness of the program or organization;
(2) Assessing whether the program or organizational performance is consistent with its mission;
(3) Helping determine expenditure priorities;
(4) Satisfying funder requirements;
(5) Informing the board of directors;
(6) Motivating staff and volunteers;
(7) Demonstrating legitimacy to potential donors;
(8) Promoting and marketing the program or organization;
(9) Obtaining a seal or certification for the program/organization.
Responses were rated on a five-point Likert scale ranging from 1 (not at all) to 5 (to a very great extent), and the average score across the nine items was used to construct the overall PI use scale. The scale showed acceptable internal consistency, with a Cronbach’s alpha coefficient of 0.871.
3.2.2. Predictor Variables
PM training was assessed using three survey statements:
(1) Managers receive adequate training and information about the performance measurement system;
(2) Managers can use the performance measurement tools as intended;
(3) Adequate training has been provided to ensure employees understand the performance measurement system.
Respondents rated these items on a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5). An average score across the three items was calculated to form a composite index of PM training. The internal consistency of the scale was strong, with a Cronbach’s alpha of 0.856, exceeding the commonly accepted threshold of 0.70 recommended by Carmines and Zeller.
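Cronbach’s alpha, reported for each scale in this section, can be computed directly from the raw item responses. A minimal sketch (respondents as rows, items as columns; the tiny example matrix is invented for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)    # variance of each respondent's total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly consistent items: every respondent gives all three items the same score
perfect = np.array([[5, 5, 5], [3, 3, 3], [1, 1, 1], [4, 4, 4]])
print(round(cronbach_alpha(perfect), 6))  # 1.0
```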
PM system quality was evaluated through six survey items:
(1) The program goals indicate the intended effect of the program on the population and/or its needs;
(2) The performance indicators are stated in specific and measurable terms;
(3) The performance indicators are valid measures of performance;
(4) The performance indicators effectively measure progress toward the achievement of outcomes;
(5) The performance measurement system generates reliable information;
(6) The performance measurement system is effectively designed to measure progress toward the achievement of outcomes.
Each item was rated on a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5), and an average of the six responses was calculated to generate a composite score reflecting overall PM system quality. The internal reliability of the scale was high, with a Cronbach’s alpha of 0.875, indicating strong consistency and suitability for further analysis.
Employee involvement in PM was assessed using three survey items:
(1) Employees participate in designing the performance measurement system;
(2) Employees are involved in selecting performance measures;
(3) Employees provide feedback to the performance measurement system .
Each item was measured on a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5), and the average of these responses was used to create a composite indicator of employee involvement in PM. The reliability of this scale was confirmed with a Cronbach’s alpha score of 0.866, reflecting acceptable internal consistency for research purposes.
Lastly, the present study incorporated organizational age and size as control variables. The age of each organization was calculated by subtracting its founding year from 2023, the year when the survey was administered. Additionally, organizational size was measured using total expenditure data obtained from the NCCS 2019 Core Data File.
The descriptive statistics of the study variables are displayed in Table 1.
Table 1. Descriptive Statistics.

| Variable | N | Mean | Min | Max | SD |
|---|---|---|---|---|---|
| PM Training | 126 | 3.37 | 1.33 | 5.00 | 0.88 |
| PM System Quality | 115 | 3.91 | 1.33 | 5.00 | 0.62 |
| Employee Involvement in PM | 123 | 3.31 | 1.00 | 5.00 | 1.02 |
| PI Use | 115 | 3.61 | 1.33 | 5.00 | 0.75 |
| Organizational Age | 115 | 38.02 | 2.00 | 129.00 | 20.48 |
| Total Expenses | 134 | 7,007,323 | 196,601 | 278,951,100 | 24,833,987 |

4. Results
Prior to conducting multivariate analysis, I assessed the construct validity of the research variables by applying correlation analysis and Confirmatory Factor Analysis (CFA) based on Garson’s approach. First, I examined the items within each construct, confirming at least moderate correlations (0.35-0.81) and statistical significance (p < 0.05). Furthermore, the CFA results showed that all items significantly loaded onto their designated constructs (p < 0.05), with factor loadings falling between 0.58 and 0.92. Based on these findings, the constructs demonstrated acceptable validity.
This research explored both the direct and indirect effects of variables through mediating constructs. Traditional regression methods often fail to capture these mediated pathways, which can result in incomplete conclusions. To address this, I employed Structural Equation Modeling (SEM), a statistical technique that enables the simultaneous analysis of both direct and indirect relationships between exogenous and endogenous variables. The SEM analysis was conducted using AMOS version 28.0, which applies the maximum likelihood estimation approach. All computations were based on the variance-covariance matrix. For this study, a relatively simple type of SEM (commonly referred to as path analysis) was applied.
The path analysis results are presented in Figure 1. The model explains 30.0% of the variance in PI use (R² = .300), and the findings align with the proposed hypotheses. Hypothesis 1 is supported, with a significant positive relationship between PM training and PI use (β = .240, p < .01). Hypothesis 2 is also supported, showing that PM training positively influences PM system quality (β = .253, p < .01). The results support Hypothesis 3, indicating a significant positive impact of PM system quality on PI use (β = .427, p < .01). Finally, Hypothesis 4 is supported, demonstrating that employee involvement in PM significantly contributes to PM system quality (β = .145, p < .1). Collectively, these findings highlight not only a direct effect of PM training on PI use but also reveal that both PM training and employee involvement in PM indirectly affect PI use through improvements in PM system quality.
Figure 1. Path Model for the PI Use.
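Because the hypothesized model is recursive (system quality predicted by training and involvement; PI use predicted by training and quality), its path coefficients can be estimated equation by equation with least squares. A sketch on simulated data whose true coefficients are chosen to echo, but not reproduce, the reported estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                          # large n so estimates are tight

# Simulated standardized constructs (illustrative coefficients, not the study's data)
training = rng.normal(size=n)
involvement = rng.normal(size=n)
quality = 0.25 * training + 0.15 * involvement + rng.normal(scale=0.9, size=n)
pi_use = 0.24 * training + 0.43 * quality + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept included, then dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_quality = ols(quality, training, involvement)      # recovers ~[0.25, 0.15]
b_use = ols(pi_use, training, quality)               # recovers ~[0.24, 0.43]
indirect = b_quality[0] * b_use[1]                   # training -> quality -> use
print(b_quality.round(2), b_use.round(2), round(indirect, 3))
```

The product of the two mediated slopes gives the indirect effect of training on PI use, the quantity that a regression of PI use on training alone would conflate with the direct path.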
Table 2 outlines the overall model fit indicators. The chi-square value, calculated using maximum likelihood estimation, is 0.018 with one degree of freedom. This results in a chi-square to degrees of freedom ratio of 0.018, aligning with the commonly accepted guideline that this ratio should remain under 2. Furthermore, the root mean square error of approximation (RMSEA) falls below the 0.10 cutoff suggested by Browne and Cudeck. The comparative fit index (CFI), normed fit index (NFI), and Tucker-Lewis index (TLI) all exceed 0.95, suggesting that the proposed path model demonstrates a strong fit with the observed data.
Table 2. Model Fit Statistics of the Path Model.

| Fit Measure | Value |
|---|---|
| Chi-square | 0.018 |
| Degrees of freedom | 1 |
| Probability level | 0.893 |
| Root mean square error of approximation (RMSEA) | 0.000 |
| Comparative fit index (CFI) | 1.000 |
| Normed fit index (NFI) | 1.000 |
| Tucker-Lewis index (TLI) | 1.087 |
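As a check on the fit statistics, RMSEA can be recovered from the reported chi-square, its degrees of freedom, and the sample size; it is exactly zero whenever the chi-square falls below its degrees of freedom. A sketch, assuming the 115 complete cases from Table 1 as the effective sample size:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (Steiger-Lind / Browne-Cudeck form)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Reported values: chi-square = 0.018 on 1 df; assumed effective n = 115
print(rmsea(chi2=0.018, df=1, n=115))  # 0.0, matching the reported RMSEA of 0.000
```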

5. Discussion
This study offers several practical insights for nonprofit leaders seeking to improve their PM practices. First, training programs that enhance employees' understanding of PM should be implemented to improve their ability to effectively collect, interpret, and use performance data for decision-making. Many nonprofit organizations, particularly smaller ones, struggle with limited evaluation capacity. The present study underscores the importance of building that capacity through targeted training initiatives. These programs should cover not only data collection and analysis but also the interpretation and application of PI. For instance, training initiatives could include practical exercises and real-world examples to demonstrate how PI can be used to inform decision-making. By equipping staff with the skills to interpret and use performance data, nonprofits can foster an environment where data-driven decision-making becomes standard practice, ultimately driving performance improvements.
It is also crucial to note that nonprofits often struggle to secure funding for training and capacity building, as many grants are tied to specific programs or projects with visible outputs. This funding structure might impede nonprofits' ability to develop the skills necessary for effective PM. Funders and policymakers need to recognize the broader context of nonprofit operations and support training and capacity-building initiatives that equip organizations with the necessary tools to effectively utilize performance data.
In addition, this study underscores the critical role of employee engagement in enhancing the quality of the PM system, which in turn promotes the use of PI. Practically, this suggests that nonprofits seeking effective PM should actively involve employees in the design, implementation, and refinement of PM systems. Employee participation ensures that performance indicators are contextually relevant and aligned with operational realities, increasing the credibility of performance data. Moreover, fostering a culture of inclusion and feedback not only improves PM system quality but also encourages greater ownership, accountability, and job satisfaction among staff. Nonprofit managers should prioritize training, open communication, and participative planning to harness employee insights. The present study’s findings highlight the interconnected roles of employee participation, system quality, and information use in nonprofits’ PM.
Lastly, the findings highlight the importance of investing in high-quality PM systems. When performance data are generated through well-designed systems, nonprofit managers are more likely to trust and use them for diverse purposes. To develop a high-quality PM system that generates actionable PI, nonprofit leaders should create user-friendly dashboards that facilitate easy access to and interpretation of performance data for staff at all levels. These dashboards should be designed with clear and intuitive interfaces, enabling staff to quickly grasp key performance trends and make data-driven decisions in their day-to-day operations. Additionally, developing an effective PM system should be an iterative process. This involves continuously seeking feedback from internal and external stakeholders to refine the system as the organization's needs evolve and new insights emerge from the collected data. By implementing these practical strategies, nonprofits can adapt their measurement systems to changing needs, maximize the value of the PI gathered, and ensure that performance data are effectively utilized in the decision-making process.
Although this study employed a robust methodological approach and offers valuable insights for nonprofit organizations, its limitations should be addressed to guide future research. First, this study is based on survey data with a low response rate of 16.7%. While nonresponse bias was assessed using one-sample t-tests and chi-square analyses, the small sample limits the generalizability of the findings. The geographic focus on California further constrains broader applicability. As such, the findings should be interpreted with caution, considering contextual factors such as demographics, political conditions, and environmental influences specific to the state. To strengthen the external validity and generalizability of the findings, future research should aim to include larger and more diverse samples. For instance, multi-state comparisons could offer valuable insights by capturing variation in nonprofit contexts across different regulatory, economic, and cultural environments. Such comparative designs would allow researchers to assess whether the relationships observed in this study hold across diverse state settings. In addition, the present study relied on cross-sectional data to test the hypotheses. Although this approach has been successfully utilized and validated in previous research, its inherent limitation lies in capturing data at one point in time, which prevents the analysis of the temporal order necessary for establishing causal relationships. Future studies should consider longitudinal designs, which can reveal causal dynamics over time that cross-sectional data might not capture.
6. Conclusions
Based on survey data from 134 California-based nonprofit organizations, this study investigates how PI use is influenced by three technical aspects of the PM process: PM training, employee involvement in the PM system, and PM system quality. Although these factors have been examined individually in prior research, this study is the first to analyze them collectively within a unified framework tailored to the nonprofit context. By integrating these elements, the study demonstrates that PI use is facilitated not only by discrete interventions but also by the alignment of organizational practices (e.g., training and employee engagement) with system-level factors such as PM system quality. This dynamic perspective advances existing PM research by highlighting the interdependence between capacity-building efforts and the design of PM systems. Furthermore, by utilizing SEM, the present study examines both the direct effect of PM training on the use of PI and the indirect effects of PM training and employee involvement, as mediated by the quality of the PM system. This methodological approach highlights the significance of examining intermediary mechanisms in PM research.
This study also offers important practical contributions for nonprofit leaders seeking to strengthen their PM efforts. It highlights the value of targeted training programs that build staff capacity to collect, analyze, and apply PI effectively. By equipping employees with these essential skills, nonprofits can foster data-informed decision-making. The findings also emphasize the importance of involving employees in the design and implementation of PM systems, ensuring that performance indicators are relevant, usable, and aligned with day-to-day operations. Such participation enhances system quality and ultimately promotes PI use. Finally, the study underscores the importance of high-quality PM systems as a key determinant of PI use. When performance data are produced through well-designed systems, nonprofit managers are more likely to trust the information and apply it effectively in decision-making.
Measuring performance itself does not guarantee the success of PM; its benefits are realized only when PI is effectively used. In this context, PI use is regarded as a tangible and tractable indicator of the success of PM. By gaining a deeper understanding of the factors that shape PI use, nonprofit leaders can enhance the effectiveness of PM initiatives. Ultimately, effective PM can serve as a strategic instrument for organizational learning, continuous improvement, and enhanced mission achievement.
Abbreviations

CFA: Confirmatory Factor Analysis
NCCS: National Center for Charitable Statistics
NTEE: National Taxonomy of Exempt Entities
PI: Performance Information
PM: Performance Measurement
SEM: Structural Equation Modeling

Author Contributions
Chongmyoung Lee is the sole author. The author read and approved the final manuscript.
Data Availability Statement
The data is available from the corresponding author upon reasonable request.
Funding
This work was not supported by any external funding.
Conflicts of Interest
The author declares no conflicts of interest.
References
[1] Azam, M., & Bouckaert, G. (2025). How does performance-based budgeting reform affect the extent of performance information use? An empirical study of Indonesia. International Review of Administrative Sciences, 91(2), 237-258.
[2] Ben-Michael, E., Feller, A., & Hartman, E. (2024). Multilevel calibration weighting for survey data. Political Analysis, 32(1), 65-83.
[3] Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238-246.
[4] Berman, E., & Wang, X. (2000). Performance measurement in U.S. counties: Capacity for reform. Public Administration Review, 60(5), 409-420.
[5] Botcheva, L., White, C. R., & Huffman, L. C. (2002). Learning culture and outcomes measurement practices in community agencies. American Journal of Evaluation, 23(4), 421-434.
[6] Bourdeaux, C., & Chikoto, G. (2008). Legislative influences on performance management reform. Public Administration Review, 68(2), 253-265.
[7] Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Sage Publications Inc.
[8] Carman, J. G. (2009). Nonprofits, funders, and evaluation: Accountability in action. The American Review of Public Administration, 39(4), 374-390.
[9] Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment (Vol. 17). Sage Publications.
[10] Cavalluzzo, K. S., & Ittner, C. D. (2004). Implementing performance measurement Innovations: Evidence from government. Accounting, Organizations & Society, 29(3-4), 243-267.
[11] Cepiku, D., Mastrodascio, M., & Wang, W. (2024). Factors shaping the use of performance information by public managers. Public Management Review, 1-22.
[12] Choi, Y., & Woo, H. (2022). Understanding diverse types of performance information use: evidence from an institutional isomorphism perspective. Public Management Review, 24(12), 2033-2052.
[13] Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Lawrence Earlbaum Associates.
[14] Desmidt, S., & Meyfroodt, K. (2024). Unlocking politicians’ potential: What fosters purposeful use of performance information in support of voice? Local Government Studies, 50(1), 150-173.
[15] Do Adro, F., Fernandes, C. I., Veiga, P. M., & Kraus, S. (2021). Social entrepreneurship orientation and performance in non-profit organizations. International Entrepreneurship and Management Journal, 17(4), 1591-1618.
[16] Ebrahim, A., & Rangan, V. K. (2014). What impact? A framework for measuring the scale and scope of social performance. California Management Review, 56(3), 118-141.
[17] Eckerd, A., & Moulton, S. (2011). Heterogeneous roles and heterogeneous practices: Understanding the adoption and uses of nonprofit performance evaluations. American Journal of Evaluation, 32(1), 98-117.
[18] Eliuz, S., Kapucu, N., Ustun, Y., & Demirhan, C. (2017). Predictors of an effective performance measurement system: Evidence from municipal governments in Turkey. International Journal of Public Administration, 40(4), 329-341.
[19] Garson, G. D. (2016). Validity & reliability. Statistical Associates Publishers.
[20] Gazley, B., & Nicholson-Crotty, J. (2018). What drives good governance? A structural equation model of nonprofit board performance. Nonprofit and Voluntary Sector Quarterly, 47(2), 262-285.
[21] Goh, S. C. (2012). Making performance measurement systems more effective in public sector organizations. Measuring Business Excellence, 16(1), 31-42.
[22] Gomes, P., Mendes, S. M., & Carvalho, J. (2017). Impact of PMS on organizational performance and moderating effects of context. International Journal of Productivity and Performance Management, 66(4), 517-538.
[23] Hatry, H., Lampkin, L., Morley, E., & Cowan, J. (2003). How and why nonprofits use outcome information. Washington, DC: The Urban Institute.
[24] Hwang, H., & Powell, W. W. (2009). The rationalization of charity: The influences of professionalism in the nonprofit sector. Administrative Science Quarterly, 54(2), 268-298.
[25] Jang, S., Chung, Y., & Son, H. (2023). Employee participation in performance measurement system: focusing on job satisfaction and leadership. International Journal of Productivity and Performance Management, 72(7), 2119-2134.
[26] Joyce, P. G., & Tompkins, S. (2002). Using performance information for budgeting: Clarifying the framework and investigating recent state experience. In K. Newcomer, E. Jennings, C. Broom, & A. Lomax (Eds.), Meeting the challenges of performance-oriented government (pp. 61-96). American Society for Public Administration.
[27] Julnes, P. D. L., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693-708.
[28] Kaplan, R. S., & Norton, D. P. (2001). The strategy-focused organization: How balanced scorecard companies thrive in the new business environment. Harvard Business School Press.
[29] Karakose, T., Yirci, R., & Papadakis, S. (2022). Examining the associations between COVID-19-related psychological distress, social media addiction, COVID-19-related burnout, and depression among school principals and teachers through structural equation modeling. International Journal of Environmental Research and Public Health, 19(4), 1951.
[30] Kroll, A. (2015). Drivers of performance information use: Systematic literature review and directions for future research. Public Performance & Management Review, 38(3), 459-486.
[31] Kroll, A., & Vogel, D. (2014). The PSM-leadership fit: A model of performance information use. Public Administration, 92(4), 974-991.
[32] Lee, C. (2020). Understanding the diverse purposes of performance information use in nonprofits: An empirical study of factors influencing the use of performance measures. Public Performance & Management Review, 43(1), 81-108.
[33] Lee, C. (2021). Factors influencing the credibility of performance measurement in nonprofits. International Review of Public Administration, 26(2), 156-174.
[34] Lee, C., & Clerkin, R. M. (2017). Exploring the use of outcome measures in human service nonprofits: Combining agency, institutional, and organizational capacity perspectives. Public Performance & Management Review, 40(3), 601-624.
[35] Lee, C., & Nowell, B. (2015). A framework for assessing the performance of nonprofit organizations. American Journal of Evaluation, 36(3), 299-319.
[36] LeRoux, K., & Wright, N. S. (2010). Does performance measurement improve strategic decision making? Findings from a national survey of nonprofit social service agencies. Nonprofit and Voluntary Sector Quarterly, 39(4), 571-587.
[37] Lerusse, A., & Van de Walle, S. (2022). Buying from local providers: The role of governance preferences in assessing performance information. Public Administration Review, 82(5), 835-849.
[38] Moxham, C. (2009). Performance measurement: Examining the applicability of the existing body of knowledge to nonprofit organisations. International Journal of Operations & Production Management, 29(7), 740-763.
[39] Moynihan, D. P., & Ingraham, P. W. (2004). Integrative leadership in the public sector: A model of performance-information use. Administration and Society, 36(4), 427-453.
[40] Moynihan, D. P., & Pandey, S. K. (2010). The big question for performance management: Why do managers use performance information? Journal of Public Administration Research and Theory, 20(4), 849-866.
[41] Moynihan, D. P., Pandey, S. K., & Wright, B. E. (2012). Setting the table: How transformational leadership fosters performance information use. Journal of Public Administration Research and Theory, 22(1), 143-164.
[42] Naslund, D., & Norrman, A. (2019). A performance measurement system for change initiatives: An action research study from design to evaluation. Business Process Management Journal, 25(7), 1647-1672.
[43] Ngai, S. S. Y., Cheung, C. K., Ng, Y. H., Li, Y., Chen, C., Wang, X., & Yu, E. N. H. (2025). Enhancing the Organizational Evaluation Capacity of NGOs: Results of a Quasi-Experimental Study. Research on Social Work Practice, 1-18.
[44] Pearl, J., & Verma, T. S. (1995). A theory of inferred causation. In Studies in Logic and the Foundations of Mathematics (Vol. 134, pp. 789-811). Elsevier.
[45] Pfiffner, R., Ritz, A., & Brewer, G. A. (2021). Performance information use under financial stress: How do public, nonprofit, and private organizations differ? Public Performance & Management Review, 44(1), 1-27.
[46] Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903.
[47] Poole, D. L., Nelson, J., Carnahan, S., Chepenik, N. G., & Tubiak, C. (2000). Evaluating performance measurement systems in nonprofit agencies: The program accountability quality scale (PAQS). American Journal of Evaluation, 21(1), 15-26.
[48] Scheaf, D. J., Loignon, A. C., Webb, J. W., & Heggestad, E. D. (2023). Nonresponse bias in survey-based entrepreneurship research: A review, investigation, and recommendations. Strategic Entrepreneurship Journal, 17(2), 291-321.
[49] Sharma, B., & Wanna, J. (2005). Performance measures, measurement and reporting in government organisations. International Journal of Business Performance Management, 7(3), 320-333.
[50] Tate, R. L. (2003). Performance measure certification in Maricopa County. Government Finance Review, 19(1), 6-9.
[51] Teeroovengadum, V., Nunkoo, R., & Dulloo, H. (2019). Influence of organisational factors on the effectiveness of performance management systems in the public sector. European Business Review, 31(3), 447-466.
[52] Thomson, D. E. (2010). Exploring the role of funders’ performance reporting mandates in nonprofit performance measurement. Nonprofit and Voluntary Sector Quarterly, 39(4), 611-629.
[53] Tung, A., Baird, K., & Schoch, H. P. (2011). Factors influencing the effectiveness of performance measurement systems. International Journal of Operations & Production Management, 31(12), 1287-1310.
[54] Vignieri, V., & Grippi, N. (2024). Fostering the “Performativity” of Performance Information Use by Decision-Makers through Dynamic Performance Management: Evidence from Action Research in a Local Area. Systems, 12(4), 115.
[55] Wiepking, P., & de Wit, A. (2024). Unrestricted funding and nonprofit capacities: Developing a conceptual model. Nonprofit Management and Leadership, 34(4), 801-824.
[56] Willems, J., Jegers, M., & Faulk, L. (2016). Organizational effectiveness reputation in the nonprofit sector. Public Performance & Management Review, 39(2), 454-475.
[57] Wolk, A., Dholakia, A., & Kreitz, K. (2009). Building a performance measurement system: Using data to accelerate social impact. Cambridge, MA: Root Cause.
[58] Yang, K., & Hsieh, J. Y. (2007). Managerial effectiveness of government performance measurement: testing a middle‐range model. Public Administration Review, 67(5), 861-879.
Cite This Article
  • APA Style

    Lee, C. (2025). Unpacking the Technical Determinants of Performance Information Use in Nonprofits: The Role of Training, Employee Involvement, and System Quality. Journal of Public Policy and Administration, 9(3), 153-162. https://doi.org/10.11648/j.jppa.20250903.14
