Progress of WP2: Advancing the Econometric Model, Analysis of 97 Projects
To elevate the methodology of the PHEMAC project, the Climate Change and Environment Program at the Issam Fares Institute for Public Policy and International Affairs at the American University of Beirut has pursued its goal of defining, implementing, testing, and analyzing the building blocks required to perform impact analysis. The aim is to validate the impact and efficiency of project performance by benchmarking programs and studying their effectiveness and efficiency, cataloging the validated information gathered, and disseminating key best practices for replication across the EU-MPC region. The elevated methodology embeds the previous methodology in a broader framework of analysis.
With the aim of developing an econometric model that captures the impact of each project, metrics and key performance indicators (KPIs) were developed to extract key information. The Participatory Impact Pathways Analysis (PIPA) approach was used to determine the assessment tools and methodology needed to extract key information from the defined KPI outputs, and PIPA was performed using complementary technologies while studying interrelationships, critical success factors, and resistance factors.
Within the econometric framework created to assess the impact and success of each funded project, the generated information was used as input to a composite indicator built on five main criteria: commercialization potential, intellectual property rights protection and utilization, entrepreneurial and cultural activities, magnitude of cooperation, and technological advancement.
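As a minimal illustration of how such a composite indicator can be assembled, the sketch below aggregates the five criteria into one value. The criterion identifiers, the equal default weights, and the simple weighted-average aggregation are assumptions for illustration, not the project's actual formula.

```python
# Illustrative composite indicator over the five input criteria named above.
# Names, weights, and the weighted-average rule are placeholder assumptions.

CRITERIA = [
    "commercialization_potential",
    "ipr_protection_and_utilization",
    "entrepreneurial_and_cultural_activities",
    "magnitude_of_cooperation",
    "technological_advancement",
]

def composite_indicator(scores: dict, weights: dict = None) -> float:
    """Aggregate per-criterion scores (e.g. on a 1-5 scale) into one value."""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}  # equal weights by default
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

example = dict(zip(CRITERIA, [4, 3, 2, 5, 3]))
print(round(composite_indicator(example), 2))  # -> 3.4
```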
Drawing on the ENI-CBC report, the framework was broadened to account for additional factors and produce a more comprehensive analysis of each submitted project. Each project is classified into one of five typologies: policy development, replication of good practice, commercialization, applied research, and theoretical research. Each project is then assigned a score for every one of the previously selected 22 key performance indicators.
The indicator dealbreakers were removed, as they no longer carry significance within the new methodology. In line with the initially identified methodology, the 22 indicators are grouped into four project functions. To reflect the standards of different countries and fields of expertise, experts were asked to provide opinion-based benchmarks; these benchmarks are averaged, and metric values are measured against the averages.
The four functions of the methodology are designed to assess the impact of each project and are broken down into 22 individual metrics. Metric values are compared to the updated expert-based benchmarks, and the values themselves are drawn from a metric survey through which iHub users can enter data from their projects.
A scoring formula was developed to score each individual metric on a scale of 1 to 5. The four project functions are knowledge creation, intellectual commercialization, technology commercialization, and capitalization. Additionally, each KPI was assigned a relevance score by experts based on the project type (RIA, IA, MSCA, CSA). Every project also receives an expert-assigned weighted score for reference, and a final score is calculated for each project.
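The sketch below traces this pipeline end to end, from averaging expert benchmarks, to banding each metric into a 1-5 score, to a relevance-weighted final score. The band cut-offs, example metric values, and relevance weights are illustrative assumptions; only the overall structure follows the description above.

```python
import numpy as np

# Hedged sketch of the scoring pipeline: metric value vs. averaged expert
# benchmark -> 1-5 score -> relevance-weighted final project score.
# Cut-offs and weights below are placeholders, not the published formula.

def average_benchmark(expert_benchmarks: list) -> float:
    """Average the benchmarks submitted by experts for one metric."""
    return float(np.mean(expert_benchmarks))

def metric_score(value: float, benchmark: float) -> int:
    """Map a metric value to a 1-5 score relative to its benchmark.
    The band edges here are arbitrary placeholders."""
    ratio = value / benchmark if benchmark else 0.0
    bands = [0.25, 0.5, 0.75, 1.0]  # assumed cut-offs
    return 1 + sum(ratio >= b for b in bands)  # yields 1..5

def final_score(values, benchmarks, relevance_weights) -> float:
    """Relevance-weighted mean of the per-metric 1-5 scores."""
    scores = [metric_score(v, b) for v, b in zip(values, benchmarks)]
    return float(np.average(scores, weights=np.asarray(relevance_weights)))

# Toy example with three of the 22 metrics for a hypothetical RIA project:
values = [12, 0.8, 3]
benchmarks = [average_benchmark([10, 14]),
              average_benchmark([1.0]),
              average_benchmark([4, 2])]
weights = [0.5, 0.3, 0.2]  # assumed expert relevance for this project type
print(round(final_score(values, benchmarks, weights), 2))  # -> 4.7
```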
The results of the model-based metrics and weighted scores are presented in two output forms, textual and graphical, organized by project function. To further elevate the assessment criteria, the benchmark against which each KPI is graded is the average of the expert-based benchmarks submitted. To assess the performance of different types of projects, the final project scores (N=97; mean=2.119; s.d.=0.871) were regressed on the four types of funding schemes.
The model shows a good R² goodness of fit: the adjusted R² indicates that 71% of the total variation in project score is explained by the type of scheme. To better understand the model's implications, it was used to derive the expected score for each scheme, along with the attendant standard errors, and a pairwise comparison of the schemes was then conducted.
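To make this setup concrete, the sketch below regresses synthetic project scores on the funding scheme, treated as a categorical predictor, using statsmodels. The generated data, seed, and assumed scheme means are placeholders rather than the study's actual figures; only the modelling structure mirrors the analysis described.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 97 projects with scores driven by their scheme.
rng = np.random.default_rng(0)
schemes = rng.choice(["RIA", "IA", "MSCA", "CSA"], size=97)
base = {"RIA": 1.7, "IA": 1.7, "MSCA": 2.4, "CSA": 3.1}  # assumed means
scores = np.array([base[s] for s in schemes]) + rng.normal(0, 0.5, 97)

df = pd.DataFrame({"score": scores, "scheme": schemes})
model = smf.ols("score ~ C(scheme)", data=df).fit()
print(model.rsquared_adj)  # adjusted R^2, cf. the reported 0.71

# Expected score and standard error per scheme, for pairwise comparison.
pred = model.get_prediction(pd.DataFrame({"scheme": ["RIA", "IA", "MSCA", "CSA"]}))
print(pred.summary_frame()[["mean", "mean_se"]])
```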
The results of this analysis show that the highest score is registered for CSA, which is significantly higher than all the others. MSCA comes second, while IA and RIA come last, with scores that are not significantly different from each other.
This updated methodology provides a more comprehensive look at projects and their details, allowing for fairer grading. It also enables assessments that yield constructive feedback through tools within the PHEMAC framework, and classifying projects by type allows the feedback on each project's performance to be tailored accordingly.
The updated methodology produces constructive output in graphical and textual form, organized by project function, and allows this output to be grounded in experts' analysis.
Image courtesy: RDNE Stock project