Evaluation Criteria

Forecasting Prize: Evaluation Criteria

Evaluation Metric

After conducting an eligibility screen, USAID will review the predicted values submitted. The metric used to evaluate the models is the mean absolute scaled error (MASE) of the predictions. The error will be calculated for each prediction in the submission and then averaged across all site-level predictions for each contraceptive. The lower the score, the better. More information on calculating MASE can be found on Wikipedia.
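As a rough illustration of the metric, MASE scales a forecast's mean absolute error by the in-sample mean absolute error of a naive forecast. The sketch below assumes a naive lag-1 baseline and simple averaging; the organizers' exact aggregation across sites and products may differ.

```python
def mase(y_true, y_pred, y_train, m=1):
    """Mean absolute scaled error (illustrative sketch).

    y_true  -- observed values in the evaluation period
    y_pred  -- forecast values for the same period
    y_train -- historical (training) series used to scale the error
    m       -- seasonal period of the naive baseline (1 = previous value)
    """
    n = len(y_train)
    # Mean absolute error of the submitted forecast.
    mae_forecast = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    # In-sample MAE of the naive forecast that repeats the value m steps back.
    mae_naive = sum(abs(y_train[i] - y_train[i - m]) for i in range(m, n)) / (n - m)
    return mae_forecast / mae_naive
```

A score below 1 means the model beats the naive baseline on average; scores for each contraceptive would then be averaged across service delivery sites.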

The best performing model prize winner and second-best performing model prize winner will be awarded funding under USAID’s Innovation Incentive Award Authority. Innovation Incentive Award Authority awards are issued as a funds transfer to the winners’ bank accounts and all award monies may be used at the winning teams’ discretion.

The model which predicts future consumption, per product and service delivery site, with the highest accuracy will receive a prize award of 20,000 USD, and the model with the second highest accuracy will receive an award of 5,000 USD.

USAID retains sole and absolute discretion to declare winners and award all prizes. Decisions may not be challenged by any competitor participating in this contest; all decisions are final and not subject to review.

Field Implementation Grant EOI: Evaluation Criteria

First, USAID will conduct an eligibility screening of all organizations that expressed interest in competing for the Field Implementation Grant. Among organizations confirmed to be eligible, USAID will create a shortlist of EOIs from those that submitted the most accurate models for forecasting future consumption. The shortlisted EOIs will then be evaluated by an expert judging panel, per the criteria below. All members of the judging panel will sign Non-Disclosure Agreements (NDAs), conflict of interest forms, and statements acknowledging that they make no claim to the intellectual property developed by competitors or relevant partners.

The expert judging panel will evaluate the shortlisted solutions against the following criteria. Please note all criteria areas are equally weighted.

Understanding of the Problem (500 words): Articulation of the problem of contraceptive forecasting and overall understanding of why and how an intelligent method might be utilized, particularly in Côte d’Ivoire. This may include: key challenges of forecasting contraceptive consumption at the service delivery site level in Côte d’Ivoire; why intelligent forecast methods are relevant to addressing the key challenges in contraceptive forecasting; and how intelligent forecasting methods can be leveraged to address challenges and improve the accuracy of contraceptive forecasting.

Local Applicability (500 words): How the proposed model will address local needs and context in the Côte d’Ivoire public sector health system. This may include: the competitor’s connections to the Côte d’Ivoire health system; how their intelligent forecasting model could work as part of the system; how they intend to interface with stakeholders; how they will leverage networks and partners to inform the design, development, and implementation of the intelligent forecasting model in Côte d’Ivoire; how they will address challenges, gaps, and opportunities specific to deploying the intelligent forecasting solution in the country; and their connections to the country, which may include an in-country presence or partners.

Implementation Capacity (500 words): The capacity of the applicant to deliver the project, including details of how they will staff and manage the grant. The competitor’s response should include: their approach to staffing, partnering, managing, and implementing the grant, including how it addresses gender; the sequence of activities they intend to undertake to implement this project; and how the model will be maintained over time, keeping in mind factors such as local data science capacity for model updates, accounting for model drift, and/or addressing bugs, etc.

Past Performance (250 words): The competitor’s past experience implementing similar projects. The competitor’s response should include: past experience implementing projects of similar scope and in similar contexts; and three references who can speak to past performance and ability.

Concept Note: Evaluation Criteria

Competitors who are invited to co-creation will develop and submit concept notes. Concept notes will be reviewed and evaluated by an expert USAID review board. The detailed concept note evaluation criteria will be provided at the beginning of the co-creation effort.

Concept notes will be evaluated as follows:
  • A Red Light: The concept note will not proceed.

  • A Yellow Light: The concept note will revert to the competitors for additional clarification and co-creation. Competitors will resubmit to the review board, which will make a decision. Note that a competitor may go through multiple rounds of review and revision.

  • A Green Light: One concept note will receive a ‘Green Light’ indicating the entry merits an award.

To ensure you get up-to-the-minute details, please sign up for our email updates.

Competitions for Development

The information provided on this website is not official U.S. Government information and does not represent the views or positions of the U.S. Agency for International Development or the U.S. Government.