Below are Frequently Asked Questions (and answers!) about medals. The general FAQ is here.
Medals reward Metaculus users for excellence in forecasting accuracy, insightful comment writing, and engaging question writing.
Medals are awarded based on a user's placement in the Leaderboards. There are separate leaderboards for each medal category (Peer Accuracy, Baseline Accuracy, Comments, and Question Writing), and each leaderboard is further separated into time periods. Medals are also awarded for placement in each Tournament's leaderboard.
A medal's tier (gold, silver, or bronze) is based on a user's rank relative to other users, with only the top 1% earning gold medals.
To ensure no one gets an unfair advantage, only public content (questions, comments, tournaments) counts toward medals. If you are invited to a private tournament, your activity there will not count toward any medal.
Medals appear in the Leaderboards and in user profiles.
The Baseline Accuracy medals reward accurate predictions on many questions.
Users are ranked by the sum of their Baseline scores over all questions in the Time Period.
The Peer Accuracy medals reward accurate predictions compared to others, and do not require forecasting a large number of questions.
Forecasters are ranked by the sum of their Peer scores, divided by the sum of their Coverages over all questions in the Time Period. This creates a weighted average, where each prediction is counted proportionally to how long it was standing.
If the forecaster has a total coverage below 30 in a particular time period (e.g. they predicted 20 questions with 100% coverage, or 50 questions with 50% coverage), then their coverage is treated as 30. This makes it unlikely that a user wins a medal by getting lucky on a single question.
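For concreteness, here is a minimal Python sketch of the two ranking formulas described above; the function and variable names are ours, for illustration, not part of Metaculus:

```python
def baseline_ranking_score(baseline_scores: list[float]) -> float:
    # Baseline medals: rank by the plain sum of Baseline scores
    # over all questions in the Time Period.
    return sum(baseline_scores)

def peer_ranking_score(peer_scores: list[float], coverages: list[float]) -> float:
    # Peer medals: coverage-weighted average. Total coverage is floored
    # at 30, so strong scores on only a few questions are diluted and a
    # single lucky question can't win a medal.
    return sum(peer_scores) / max(sum(coverages), 30)
```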
Before 2024, the Peer Accuracy calculation was slightly different: a forecaster's score was the simple average of their Peer scores, without taking Coverage into account. This caused some incentive problems; see here for details. The initial handicap was also 40 instead of the current 30.
Tournament medals are awarded based on a user's rank on a tournament leaderboard. The top 1% get gold, the next 1% silver, and the following 3% bronze.
The three Beginner Tournaments (1, 2, 3) will not award medals, since that would be unfair to veteran forecasters who were actively discouraged from participating.
A Comments medal is awarded for writing valuable comments, with a balance between quantity and quality.
Users are ranked by the h-index of upvotes on their comments made during the Time Period.
The Question Writing medals reward writing engaging questions, with a balance between quantity and quality.
Users are ranked by the h-index of the number of forecasters who predicted on their authored questions in the Time Period. Because there are few questions but many forecasters, the number of forecasters is divided by 10 before being used in the h-index.
All co-authors on a question receive full credit, i.e. they are treated the same as if they had authored the question alone.
Additionally, a single question may contribute to medals over many years, not just the year it was written. If a question receives predictions from 200 unique forecasters every year, then the author receives credit for those 200 forecasters every year.
Comments and Question Writing medals are awarded annually, based on the number of upvotes on your comments and the number of forecasters on your questions in a given calendar year.
For example, if you wrote 20 long-term questions in 2016 that each attracted 200 forecasters in every calendar year then your score for Question Writing would be 20 in every year after 2016. Even though you didn't write any questions in 2017, the engagement that your questions attracted in 2017 makes you eligible for a 2017 medal. Said another way, a great long-term question can contribute to many medals.
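As a sketch of how this plays out, the following Python snippet (illustrative only, not Metaculus code) computes the integer h-index on the scaled forecaster counts and reproduces the score of 20 from the example above:

```python
def h_index(values: list[float]) -> int:
    # Largest h such that at least h values are each >= h.
    vals = sorted(values, reverse=True)
    h = 0
    while h < len(vals) and vals[h] >= h + 1:
        h += 1
    return h

# 20 questions, each attracting 200 forecasters in a given calendar year:
# each question contributes 200 / 10 = 20, so the h-index is 20.
yearly_forecasters = [200] * 20
print(h_index([n / 10 for n in yearly_forecasters]))  # 20
```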
Time Periods for Accuracy medals serve two main purposes. They ensure a fair starting line each January 1, at which point long-time and new forecasters are on equal footing. They also group questions with similar durations together, making it easier to separate long-term from short-term forecasting skill.
A Time Period for the Baseline and Peer medals consists of a Duration (1, 2, 5, 10… years), a start year, and an end year. The start date for a time period is January 1 of the start year, and the end date is December 31 of the end year. So, a 5-year medal covering 2016–2020 has a start date of Jan 1, 2016 and an end date of Dec 31, 2020.
The Time Period determines which questions are included in a medal calculation:
Following the rules above, almost all questions are assigned to their Time Period before forecasting begins. On rare occasions a question will fail to resolve before the end of the 100-day buffer; in that case it is automatically assigned to the next higher Duration in which it fits.
Note: If a question closes early, it remains in its originally assigned time period. This is important to ensure that an optimistic forecaster does not gain an advantage. For example, imagine ten 5-year questions that each either resolve Yes this week (with 50% probability) or resolve No after the full 5 years. An optimist who predicted 99% on all ten looks misleadingly good next week: the five questions that resolved early all resolved Yes. After the full 5 years they correctly look very bad: of the ten questions they predicted 99% on, only five resolved Yes. Keeping questions in their initial time period ensures that optimists don't get undue early credit.
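To put numbers on this, here is a small illustration. It assumes, for simplicity, a log-based score of 100·log2(2p) for a Yes resolution and 100·log2(2(1−p)) for No; the exact scoring rule differs in detail, but the moral is the same under any proper score:

```python
import math

def score(p: float, resolved_yes: bool) -> float:
    # Simplified log score: positive when beating a 50% baseline,
    # heavily negative when a confident forecast is wrong.
    return 100 * math.log2(2 * p if resolved_yes else 2 * (1 - p))

optimist = 0.99
early = [score(optimist, True)] * 5            # the 5 questions that resolved Yes this week
final = early + [score(optimist, False)] * 5   # ...plus the 5 that resolve No after 5 years

print(round(sum(early)))  # ~ +493: looks excellent after one week
print(round(sum(final)))  # ~ -2329: looks very bad after the full 5 years
```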
An h-index is a metric commonly used in academia to measure the quantity and quality of a researcher's publications. If a researcher has an h-index of N, it means they have published at least N papers that each individually have at least N citations.
We use h-indexes for the Comments medals (number of upvotes per comment) and for the Question Writing medals (tens of forecasters per question).
Traditional h-indexes are integers. To break ties, we use a fractional h-index, described below.
The fractional h-index is like the standard h-index, but with an added fractional part that measures progress toward the next higher h-index value.
Imagine that you have exactly 2 comments with exactly 2 upvotes. Your h-index is therefore 2. To reach an h-index of 3, you need to receive 1 more upvote on each of your 2 existing comments (a total of 2 more upvotes) and you need to write a new comment that receives 3 upvotes. In total you need 5 more upvotes to reach an h-index of 3.
Imagine one of your comments receives 1 new upvote, and you write a comment that receives 2 new upvotes. Your fractional h-index is then:
2 + (1 + 2) / 5 = 2.6
In general, the formula is:

$$h_\text{frac} = h + \frac{\sum_{i=1}^{h+1} \min(v_i,\, h+1) - h^2}{2h + 1}$$

where $h$ is your integer h-index, and $v_i$ is the number of upvotes on your $i$-th most upvoted comment.
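Putting the pieces together, a direct Python implementation of the fractional h-index (ours, for illustration) reproduces the 2.6 from the worked example:

```python
def fractional_h_index(upvotes: list[int]) -> float:
    """Integer h-index plus fractional progress toward the next value."""
    votes = sorted(upvotes, reverse=True)
    # Integer h-index: the largest h such that the top h comments
    # each have at least h upvotes.
    h = 0
    while h < len(votes) and votes[h] >= h + 1:
        h += 1
    # Progress toward h+1: each of the top h+1 comments counts, capped at
    # h+1 upvotes. Pad with a zero in case the (h+1)-th comment doesn't
    # exist yet. At exactly h, this sum is h^2; reaching h+1 requires
    # (h+1)^2, so the gap is 2h+1.
    top = (votes + [0])[: h + 1]
    progress = sum(min(v, h + 1) for v in top) - h * h
    return h + progress / (2 * h + 1)

# Worked example from above: two comments at 2 upvotes each (h = 2),
# then one gains an upvote (-> 3) and a new comment earns 2 upvotes.
print(fractional_h_index([3, 2, 2]))  # 2.6
```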
Medals are awarded based on a user's rank within a Category for a Time Period, or in a Tournament:

- Gold: top 1% of users
- Silver: next 1% (top 2%)
- Bronze: next 3% (top 5%)
The denominators for the percentages are the number of users who have made any contribution toward that medal: everyone who predicted at least one qualifying question (for the accuracy medals), wrote at least one comment (for Comments), or authored at least one question (for Question Writing).
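A rough sketch of how rank and denominator translate into a medal tier follows; the rounding and tie-breaking details here are our assumptions, not confirmed behavior:

```python
def medal_tier(rank: int, n_contributors: int) -> str | None:
    # rank is the 1-based position on the leaderboard; n_contributors is
    # the denominator described above. Top 1% gold, next 1% silver,
    # next 3% bronze.
    percentile = rank / n_contributors
    if percentile <= 0.01:
        return "gold"
    if percentile <= 0.02:
        return "silver"
    if percentile <= 0.05:
        return "bronze"
    return None
```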
To make the leaderboards more interesting and fair, we also enforce the following rules:
In general, yes: we designed the medal system so that once a medal is awarded, it never goes away.
However, when we discover an error (an incorrectly resolved question, say, or a bug in the code), we plan to correct it, and medals could shift, hopefully only very slightly. We believe this will be a rare occurrence, but it may happen. The spirit of Metaculus is to be accurate.