Whether you're a newcomer or a seasoned Metaculus forecaster, this tournament has some unique features we want to highlight.
Practice forecasting with our warmup questions before the contest begins February 3rd. Warmup questions won't affect your contest ranking.
This contest consists of a single set of 50 forecasting questions, with two separate competitions and leaderboards: the Open Competition, which ranks all competitors, and the Undergraduate Competition, which ranks undergraduate competitors only.
Each of the two competitions features a $12,500 prize pool, for a total of $25,000. The top 125 forecasters on each leaderboard are eligible to win prizes. Undergraduate forecasters are eligible to be ranked and receive prizes on both leaderboards. Prizes will be awarded after the contest ends and identities are verified.
Unlike last year's competition, there is no $50 minimum prize this year. That means top performers are likely to receive a larger share of the prize pool, and fewer than 125 prizes may be paid out in each category.
The Undergraduate Competition and leaderboard are open only to undergraduate students currently enrolled at a college or university. If you enroll in the Undergraduate Competition, you will automatically be included in the Open Competition as well. This is your opportunity to stand out to the Bridgewater recruiting team and compete for a share of the $12,500 Undergraduate Prize Pool!
Alongside the Undergraduate Competition and Leaderboard, there is an Open Competition and Leaderboard in which anyone can compete, with an associated $12,500 Open Prize Pool. Experienced and new forecasters alike will have the chance to demonstrate their skills and become eligible for a potential meeting with the Bridgewater recruitment team.
New to forecasting? No problem. Here's how to get ahead:
Whether you're a beginner or looking to brush up on your skills, here are some resources:
The foundation of Bridgewater is our mission to deeply understand how the world works and translate that understanding into unique market insights. That aligns closely with the mission of Metaculus, a platform for forecasting and modeling future events and trends. We hope this partnership reaches more people who see the power of using data and research to understand the world around us. Join now to showcase your skills and take part in this exciting tournament!
Don't hesitate to reach out to us at contact@metaculus.com. We read and respond to every email!
Examples: "Who will be Japan's next Prime Minister?", "Will NASA's Artemis 2 launch be successful?", …
To predict, enter the probability you give the outcome as a number between 0.1% and 99.9%. On the question page, drag the prediction slider until it matches your probability and click "Predict". You can also use the arrows to fine-tune your probability.
Multiple choice questions ask about more than two (Yes/No) possibilities. Predicting works the same way, except your probabilities across the options should sum to 100%. After entering your probabilities, select auto-sum to ensure they do.
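The auto-sum step can be thought of as simple proportional rescaling. Here is a minimal sketch of that idea (the platform's exact behavior may differ, e.g. around rounding):

```python
def normalize(probs):
    """Scale a list of probabilities so they sum to 1.0 (i.e., 100%)."""
    total = sum(probs)
    return [p / total for p in probs]

# Three options entered as 20%, 30%, and 40% only sum to 90%,
# so each is scaled up proportionally to restore a valid total.
print(normalize([0.20, 0.30, 0.40]))
```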
The higher the probability you place on the correct outcome, the better (more positive) your score will be. Give the correct outcome a low probability and you'll receive a bad (negative) score. Under Metaculus scoring, you'll get the best score by predicting what you think the actual probability is, rather than trying to "game" the scoring.
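To see why honest probabilities beat "gaming" the system, consider a simplified log-scoring sketch. This is an illustration only, not the exact Metaculus scoring rule (which also involves comparison baselines and averaging over time); the 50% reference point here is an assumption for the example:

```python
import math

def log_score(p_yes, resolved_yes):
    """Simplified log score relative to an uninformed 50% forecast:
    positive if your probability beats 50% on the correct outcome,
    negative if you put a low probability on what actually happened."""
    p = p_yes if resolved_yes else 1.0 - p_yes
    return 100 * math.log2(p / 0.5)

# A confident correct forecast scores well...
print(round(log_score(0.90, True), 1))   # → 84.8
# ...but the same confidence on the wrong outcome is punished harder.
print(round(log_score(0.90, False), 1))  # → -232.2
```

Note the asymmetry: overconfidence on the wrong outcome loses far more than it gains on the right one, which is exactly why forecasting your true belief maximizes expected score.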
Examples: "When will humans land on Mars?", "What will Germany's GDP growth be in 2025?", …
To predict, provide a distribution, representing how likely you think each outcome in a range is. On the question page, drag the slider to change the shape of your bell curve, and focus your prediction on values you think are likely.
If you want to distribute your prediction in more than one section of the range, you can add independent bell curves to build your distribution and assign a weight to each of them.
The higher your distribution is on the value that ultimately occurs, the better your score. The lower your distribution on the actual value, the worse your score. To get the best score, make your distribution reflect how likely each possible value actually is.
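The weighted bell curves described above form what is known as a mixture distribution. A minimal sketch, assuming normal (bell curve) components; the actual distributions Metaculus uses under the hood may differ:

```python
import math

def normal_pdf(x, mean, std):
    """Density of a normal (bell curve) distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def mixture_density(x, components):
    """Weighted sum of bell curves: components = [(weight, mean, std), ...].
    The weights should sum to 1."""
    return sum(w * normal_pdf(x, m, s) for w, m, s in components)

# Hypothetical forecast: 70% weight on a curve centered at 2.0,
# 30% weight on a second curve centered at 5.0.
forecast = [(0.7, 2.0, 0.5), (0.3, 5.0, 1.0)]

# The forecast assigns more density near 2.0 than near 4.0, so a
# resolution at 2.0 would score better than one at 4.0.
print(mixture_density(2.0, forecast) > mixture_density(4.0, forecast))  # → True
```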
This year the tournament will use question weighting for question groups. We’re using weighting to reduce the effect of correlation on scores and leaderboard placement. There is likely to be correlation in some sets of similar questions, resulting in scores on those questions containing less signal than if the questions were uncorrelated. We want to assess forecasting skill, including on some sets of correlated questions, and weighting allows us to do that while reducing the impact of correlation on tournament placement.
A question group is a single question containing multiple subquestions. Question groups differ from multiple choice questions because their subquestions aren't mutually exclusive. For this tournament, each subquestion within a question group is weighted so that the group's total weight sums to 1.0.
For example, if we’re asking what the USD exchange rate will be for a number of currencies on a certain date, you’ll provide forecasts for each listed option. If there are three options, we’ll set the weight for each of them to 33%, so that the total weight sums to ~100%. That means the entire group will be weighted equivalently to one forecasting question. Above we’ve referred to there being 50 questions in the tournament, and we’re using that as shorthand to mean a total weight of 50. Some of those 50 will be question groups, where the group will be worth one question but be broken up into lower-weighted subquestions. You can see the question weight used for each subquestion under the three-dot menu by each subquestion, as shown in the image below.
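The group contribution described above amounts to a weighted sum of subquestion scores. A hypothetical sketch with made-up scores (the function and values here are illustrative, not the platform's actual implementation):

```python
def weighted_contribution(subquestion_scores, weights):
    """Combine subquestion scores using their question weights.
    For a three-option group, each weight is ~1/3, so the whole
    group counts like a single question on the leaderboard."""
    return sum(s * w for s, w in zip(subquestion_scores, weights))

# Hypothetical scores on three exchange-rate subquestions:
scores = [30.0, -10.0, 20.0]
weights = [1 / 3, 1 / 3, 1 / 3]
print(round(weighted_contribution(scores, weights), 1))  # → 13.3
```

With equal weights, this is just the average of the three subquestion scores, so no single correlated cluster can dominate your tournament placement.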