The prediction market primitive

May 06, 2024
The prediction market space is seeing new developments, with AI emerging as a core component. AI can resolve disputes, deliver targeted event recommendations, manage liquidity, and supply comprehensive context for predictions. It can also reduce risk and improve price stability through LMSR-style AMM models and reinforcement learning agents.

tl;dr

  • People have long predicted that prediction markets would take off, and ongoing UX improvements have primed the segment to do so
  • But to scale to billions of users, we need “something new” beyond ongoing UX improvements, and that is AIs as the linchpin in the machine
  • An AI quartet of content creators, event recommenders, liquidity allocators, and information aggregators can catalyze massive new activity in this space
  • Integrating these AIs into the current prediction market framework can enable prediction markets at a microscopic scale, making them personally attractive and relevant
  • The prediction market primitive paves the way for Tinder-like prediction market apps, embedding predictive trading experiences into our everyday digital existence

Every decision starts with a prediction. Consider pondering Bitcoin’s potential: “Will purchasing Bitcoin now yield a doubled investment by year’s end?” If the “yes” prospect is judged even marginally more likely than “no,” it would be economically rational to buy Bitcoin in the absence of superior alternatives.

But why stop at Bitcoin? Imagine we could architect markets rooted in predictions around all kinds of events, like who will be the next US president or which country will win the World Cup. Here, it is not assets but the forecasts themselves that are traded.

Predictions shape markets, markets validate our predictions

Prediction markets have been called the “holy grail of epistemics technology” by Vitalik.

Vitalik has a knack for seeing big things before others. So he’s a good source for frontrunning narratives. He proposed the idea of an AMM on Ethereum seven years ago in a blog post. “Another guy” named Hayden Adams took the call to action and started building it, on a $60K grant. Two years later, Uniswap was born.

If Vitalik’s blog posts can initiate the creation of $100+ billion industries, we should probably pay attention to them. For example, Vitalik was excited about using prediction markets in governance back in 2014 (a radical form of governance known as “futarchy”), and now we have Meta DAO doing just that, with big VC firms like Pantera taking part in it.

But it’s his more recent discussions around prediction markets + AI that we want to focus on, as we are starting to see the beginnings of something big here.

Prediction markets are primed to take flight

The market-leading prediction market right now is Polymarket, owing to its ongoing UX improvements and its expanding range of event categories and offerings.

Data source: Dune

Monthly volume recently hit all-time highs and is likely to go higher with the US presidential election in November this year (Polymarket activity is US-centric).

There is further reason to believe that prediction markets could take off this year. Besides crypto markets reaching all-time highs in 2024, this is also one of the biggest election years in history: eight of the world’s ten most populous nations (the US, India, Russia, Mexico, Brazil, Bangladesh, Indonesia, and Pakistan) are going to the polls. The 2024 Summer Olympics are also coming up in Paris.

But given that monthly volumes are still in the tens of millions when they could reach hundreds of millions, let’s consider some of the limitations of current prediction markets:

  • Centralized control over event creation
  • A lack of incentives for community content creators
  • Insufficient personalization
  • Predominantly US-centric, overlooking substantial international opportunities

But we need “something new”

We believe that thing is AI.

We need AIs as players in the game. We expect that soon it will be common to see AIs (bots) participating alongside human agents in prediction markets. We can already see live demos of this in Omen and PredX, among likely many others to enter this scene. More on this later.

We need AIs as arbiters of the game. Although relatively rare, there can be instances where dispute resolution is important and necessary in a prediction market. For example, in a presidential election, the results may be very close and allegations of voting irregularities may surface. So, while the prediction market may close favoring Candidate A, the official electoral commission may declare Candidate B as the winner. Those betting on Candidate A will argue against the outcome due to alleged voting irregularities while those betting on Candidate B will argue that the electoral commission decision reflects the “true” outcome. A lot of money may be on the line. Who’s right?

Answering this question poses several challenges:

  • Players may not trust human arbiters due to their biases
  • Human arbitration can be slow and expensive
  • DAO-based prediction resolutions are vulnerable to Sybil attacks

To address this, prediction markets can use multi-round dispute systems a la Kleros, except with AIs instead of humans resolving disputes in the earlier rounds; humans get involved only in the rare cases where a dispute reaches deadlock. Players can trust AIs to be impartial, as fabricating enough training data to bias them is unfeasible. AI arbiters also work faster and at much lower cost. xMarkets is building in this direction.
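To make the escalation idea concrete, here is a minimal sketch of a multi-round dispute flow. The round structure, quorum thresholds, and arbiter stubs are hypothetical illustrations, not a description of how xMarkets or Kleros actually implement dispute resolution.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# An "arbiter" maps a dispute description to a verdict ("A", "B") or None
# when it cannot decide. Early rounds would call independent AI models;
# the final fallback is a human panel.
Arbiter = Callable[[str], Optional[str]]

@dataclass
class Round:
    arbiters: list   # arbiters polled in this round
    quorum: float    # fraction of the round's arbiters that must agree

def resolve(dispute: str, rounds: list, human_panel: Arbiter) -> str:
    """Escalate through AI rounds; fall back to humans on deadlock."""
    for rnd in rounds:
        votes = [a(dispute) for a in rnd.arbiters]
        votes = [v for v in votes if v is not None]
        if not votes:
            continue
        top = max(set(votes), key=votes.count)
        if votes.count(top) / len(rnd.arbiters) >= rnd.quorum:
            return top           # settled without human involvement
    return human_panel(dispute)  # rare deadlock: humans decide

# Usage with stub arbiters standing in for real AI model calls:
ai = lambda verdict: (lambda dispute: verdict)
rounds = [Round([ai("A"), ai("A"), ai("B")], quorum=0.66),
          Round([ai("A")] * 5 + [ai("B")] * 2, quorum=0.8)]
print(resolve("Candidate A vs B, alleged irregularities", rounds, ai("A")))
```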

AIs create desire

For prediction markets to really take off, they need to generate enough interest to push people past the psychological threshold of actually trading prediction assets. It may not take much to do this for general topics a lot of people care about, like who will win a presidential election or the Super Bowl. But including only general topics severely limits potential liquidity. Ideally, a prediction market could tap into the liquidity around specific events of high interest to niche audiences. This is how targeted advertising works, and we all know targeted advertising works.

To achieve this, prediction markets need to solve four general challenges:

  1. Event Supply: Highly relevant event supply is key. To grab the attention of a niche yet dedicated audience, event creators must deeply understand their community’s interests to drive participation and volumes.
  2. Event Demand: Demand needs to be high within the particular targeted community, taking into account their demographic and psychographic idiosyncrasies.
  3. Event Liquidity: There must be enough opinion diversity and dynamics within the targeted community to drive sufficient liquidity to retain both sides of the market and minimize slippage.
  4. Information Aggregation: Players should have easy access to enough information to feel confident placing a bet. This could include background analysis, relevant historical data, and expert opinions.

Now, let’s see how AI could address each of these challenges:

  1. Content Creator AIs: Content creator AIs (“copilots”) assist in the creation of content beyond human capacity or motivation. AIs suggest timely and relevant event topics by analyzing trends from news, social media, and financial data. Content creators, whether human or AI, will be rewarded for generating engaging content that keeps their communities lively. Community feedback enhances the AI’s understanding of its communities, making it an iteratively improving content creation engine that bonds content creators and their audiences.
  2. Event Recommendation AIs: Event recommender AIs tailor event suggestions to users based on their interests, trading history, and specific needs, focusing on recommending events ripe for debate and trading opportunities. They adapt to users’ behaviors across different regions, cultural contexts, and times. The end goal is a highly targeted feed of events, free from the personally irrelevant content that clutters prediction market platforms today.
  3. Liquidity Allocator AIs: Liquidity allocator AIs tackle counterparty liquidity risk by optimizing liquidity injections to narrow the bid-ask spread. To minimize that risk, they can implement the logarithmic market scoring rule (LMSR), an AMM model designed for low-liquidity prediction markets (see the sketch after this list). They could also incorporate reinforcement learning agents that dynamically adjust liquidity depth, protocol fees, and the bonding curve. These AIs manage event liquidity from a general LP pool, rewarding contributions with accrued fee revenue or platform tokens as further incentive. All in all, this means preemptive adaptation to market changes, reduced slippage, and better price stability.
  4. Information Aggregation AIs: These AIs harness compute over a wide array of indicators (e.g., on-chain data, historical data, news, sentiment indicators) for players to comprehensively understand the event. From there, the information aggregation AIs can offer well-rounded projections, turning prediction markets into the go-to source for informed decision making and alpha. Projects can choose to token-gate access to the insights gleaned by information aggregation AIs, because in prediction markets, knowledge = money.
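To ground the LMSR reference in item 3, here is a minimal sketch of a logarithmic market scoring rule market maker for a binary market. The liquidity parameter b sets market depth and bounds the market maker's worst-case loss (b·ln 2 for two outcomes); the parameter values and trades below are arbitrary illustrations, not anyone's production implementation.

```python
import math

class LMSR:
    """Logarithmic market scoring rule for a binary prediction market."""

    def __init__(self, b: float):
        self.b = b           # liquidity parameter: higher b = deeper market
        self.q = [0.0, 0.0]  # outstanding shares of [YES, NO]

    def cost(self, q) -> float:
        # C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i: int) -> float:
        # p_i = exp(q_i / b) / sum_j exp(q_j / b), interpretable as a probability
        denom = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i: int, shares: float) -> float:
        # The trader pays the change in the cost function
        new_q = list(self.q)
        new_q[i] += shares
        paid = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return paid

mm = LMSR(b=100)              # an AI allocator could tune b dynamically
print(round(mm.price(0), 3))  # 0.5 before any trades
paid = mm.buy(0, 50)          # buy 50 YES shares
print(round(paid, 2), round(mm.price(0), 3))  # cost paid, new YES price > 0.5
```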

Now, let’s see what this looks like when you piece it together. Below, you can see the main components and workings of a prediction market without AIs (in black) and with AIs (in blue).

In the non-AI model, content creators (usually the platform itself) arbitrarily create events, supply liquidity (initially subsidized by their treasuries), save the events to an event database, and promote them in bulk to human players. This is how Polymarket currently works, and it’s working quite well.

But, I think it can get a lot better.

In the AI model, content creator copilot AIs support content creators in creating and promoting events inside targeted general or niche communities. Liquidity provision is supported by liquidity allocator AIs that optimize liquidity injections over time by learning from player order books and using external data from oracles and other data vendors. Event recommendation AIs use stored events in the event database and wallet transaction history to tailor event recommendations to personal interests. Finally, information aggregation AIs collect information from data vendors to provide educational and contextual information to human players and to inform AI players’ prediction decisions. The end game? A fine-tuned system that enables prediction markets to work at a microscopic scale.
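As a rough sketch of how these components could interface in code, the Python stubs below restate the flow just described. Every interface name and method signature here is hypothetical; the sketch is only meant to make the division of labor explicit, not to describe any existing platform.

```python
from typing import Protocol

class EventDB(Protocol):
    def save(self, event: dict) -> None: ...
    def all_events(self) -> list[dict]: ...

class ContentCreatorAI(Protocol):
    def draft_events(self, community: str, trends: list[str]) -> list[dict]: ...

class LiquidityAllocatorAI(Protocol):
    def seed_liquidity(self, event: dict, lp_pool: float) -> float: ...

class EventRecommenderAI(Protocol):
    def rank(self, events: list[dict], wallet_history: list[dict]) -> list[dict]: ...

class InfoAggregatorAI(Protocol):
    def brief(self, event: dict) -> str: ...

def publish(creator: ContentCreatorAI, allocator: LiquidityAllocatorAI,
            db: EventDB, community: str, trends: list[str], lp_pool: float) -> None:
    """Creation side: draft community events, seed them with liquidity, store them."""
    for event in creator.draft_events(community, trends):
        event["seed_liquidity"] = allocator.seed_liquidity(event, lp_pool)
        db.save(event)

def personal_feed(recommender: EventRecommenderAI, aggregator: InfoAggregatorAI,
                  db: EventDB, wallet_history: list[dict]) -> list[dict]:
    """Player side: rank stored events for one user and attach contextual briefs."""
    ranked = recommender.rank(db.all_events(), wallet_history)
    return [{**event, "brief": aggregator.brief(event)} for event in ranked]
```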

Prediction markets at this scale would enable a different user experience, one that is more like Tinder or TikTok. As the events are highly targeted, they could be fed to you in a feed a la TikTok and — even with today’s wallet and blockchain technology — players could place bets by swiping left or right a la Tinder. Imagine that. People making micro-bets on the events they personally care about while they’re commuting to work or school.

Supercharging information aggregation

Among the most notoriously difficult outcomes to predict are asset prices, so let’s focus here to see how AIs perform when pushing at the edges of what is possible in prediction markets.

Using AI to predict asset prices is actively being explored in academic circles. Machine learning (ML) techniques like linear models, random forests, and support vector machines have been shown to predict cryptocurrency prices with better accuracy than human judges. These models have also found that behavioral indicators like Google search intensity help explain price variance.

IBM research explored artificial prediction markets for commodity price prediction, offering a compelling case study on integrating AI with prediction markets. Their research highlights the potential of artificial prediction markets to aggregate diverse and evolving real-time information sources for making better predictions even in complex real-world problems like predicting the prices of volatile commodities not traded on online exchanges (e.g., ethylene, hydrocarbons). The reason AI agents can outperform standard ML models here is that they learn over time, by themselves — aka agency.

Another study comparing random forest regression and LSTM for predicting Bitcoin’s next-day price showed that the former achieved lower prediction error. It also showcased the power of AI in information aggregation breadth, far beyond ordinary human capacity, modeling 47 variables across eight categories: (a) Bitcoin price variables; (b) technical indicators of Bitcoin; (c) other token prices; (d) commodities; (e) market indices; (f) foreign exchange; (g) public attention; and (h) day-of-week dummy variables. The most important predictors varied over time, from US stock market indices, the oil price, and the Ethereum price in 2015–2018 to the Ethereum price and a Japanese stock market index in 2018–2022. It also found that for Bitcoin’s next-day price, random forest regression performs best with a one-day lag.
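For a sense of what this lagged-feature setup looks like in code, here is a minimal sketch that fits a random forest on day-t features to predict the day-t+1 Bitcoin price. The features and data are synthetic placeholders standing in for the study's 47-variable dataset.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500  # synthetic daily observations standing in for real market data

# Placeholder features drawn from a few of the study's categories (prices, a
# stock index, a search-attention proxy); real work would use the full set.
df = pd.DataFrame({
    "btc_price":        np.cumsum(rng.normal(0, 100, n)) + 30_000,
    "eth_price":        np.cumsum(rng.normal(0, 10, n)) + 2_000,
    "stock_index":      np.cumsum(rng.normal(0, 5, n)) + 4_000,
    "search_intensity": rng.uniform(0, 100, n),
})

# Day-t features predict the day-t+1 BTC price, i.e., a one-day lag.
target = df["btc_price"].shift(-1).rename("y")
data = pd.concat([df, target], axis=1).dropna()

split = int(len(data) * 0.8)
train, test = data.iloc[:split], data.iloc[split:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train.drop(columns="y"), train["y"])
preds = model.predict(test.drop(columns="y"))

mae = np.mean(np.abs(preds - test["y"].to_numpy()))
print(f"test MAE: {mae:.1f}")
# Feature importances hint at which predictors matter most in a given period.
print(dict(zip(df.columns, model.feature_importances_.round(3))))
```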

Relationship between model error magnitude and lag

We can infer that in some popular prediction markets, there is simply too little time for a busy human to aggregate, analyze, and interpret sufficiently large amounts of data to make good predictions. Or, the problems are simply too complex. But AIs can do this.

AI token recommendation

Pond is building a decentralized foundational model of crypto, which has been applied to AI-generated token recommendations derived from on-chain behaviors. Currently, their large graph neural network (GNN) uses on-chain behavioral data to estimate the alpha probabilities of various tokens. GNNs are a class of AI models designed specifically to process data represented as graphs, making them useful where data is interconnected with a relational structure, such as the p2p transactional networks of blockchains. Dither is another token recommendation AI; it offers a token-gated Telegram alert bot and takes a time-series modeling approach to token recommendation.
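To illustrate what a GNN over an on-chain transaction graph looks like, here is a minimal two-layer graph convolution (using PyTorch Geometric) that assigns each node a score in (0, 1). The architecture, features, and toy graph are placeholders for illustration, not Pond's actual model.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class TokenScorer(torch.nn.Module):
    """Two-layer GCN mapping node features to an 'alpha probability' per node."""
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return torch.sigmoid(self.conv2(h, edge_index)).squeeze(-1)

# Toy graph: 4 nodes (wallets/tokens) with 8-dim behavioral features and a few
# directed edges standing in for on-chain transfers. Real graphs are far larger.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 0],
                           [1, 2, 3, 0, 2]], dtype=torch.long)
graph = Data(x=x, edge_index=edge_index)

model = TokenScorer(in_dim=8)
scores = model(graph.x, graph.edge_index)  # one untrained score per node
print(scores.detach().tolist())
```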

Solving the thin markets problem

One of the main challenges facing prediction markets is that the markets are too thin to attract enough players and volume. But there is a major difference between the prediction markets of the 2010s and those of the 2020s: the possibility of ubiquitous participation by AIs, a point Vitalik himself has made.

In addition, it’s possible to improve the automated market maker (AMM) models underlying prediction markets. For example, an analysis of over 2 million transactions on Polymarket identified problems with liquidity provisioning in converging prediction markets that use the traditional constant product AMM (x*y=k), including:

  1. Convergence and liquidity removal. As prediction markets converge (i.e., as the outcome becomes more certain), LPs are incentivized to remove their liquidity. This is rational behavior because the risk of holding “losing” tokens increases. For example, in a market converging toward “yes,” the “no” tokens become less valuable (i.e., impermanent loss), posing a risk to LPs who might end up with worthless tokens if they don’t sell in advance (a toy simulation of this dynamic appears below).
  2. Bias and inaccuracy. This reduction in liquidity can lead to less accuracy and more bias as prediction markets converge. Specifically, in the volume-weighted price range of 0.2 to 0.8, ‘no’ tokens are often underpriced and ‘yes’ tokens are often overpriced.

Source: Kapp-Schwoerer (2023)
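As a toy illustration of point 1 above, the simulation below repeatedly swaps NO for YES against an x*y=k pool as traders pile into a converging “yes” market. The pool, and hence the LP, drifts toward holding mostly the soon-to-be-worthless NO token, which is exactly the incentive to withdraw. The numbers and the two-outcome pricing convention (P(yes) read from relative reserves) are illustrative assumptions, not figures from the cited analysis.

```python
def swap_no_for_yes(yes_reserve: float, no_reserve: float, no_in: float):
    """Swap NO tokens into an x*y=k pool and receive YES tokens out."""
    k = yes_reserve * no_reserve
    new_no = no_reserve + no_in
    new_yes = k / new_no
    yes_out = yes_reserve - new_yes
    return new_yes, new_no, yes_out

yes, no = 1000.0, 1000.0  # LP seeds a 50/50 pool (implied P(yes) = 0.5)
for step in range(5):
    yes, no, _ = swap_no_for_yes(yes, no, no_in=500)  # traders keep buying YES
    p_yes = no / (yes + no)  # common two-outcome convention for the implied price
    print(f"step {step}: pool holds {yes:.0f} YES / {no:.0f} NO, "
          f"implied P(yes) = {p_yes:.2f}")

# As P(yes) -> 1, the pool (i.e., the LP) is left holding mostly NO tokens,
# which expire worthless if YES resolves. Hence the rush to pull liquidity.
```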

To address these issues, the authors propose a “smooth liquid market maker” (SLMM) model and demonstrate that it can increase volumes and accuracy in converging prediction markets. It does this by introducing a concentration function into the model (a la Uniswap v3) in which LPs provide a liquidity position that is only active within specific price intervals. The result is reduced risk exposure, ensuring that the number of valuable tokens (e.g., ‘yes’ tokens in a market converging to a ‘yes’ outcome) held by LPs does not converge to zero as prices adjust, unlike in the constant product AMM.

LP-trader tradeoffs

There is a balance that must be struck when choosing a concentrated liquidity AMM variant like the SLMM for converging prediction markets. In trying to reduce risk for LPs, you end up disincentivizing some trading activity.

Specifically, while concentrated liquidity can make it less likely that LPs lose out as the market converges on a sure outcome (thus reducing premature withdrawal), it may also reduce opportunities to profit on small price changes (e.g., a move from $0.70 to $0.75) due to increased slippage, especially for large orders. The direct consequence is that traders’ potential profit margins are squeezed. For instance, if they expect a small price move from $0.70 to $0.75, slippage may limit the capital they can effectively deploy to capture the expected upside. Looking forward, it will be important to trial various adjustments to the tradeoff term in these market maker formulas to find the sweet spot.
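To put rough numbers on that squeeze, here is a back-of-the-envelope calculation against a two-outcome constant-product pool priced at $0.70. The pool depths are made up, and a real SLMM prices trades differently, but the arithmetic of how the average fill price eats into a $0.70-to-$0.75 view is the same.

```python
def buy_yes(yes_reserve: float, no_reserve: float, collateral: float) -> float:
    """YES received when buying with collateral from a two-outcome
    fixed-product market maker: collateral mints complete sets, then
    enough YES is paid out to restore the constant product."""
    k = yes_reserve * no_reserve
    return yes_reserve + collateral - k / (no_reserve + collateral)

for label, yes, no in [("thin pool", 300.0, 700.0), ("deep pool", 3000.0, 7000.0)]:
    spot = no / (yes + no)   # 0.70 in both pools
    order = 50.0             # USDC spent on YES
    got = buy_yes(yes, no, order)
    avg_fill = order / got
    profit = got * 0.75 - order  # value if the price later reaches $0.75
    frictionless = order / spot * 0.75 - order
    print(f"{label}: spot {spot:.2f}, avg fill {avg_fill:.3f}, "
          f"profit if right {profit:.2f} USDC (frictionless: {frictionless:.2f})")
```

In the thin pool the average fill is roughly $0.714 and the profit on a correct $0.75 call shrinks from about 3.6 to 2.5 USDC; in the deeper pool the fill stays near $0.70 and most of the margin survives.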

Conclusion

The prediction market primitive is a powerful one. Of course, like any other crypto primitive, it faces challenges but we are confident that they will be overcome. As they are gradually overcome, we can expect to see this primitive reused to answer all sorts of questions in a wide variety of digital contexts. With advancements in targeting and liquidity solutions, we can expect the development of niche prediction markets. For example, take X (formerly Twitter) users:

  • Will X introduce a Premium++ or equivalent by the end of the year?
  • Will X make the edit tweet feature available to all users by Q3?
  • Will X report an increase in daily active users in the next quarterly report?
  • Will X’s advertising revenue increase or decrease in the next quarter?
  • Will X announce new major partnerships with content creators by the end of year?
  • Will X release a blockchain or cryptocurrency-related feature by Q3?

Interestingly, these questions don’t need to stay confined to standalone prediction market websites. They could be integrated directly into X or other platforms via browser extensions. We may start to see micro-prediction markets pop up regularly in our everyday online experiences, enriching ordinary browsing with speculative trading opportunities.

I intentionally wrote some of the questions above and asked ChatGPT to write the others. Which did I write and which did the content creator AI write? If it’s hard to tell, that’s because ChatGPT is already a very good content creator AI. So are the information aggregation AIs and recommendation engines built by other Big Tech companies (look at the ads Google and Instagram feed you). While matching the performance of these models will take work and time, they demonstrate the feasibility of these AI categories. The main open questions, which lack clear precedent, concern liquidity allocator AIs, AI players, and the development of self-improvement and goal-directedness in AIs: the evolution from basic machine learning to verifiable AI agents.

If you’re building in these spaces or this post resonates with you, do reach out!

