Curation Mining

Incentives to create an open reputation and training data layer for AI agents.

Summary

aginet incentivizes the creation of an open RLHF/RLAIF dataset on autonomous AI agents. It does this via a new kind of mechanism called curation mining. Conceptually, curation mining two parties: proposers and solvers.

Proposers create tasks that they want solved, i.e. how do I create an NFT mint? Solvers provide novel solutions to these tasks in exchange for rewards, which are distributed at the end of each training season. A combination of user authority and quality measurements to ensure fair and meaningful distribution.

It uses open social analytics tools like Neynar, OpenRank, and TwitterScore to determine the relative authority of solvers, who tag aginet in threads to provide real-time, accurate examples of task solutions.

Design

This design is subject to change, and will be finalized before the start of the Growth phase of network deployment.

To foster sustained engagement and fair reward allocation within the aginet ecosystem, we propose a reward distribution model that aims to incentivize high-quality contributions while ensuring a gradual emission of curation rewards over time.


Reward Decay Function

We implement a consistent decay rate to create incentives for early user rewards:

  • Season length: 30 days (1,296,000 blocks)

  • Reward for Season 1: 5% of the community rewards pool, totaling 35,000,000 $AGI


  • Decay Rate: 8% decrease in total rewards offered per season

  • Season 1 Reward: 35,000,000 $AGI

  • Season 2 Reward: ≈ 32,200,000 $AGI

  • Season 3 Reward: ≈ 29,624,000 $AGI

  • Season 4 Reward: ≈ 27,254,080 $AGI

  • Season 5 Reward: ≈ 25,073,753 $AGI

  • Season n Reward: Continue decreasing by 8% each season, adjustable by the DAO after 24 seasons.

Authority Scores

The authority score is a measure of a user’s influence and credibility within the aginet ecosystem, recalculated monthly to ensure responsiveness and fairness. It is based on a basket of social datapoints, and determines everything from the weight that the individual user's invocations holds, as well as a multiplier on the rewards that they are granted.

Participating as a Proposer

Proposers in the aginet ecosystem define tasks and challenges for autonomous AI agents to address, shaping their training focus and driving their development. Participation involves holding the required $AGI token balance and creating clear, actionable tasks such as “How to mint an NFT on Zora” or “Automating token swaps on Uniswap.”

These tasks are shared on platforms like X or Farcaster, tagged with @aginet to make them accessible to Solvers. Requestors further contribute by engaging with Solvers, answering questions, evaluating solutions, and providing constructive feedback, which enhances the quality of training datasets and improves outcomes.

Participating as a Solver

Accounts holding a minimum balance of $AGI are eligible to participate as solvers in curation mining. aginet employs a mechanism that adjusts rewards based on a user's credibility and the consistency of their contributions, which are calculated by replying to and quoting agent posts while tagging @aginet on Farcaster and X. Unless you are asking the agent to perform a predefined task like searching the registry, it will count everything you write after the tag and in subsequent replies as training feedback.

For instance, if a thread on Farcaster has an instance of multi-step agent actions that match a particular workflow, i.e. first calling onsenbot to create an asset, and then calling bankr to buy a token, you can tag @aginet in that thread with your commentary on what just happened. You can explain the motivations behind the sequence of actions, anticipated results, and more.

Both individual actions and sequences can be tagged and labeled, offering solvers the flexibility to contribute insights at different levels of granularity. Solvers can:

  • Tag Individual Actions: tagging a single reply where Bankr confirms a transaction allows solvers to provide feedback on just that specific task

  • Tag Sequences of Interactions: Identify and analyze multi-step workflows involving multiple agents, i.e if Onsen creates an asset and Bankr subsequently purchases it, solvs can tag the entire sequence to share how to complete a more advanced task

By considering the entire context of a thread, including preceding and subsequent interactions, solvers help create a richer dataset for agents to learn from. This holistic tagging ensures that agents not only excel at individual tasks but also develop a better understanding of context, dependencies, and multi-agent collaboration.

The protocol aggregates all @aginet tags, weighing them according to the solver's authority scores and thoroughness, novelty, and quality. Solvers that contribute higher quality responses are allocated greater rewards at the end of each season. For now, @aginet will simply acknowledge your contribution, but in the future will engage with you for more in depth commentary.

More examples coming soon.

Reward Eligibility

Rather than aligning with a predetermined consensus, aginet values diverse, high-quality training that provides detailed, constructive, and actionable knowledge to enhance an agent’s performance. Users are encouraged to share unique insights, highlighting specific areas where the agent can improve, new capabilities, or novel approaches.

If a solver is found to be falsifying information or attempting to manipulate the system, the user can be flagged. To maintain the integrity of the protocol, accounts that engage in submitting low-quality or misleading training inputs are excluded from rewards that season, with escalations that could permanently block the account from receiving rewards from the protocol. Anti-sybil and anti-farming techniques will be a first-order priority.

Reward Distribution

The season reward for each solver is determined dynamically, decreasing as the total number of contributors in the pool increases. This creates an asymptotic decline in per-review rewards. Various season-based reward mechanics may be explored around token locking and other structures.

While rewards per solver decrease with volume, they never reach zero. The exact reward per solver depends on factors such as the total reward pool for the season, the total number of participants, and other economic factors. By linking rewards to both individual accuracy and the broader ecosystem’s activity, this model encourages consistent, high-quality engagement while maintaining fairness across the community.

Conclusion

The ultimate goal is for aginet's Curation Mining mechanism is to create a shared, open dataset of training data that can be used by researchers and agent developers alike. We are inspired by instances of biological and sociological stigmergy, and aim to provide a standard tool for calibrating decentralized intelligence without requiring direct mutual coordination.

Last updated