Running a Successful Testnet: Tips & Tricks from the Trenches

We at Chainflow have operated in crypto testnets since first joining the Cosmos Validator working group in late 2017. Since then, we've participated in many testnets through the years, starting with the original incentivized testnet, Game of Stakes. We also helped design a number of these testnets in the early years, when the concept was new and teams were still determining what makes an incentivized testnet effective. We've done this for projects like NuCypher and Akash, and helped design Solana's Tour de Sol.

In this article, we'll share what we've learned about running successful testnets. While most of the feedback applies to incentivized testnets in particular, it may also be useful for networks that don't plan to incentivize theirs.

Defining Goals & Building the Foundation

What are the goals of the testnet?

This is the first question teams should ask themselves when planning a testnet program. In practice, the goals often range from merely assumed to only loosely understood. We suggest teams start by defining the goals, writing them down, agreeing on them internally, then communicating the goals clearly to the validator participants.

Goal examples include identifying the validators that will make it into the active set and/or generating visibility for the project. Regardless of the goal details, being synchronized as a team internally and with the validator set externally is critical to a successful testnet program. Furthermore, goals can be weighted based on their importance and relative priority.

Working Backwards from the Foundation, in Phases

Once the testnet goals are set, teams should get into the nitty-gritty and identify the specific tasks that will achieve those goals.

These tasks should be linked directly to accomplishing the foundational goals. It's often helpful to break these tasks into stages or phases. Points can be assigned based on weights given to each of the goals.

There are two ways to do this:

  1. Each stage or phase can have a specific set of goals. In this case tasks are discrete, finite and confined to a specific stage or phase.
  2. Goals span multiple stages or phases. In this case tasks build upon one another across stages or phases.
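To make the weighting idea concrete, here's a minimal sketch of how points could be rolled up into a weighted score per validator. The goal names, weights, and score ranges are all hypothetical, purely for illustration; they're not drawn from any specific testnet's scoring system.

```python
# Hypothetical example: combining per-goal task points into one weighted score.
# Goal names and weights are illustrative only.
GOAL_WEIGHTS = {
    "uptime": 0.5,      # e.g. keeping a node online through a phase
    "upgrades": 0.3,    # e.g. responding to upgrade announcements on time
    "community": 0.2,   # e.g. helping other validators, filing bug reports
}

def score_validator(task_scores: dict) -> float:
    """Combine per-goal raw scores (0-100) into a single weighted total."""
    return sum(GOAL_WEIGHTS[goal] * task_scores.get(goal, 0)
               for goal in GOAL_WEIGHTS)

# Strong uptime, average upgrade response, little community activity:
total = score_validator({"uptime": 90, "upgrades": 70, "community": 20})
print(total)  # 70.0  (0.5*90 + 0.3*70 + 0.2*20)
```

Publishing the weights alongside the formula also feeds directly into the transparency recommendations later in this article: validators can recompute their own scores and confirm nothing was missed.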

Starting with the goals and working backwards gives the team confidence that what they're asking validators to do is directly linked to the testnet's desired goals.

Taking pauses between phases is also very beneficial, as it helps keep the team and validators aligned throughout the testnet program.  

Establishing Clear Expectations

There are two sets of expectations to establish when designing a testnet program.

  1. What does the team expect from validators?

Establish the level of commitment you expect to see here. Should validators treat these testnets like they would a mainnet? How quickly do you expect validators to respond to announcements? How should submissions be made? Can validators confirm receipt of submissions, and if so, how?

  2. What can validators expect from the team?

Will the team share a scoring or points system? What is the preferred way to ask questions? What level of responsiveness can validators expect from the team? How many validator slots are being competed for? What will happen to validators who don't make the cut, i.e. are there token awards for participants who don't make the mainnet set?

It's a good practice to share the answers to these questions before each stage of the testnet.

Establishing a Consistent Level of Transparency

We've seen testnet programs operate under varying degrees of transparency. The most successful have maintained a consistent level of transparency throughout the program. The least successful have promised to be very transparent and ended up not being transparent at all. The middle scenario is when the level of transparency shifts throughout the testnet.

Many validators have been around for a while and have participated in many testnets. Their BS antennas are up, and they can usually identify pretty quickly when they're being led on or otherwise gamed in service of the protocol. Some, or even many, may tolerate this to avoid losing out on potential rewards.

However, the establishment of goodwill goes a long way toward building a healthy network. How a testnet runs is often an indicator of a network's future success. Issues that go unresolved during testnets, be they technical or social, can do damage down the road and jeopardize the long-term success of the network.

Here's a sample framework for establishing transparency around the scoring system:

  1. How are scores calculated?
  2. How are submissions made?
  3. How can a validator confirm a submission has been received and considered?
  4. Will scores be shared publicly and/or privately (individually)?

In addition to being transparent, consistency is critical, as exceptions and deviations can undermine trust very quickly. Validators are adept at spotting these exceptions, deviations and preferential treatment.

Establishing Clear & Consistent Communication

Running a successful testnet program can't happen without consistent and clear communication with the participants. This point goes hand-in-hand with the transparency recommendation. In the early days, we encouraged teams to use a single announcements channel. Fortunately, this has become common practice today.

Choose a single channel for messages you want to be sure validators see, such as critical updates. Don't switch between a Discord announcements channel and an @all role in a separate Discord channel for announcements.

Like most people in crypto, validators are in more chat channels than they can realistically follow 24x7 and respond to immediately. However, validators do have processes in place that allow them to monitor and respond quickly to a smaller number of channels.

Use a consistent messaging format where possible. For upgrade announcements, for example, include the code link, tag/hash, target upgrade time, any instructions that are new or different from past upgrades, as well as the window validators are expected to respond within.
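One hypothetical way to enforce that consistency is to render every announcement from a single template, so no field is ever forgotten. The field names and example values below are illustrative assumptions, not a format any particular team uses:

```python
# Hypothetical upgrade-announcement template; field names are illustrative.
ANNOUNCEMENT_TEMPLATE = """\
UPGRADE: {name}
Code: {code_link} (tag: {tag})
Target upgrade time: {target_time}
Respond within: {response_window}
Notes: {notes}"""

def format_announcement(name, code_link, tag, target_time,
                        response_window, notes="none"):
    """Render an upgrade announcement so every one reads the same way."""
    return ANNOUNCEMENT_TEMPLATE.format(
        name=name, code_link=code_link, tag=tag,
        target_time=target_time, response_window=response_window,
        notes=notes)

print(format_announcement(
    name="v0.2.0",
    code_link="https://github.com/example/chain/releases/v0.2.0",
    tag="a1b2c3d",
    target_time="2021-06-01 14:00 UTC",
    response_window="24 hours",
))
```

Because every announcement has the same shape, validators can also build simple alerting around it, which reinforces the point below about monitoring a small number of channels.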

More generally, keep the level of communication consistent when at all possible. It becomes difficult when validators have a hard time understanding:

  1. How to contact a core team member and who to contact

What does the communication flow from validator to core team look like? Are core team members open to receiving DMs? Should validators tag them? Or should a moderator be contacted, who can then route the request to the right team member?

  2. What the expected response time from the core team member is

If core team members can't be expected to be available consistently, that's fine. Knowing this makes it clear to validators that they should be getting and giving help among themselves, rather than waiting for a response from a core team member.

Conclusion

There's a lot that goes into planning and executing a successful testnet program. The level of effort is often underestimated. Planning at the beginning goes a long way toward setting a foundation, keeping all parties synchronized and ultimately achieving the testnet program's goals.


Have thoughts or feedback? Join Chainflow on Discord or follow us on Twitter/X!