Side-chain bot activity discussion

There have been ongoing discussions in Discord around bot activity on the side chain. While any bot activity violates the Rally Terms of Service, the community shoulders the responsibility to determine a course of action here. I think it is generally accepted that bots are active on the side chain; feel free to post evidence here if anyone remains skeptical.

  1. What actions are bots taking on the side chain?
    Buying: The automated buying of new coins is allowing bot accounts to purchase new creator coins ahead of fans. The latest launch, $UBR, saw maybe 40 purchases in the first minute. How many of those purchases were from suspect accounts?
    Selling: Automated selling up to the flow-control limits is readily apparent with each new coin launch, to the point where selling on some coins continued right up to the minute the site went down for recent backend work, and then picked right back up as soon as the site was live again.

  2. Which actions are the most detrimental (highest priority to go after)? While both are a bad look, I think implementing protections to limit the bots lining up for creator coin launches will also limit the later selling behavior. Even if human “token jumpers” take the bots’ place to some extent, it will at least give true fans a better chance to compete.

  3. What measures can we take to mitigate bot activity and disincentivize further bot activity? Someone more tech-savvy should jump in here. I would think a captcha somewhere in the process might help. Extra steps in the purchase flow have their downsides, but perhaps we could implement this only for RLY conversions (vs. credit card purchases) and only on the day of launch.

We can certainly go further and take action against those accounts. A warning that an account appears to be violating the terms of service, or an account freeze would seem like reasonable places to start.

  1. What can we reasonably ask of the core team to implement? I think it would be incredibly valuable for the core team to weigh in on this issue - what they’ve seen in their backend data, what measures they think would be effective here, and why they might or might not see this as a priority. It’s likely the team has already taken action against some accounts for issues relating to fraud and bot activity, but I don’t think there have been updates to the community on this. And, to be fair, there have been requests from the community for more insight on this issue.

The creator council has discussed this issue, and some have expressed uneasiness about allowing creators to launch in an environment that seems to have a proliferation of bots being allowed first access to the creator economy. Delphi is also looking at this issue as part of their mandate under the recently approved proposal.

Please add on to this discussion, as I’m very curious what the community would like to do here, and how we can use guidance from the core team on the options available to us to then move this towards a proposal to strengthen the network.



The notion of levelling the playing field is a good approach and avoids the discussions around point of view and motivation.
Idea 1: A Captcha on the Convert to/from screen, or, even better, 2FA on “Convert to RLY” may help quite a bit

Idea 2: Enforce a reasonable wait time between transactions on converting to RLY

Normal use wouldn’t make these aggravating to humans, and they would go a good step toward preventing bots from siphoning off the value brought in by supporters.
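To make Idea 2 concrete, here’s a rough Python sketch of a per-account cooldown on “Convert to RLY”. The cooldown length and function names are placeholder assumptions for discussion, not anything Rally actually runs:

```python
import time

# Hypothetical sketch of Idea 2: a minimum wait between "Convert to RLY"
# transactions per account. COOLDOWN_SECONDS is an assumed value, not an
# actual Rally parameter.
COOLDOWN_SECONDS = 60

_last_convert = {}  # account_id -> timestamp of that account's last conversion


def can_convert(account_id, now=None):
    """Return True (and record the attempt) if the account is past its cooldown."""
    if now is None:
        now = time.time()
    last = _last_convert.get(account_id)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False
    _last_convert[account_id] = now
    return True
```

A human converting once or twice never notices the limit; a bot firing conversions every few seconds gets rejected until the window passes.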

In terms of finding and enforcing, it will be said that new accounts will be created when one is banned.
An account that engages in bot-like behavior should receive a warning. If the owner cannot provide evidence or recreate the suspected behavior through human means, enforcement follows.
The User Agent can be examined as well as IP addresses.
Yes, scripts can get around these but the dumb ones will be caught in the net.
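As a sketch of those “catch the dumb ones” heuristics: flag requests whose User-Agent looks like a common HTTP library, or where many accounts share one IP. The substrings and threshold below are illustrative assumptions, not real Rally infrastructure:

```python
from collections import defaultdict

# Illustrative heuristics only: naive bots often leave a library User-Agent,
# and bot farms often run many accounts from one IP. Thresholds are made up.
SUSPECT_AGENT_SUBSTRINGS = ("python-requests", "curl", "httpclient", "bot")
MAX_ACCOUNTS_PER_IP = 3

accounts_by_ip = defaultdict(set)


def flag_request(account_id, ip, user_agent):
    """Return a list of reasons this request looks automated (empty if none)."""
    reasons = []
    ua = (user_agent or "").lower()
    if any(s in ua for s in SUSPECT_AGENT_SUBSTRINGS):
        reasons.append("suspect user agent")
    accounts_by_ip[ip].add(account_id)
    if len(accounts_by_ip[ip]) > MAX_ACCOUNTS_PER_IP:
        reasons.append("too many accounts on one IP")
    return reasons
```

As noted above, a careful script spoofs its User-Agent and rotates IPs, so this only raises the floor; the lazy operators get caught.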


Overall I believe that the ‘Fair Launch’ system will solve about 95% of issues relating to bots when it is implemented. The problem of ‘40 purchases in the first minute’ will be eliminated since this behavior will be much less profitable after the fair launch. The issue of lots of micro-transactions selling new creator coins should also be mostly eliminated since the people doing these sell offs probably wouldn’t have bought the coins in the first place after the fair launch.

I think some of the other ideas such as Captchas and 2FA will help as well so I’m all for this. However, overall it seems like the best idea would be to wait until after the ‘Fair Launch’ is implemented then re-evaluate to see if bots or bad actors are still causing problems.


I agree that the “Fair Launch” system will be very helpful; however, it will not be available for 99% of coin launches because of the current TBC design. It will only be made available to creators with a proven and very large following. So I think there is an imperative to implement these other measures as soon as possible while evaluating further ways to protect the integrity of creator coin launches.

Here is the specific part of the TOS being violated:

  • To use any spambot, bot net or other bot, scraper or other automated means to access, collect data, damage, disrupt or interfere with uses of our Site or systems, or transmit any virus, worm, Trojan or other malware or spyware to or through the Site;
  1. Bots are also scraping the coin list. So they are buying/selling and collecting data to interfere with the uses of the site, which is a means of exchange, not a speculative instrument.

  2. Buying: New coin launches should not appear in the coin list or in the Convert to/from on Trade for the first 24 hours. They should be accessible only from the Creator page, for purchase and not sale, for perhaps the first day, and only through a link. You wouldn’t even need to do this if you implemented CC gating on day one: buying on day one would be limited to whoever the creator sends some initial coins to, so creators would control their launch through the distribution of “seed” tokens they pass out. Basically, the Creator Coin itself would be used on day one to gate access to buying it. This could be done with creator airdrops, which would also drive engagement and hype around launch; getting airdropped by the right creator would make all the difference and adds value to Creator Coins on day one as well. This would not completely remove people’s ability to use bots to buy on day one, but it would require sharing the launch day, the coin, and the precise link, and it would still give those who were legitimately trying to buy for the right reasons access.

  3. Selling: A transactional cooldown and/or an increased exit tax from CC to RLY on transactions that fall outside normal usage and are being done to circumvent flow controls, which is in itself another violation of the TOS. The tax and timer would cool down with normal usage and stack if spammed. Bot or otherwise, anyone engaging in bot-like behavior is doing it for the wrong reasons and abusing the system, which violates the TOS.

  4. 2FA: Should also be used on converts to/from Creator Coins. These are real assets and this provides an additional layer of security for transactions. This also does not restrict the free movement of CC within their own economies/ecosystems which is what it’s designed for.

  5. The actions that are the most detrimental to the broader community are those taken with advantages that a common user does not have. This divides the community into those in the “know” with access to the tools and resources, and those without. This goes back to Davey’s point about a level playing field. If you take a community that might already be skeptical about the crypto space in general and then you add bot activity, it is very damaging to the image and could also create broader issues going forward. The selling actions are the highest priority to go after because they impact Rally and the community in a negative way. It should be frictionless coming in, but when you are impacting the ecosystem in a negative way there should be safeguards and circuit breakers. Only bad actors will complain: those who are here to extract value for themselves at the cost of Rally and our communities as a whole. They are basically parasites and leeches on the system. They need to be dealt with before this becomes an even larger issue and impacts overall PR.

  6. As Davey posted, in my honest opinion 2FA on converts to/from Creator Coins could stop a lot on its own, while also providing an additional layer of security to something that is a cryptocurrency, and thus grounded in security, to begin with. Additional security is not a bad thing for transactions concerning real assets. First suspicious activity: a warning. Second: a suspension/freeze until some kind of follow-up. Third: a ban. Gating launches with creators passing out their coin to allow purchase, combined with 2FA, would basically eliminate bots.

Sidenote: Based on my observation, whoever is committing these actions is now selling or giving away the ability to do so, because the behavior is spreading and occurring on a wider scale, as is noticeable in the transactional history.
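To make point 3 above concrete, here’s a rough Python sketch of an exit tax that escalates when an account spams CC-to-RLY conversions and resets with normal usage. The base rate, step, window, and cap are all made-up parameters for discussion, not proposed values:

```python
# Illustrative only: an escalating exit tax for CC -> RLY conversions.
# All constants below are placeholder assumptions, not proposed values.
BASE_TAX = 0.01   # 1% base conversion fee
STEP = 0.05       # extra 5% per rapid-fire conversion
WINDOW = 300      # seconds; conversions closer together than this "stack"
MAX_TAX = 0.50    # hard cap on the fee


class ExitTax:
    def __init__(self):
        self.last_time = None
        self.stacks = 0

    def rate_for(self, now):
        """Tax rate for a conversion at time `now`, updating the stack count."""
        if self.last_time is not None and now - self.last_time < WINDOW:
            self.stacks += 1      # spamming: the tax ratchets up
        else:
            self.stacks = 0       # normal usage: cooldown resets the fee
        self.last_time = now
        return min(BASE_TAX + STEP * self.stacks, MAX_TAX)
```

A fan converting occasionally always pays the base rate; a bot dumping every few seconds quickly pays a punitive fee, which directly attacks the profitability of the sell-off behavior.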


Some ideas regarding the ‘Fair Launch’ topic.

1.) A Dutch auction, so nobody who gets in early has an advantage over other early adopters. I really like this idea, but it would be a major change to the tokenomics.

2.) A heavy fee that decreases over time. This should encourage people to hodl longer, and the fee could be distributed to coin holders.

The bottom line here is that the TOS is being abused. Since this is a community-led effort, we ourselves have to police and administer the rules. The issue has been identified, and based on my discussions with several creators, I believe there are some particular characters associated with the nefarious buys and sells.

I would favor these initial steps (these are suggestions; I want input on them):

  1. Consult Legal and Delphi for advice
  2. USE CAPTCHAS and 2FA…
  3. Creator Council would be the central stop for all complaints on suspicious activity.
  4. First violation: a warning issued to the abusers.
    Second time: you will be asked to politely leave.
    Third time: I would not want it to go here, but possible confiscation of the coins on the side chain, with the coins returned to the creator.
  5. Action could only be taken if two or more creators can document the activity and the bad players.

In most other TOS situations, you don’t get three strikes. You might get an opportunity to explain why your actions weren’t in violation through an appeal process, but one warning is generous. Some leniency would be understandable here, though, considering this is a shift in how the behavior is being identified.

I am trying to take the high road, but if it continues, the second and third strikes could come with frightening speed. I am a believer in second chances.

  1. I actually posted this in Discord that perhaps the community needed to set aside a treasury to retain legal services to enforce TOS (independent of Rally leadership, since it is a community TOS and would need to be community-controlled enforcement). Otherwise the TOS is an empty document that lacks teeth, and lack of enforcement of one rule encourages people to break all of them. One of the primary legitimate reasons given in Discord for not being able to enforce the TOS is who enforces them, so I believe this is the central question as to which mechanism will be used to enforce TOS in general. Delphi could also be consulted on broader solutions for addressing bad actors and behavior through infrastructure mechanics and safeguards.

  2. 2FA, I think, is the better solution, as a captcha will just lower quality of life while neither eliminating bots nor enhancing security. 2FA is also ideal, like I said, because these are financial assets at the end of the day, and anyone who complains about having additional security on their assets is probably a bad actor and doesn’t really have a strong logical counter-argument.

  3. Creator Council is a good place to start, while the community, if it wished, could vote on who they would want to comprise a specific team that oversees complaints, suspicious activity, fraud, etc. Call it an “Enforcement Team,” which could or could not be the same as the Creator Council. I think it should be a separate team long-term, as the CC will just get overwhelmed; this adds a whole new layer of work to an already busy team.

  4. We agree on three strikes. This allows for use cases and special circumstances that are not abuse, and provides recourse for honest actions, mistakes, or even, let’s say, emergency liquidations. Only repeat offenders would be impacted, which provides flexibility for the real world.

  5. Action could be taken based on verifiable activity, even with only one Creator reporting. For example, if someone is buying through an API before the launch time, that is easily verifiable and should not take more than one case to trigger a warning or action. This goes for any type of bad-actor activity that can be clearly documented. The threshold would diminish as more Creators provided evidence and the evidence broadened, but something blatant shouldn’t need more than one Creator to address.

CAPTCHA is actually better than 2FA; CAPTCHA at least requires new models to be trained every time Google upgrades its CAPTCHA every few years. 2FA can be handled easily with pre-built APIs specifically designed to make it easier. Neither will slow down a bot at all from a technology standpoint; they’ll just add extra steps for legitimate users.


Are you saying there is no metric or datapoint that can be used to identify a bot from a human?

Even without using either, people who are violating the TOS should be addressed. 2FA isn’t about stopping bots altogether so much as enhancing security, while also impacting bots and increasing the resources needed to run one.

Outside of that, if a bot is identified, how should it be addressed? What is your honest opinion, as a developer, on the best solution to make the bot experience as inefficient as possible?

Having bot detection and enforcement would address the issues while a firm long-term solution was created.

Type of service that could address our issue:


Fair launch will be available and recommended for all Creators and all TBCs.


Thanks, Kevin! Great to learn this and absolutely essential! This will totally solve this problem when made available. Look forward to seeing more on this!

I’d like to wait a bit longer for more input from the community on this thread. Any of these measures may prove surprisingly effective or perhaps ineffective at preventing some front-running bots on the creator coin listings - but at a minimum we can feel confident that we are doing even a single thing to prevent this activity while we work to provide fair launch access to all.

I’d suggest we put it to a “temperature check” vote, and see if the community wants to test any of the proposed measures. Can follow up with a community ambassador towards end of the week if the remaining comments are mostly towards exploring this further.


Is the core team already using some kind of advanced monitoring system? This would be interesting to know. There are a lot of tools out there that do anomaly detection on traffic, for example Datadog or Prometheus. Also, an interesting approach would be to reward people who find technical exploits and report them.
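The anomaly-detection idea can be illustrated with a toy example: flag any minute whose transaction count is an unusually large outlier versus the rest of the window. Real tools like Datadog or Prometheus alerting do far more (seasonality, trends, per-endpoint baselines), but the core statistical idea is this simple. The threshold below is an arbitrary choice for illustration:

```python
from statistics import mean, stdev

# Toy anomaly detector: flag minutes whose transaction count is a large
# z-score outlier. The 2.5-sigma threshold is an arbitrary illustrative
# choice; production systems use smarter, adaptive baselines.
def anomalous_minutes(counts_per_minute, threshold=2.5):
    """Return indices of minutes whose count exceeds `threshold` sigmas above the mean."""
    mu = mean(counts_per_minute)
    sigma = stdev(counts_per_minute)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts_per_minute)
            if (c - mu) / sigma > threshold]
```

A sudden burst of buys at an unannounced coin listing would show up as exactly this kind of spike, which a human could then review.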

There are multiple arguments in this thread; I will attempt to respond to them separately:

Practical Responses:

  1. CAPTCHA: In 2004 NIST advised people to use passwords with capitalization, special characters, and numbers. In 2004 this was good advice. In 2017, NIST advised that this was no longer the case; no one listened, because “everyone knew” that passwords should have capitalization, special characters, and numbers. The problem is that this advice is very easy for a computer to solve, and very taxing to a human. Likewise, CAPTCHA was invented in 1997; it was great at the time because it was very easy for a human (usually) and very taxing to a computer. The problem is that humans have stayed the same, but computers have gotten WAY better; CAPTCHAs are now trivial for computers to solve, and have become more difficult for humans. I have two points here: 1) just because something is an industry-standard practice doesn’t mean it is still viable (this is an argument against unstated premises in this discussion), and 2) sure, it only takes a few minutes to set up a CAPTCHA, but it also only takes a few minutes to set up a CAPTCHA solver (just import the right library and add a few lines of code); however, it’ll irritate the hell out of legitimate users.

  2. 2FA: 2FA for logging in makes perfect sense; it makes it significantly more difficult for a third party to pretend to be either of the first two parties. It has no place in this discussion though, as the bots are not doing that. Putting 2FA on transactions is essentially going to be like “hey bot, I see you’re making this transaction, are you cool with that?” and the bot will respond “yep”. Again, all it will do is annoy legitimate users.

  3. Third-Party Anti-Bot software: I’ve seen several of these, and all the ones I’ve come across so far have been akin to snake-oil salesmen. The problem is that to the host servers, everyone appears to be a bot. Bots, for the most part, have become good enough that software is rarely capable of telling them apart. And the interesting thing is that the only way to make bot detection better (GAN AI) also makes the bots better. The only thing I’ve found to be effective at stopping bots is for some kind of AI detection to flag any behavior it thinks is an anomaly (which is going to be probably 80% real people and 20% actual bots, though it will miss plenty of other bots), and then have each one reviewed by people. And I don’t think this will work for much longer (though I’m talking years, so it might be worthwhile in the short term).

Philosophical Responses:

  1. I’m not sure what the actual problem we’re trying to fix is here. If bots bought the coin and held it indefinitely, no one would care. So it seems to me that the problem is not actually the bots, but the fact that they are able to buy before the fans. But are they? So far as I can tell, transactions are processed first-in, first-out; if you know the time a coin is going to launch, you theoretically could buy it BEFORE a bot. Bots aren’t talking with creators about when their coin is going to launch; they find out after a ping to the coin list. These pings aren’t instantaneous, nor are they being constantly repeated (the server would crash if so). So there is a window there where humans will know that a coin has launched and bots won’t. For instance, either the first or second BTX purchase was 20,000 RLY and everyone was pissed because they thought it might be a bot. But I know for a fact it was a person, because the person told me it was them, and I double-checked their wallet because I thought they were boasting.

  2. I don’t think bots are the enemy. That’s essentially the point of APIs; so that bots can handle all of the grunt work while humans do all of the thinking. If anything, I think there should be more bots and more access for bots. Imagine a situation where someone created a system that notified all users that a new coin had just launched, and allowed you to pre-setup and set-aside a set amount of RLY to invest in any new coins, and auto-bought it for you. That is equally as fair of a system as no bots. So if the problem is that “it is not fair”, it becomes clear that if the problem can be solved by removing bots or by adding bots, then bots are not the root of the issue.

  3. If the problem is not fundamentally bots, but rather this is actually just a reframing of the old complaint that a bunch of users buying the coin at launch and then selling as fast as flow control allows looks bad, then yes. I agree. It looks bad. But as I’ve mentioned in several places, I think it’s an inconvenience and has already been mostly solved. So I won’t rehash those same arguments here.

  4. If the problem is that TOS is not being enforced, then that is a valid issue and I would be fine with opening a new topic about that, but that seems to be a pretty ancillary concern to most arguments here; it seems like an excuse to ban the bots rather than the actual root issue.

  5. Finally, if the problem is “people are using the platform to enrich themselves instead of enrich the creators”, then I don’t especially care. All people are people; you’re going to have a tough time convincing me that one subset should be allowed to profit while another subset is not.


I appreciate the time you took to put your thoughts together here. However, the hearsay you provide about one person, one launch, one time is simply inadequate to inform this discussion. And furthermore, the point is that folks aren’t supposed to know when a coin launches, but bots provide this for a subset of users.
You don’t see a problem, and you don’t like the solutions others have proposed. Please, how can we work together towards a better network?

How many of the same accounts are buying in the first minute of coin listings, what is their place in line, and for how many listings? And are these the same accounts that are automating their selling? Do you see ample evidence of bots or other violations of the ToS? @DaddyFatSax

@ira Can we freeze bot accounts where ample evidence exists? What are the downsides to taking action, and how should we weigh those against the cost of inaction here?

I think the answers to these questions are essential to a productive discussion on this issue. I want to take action to protect new creators. I want to test ways to protect the integrity of launches. How can we get Core Rally more dev support to evaluate the problem and propose solutions, if that’s what’s needed?

The problem is straight up front-running of coin listings. Bots are enabling individuals to front-run coin listings. 70 purchases (through RLY converts) in the first 2 minutes of an unannounced coin listing is the evidence I would put forward that bots are enabling this to some extent. Automated selling is further evidence that bots are being employed in some capacity.
It seems self-evident that some folks are using technology in a capacity that is not readily available to others in order to game the system. Let’s do something about it.

Fair launch goes directly after the root issue of front-running coin listings - that’s why everyone who is interested in the health of the network wants it. That’s why it is such good news that it will be made available to all creators.

In the interim, should we sit on our hands and wish new creators good luck? Should we try some measures to level the launch playing field? I’m still strongly for trying something here by targeting that which is enabling the root problem - the bots. Suspending bot accounts from trading for 1 month or two months for violating TOS. Trying captcha, since some developers in the community think it is helpful and others disagree. I don’t care so much what we do, just that we do something. It’s exhausting here debating, when a simple snapshot could help us move this discussion past the nay-saying and towards some action.

This raises another problem. Our ability to govern this community is still heavily gated through the core team since we cannot put forward even a simple temperature check without them.

I would propose we further empower the creator council to take the community discussions and put them forward to snapshot votes for “temperature checks” at a minimum. Otherwise, we’ll sit here and put forward our flimsy evidence and guesswork, while third parties come in and further muddy the waters with ridiculous, unrelated assertions around reward manipulation by Core Rally.

Frankly, I haven’t seen a single good reason not to freeze accounts employing bots blatantly in violation of the terms of service. I don’t care if they start new accounts. Doing nothing is the absolute worst course of action to me. Please, make it harder for those exploiting the system. Lock up all their RLY for 3 months, and do the same for anyone else that uses bots. That’s a strong disincentive and goes some way toward buying time until fair launch.


The question presupposes that I see a problem; i.e., “How can we work together to fix something that isn’t broken?” is nonsensical. That said, I only had practical concerns about the CAPTCHA and 2FA (specifically being implemented during transaction processes). I think they are a waste of time. But it is not my time being wasted, so I’m cool with having them implemented if everyone else wants it badly enough. It would probably buy a week or two while the bot developers notice the failure, implement a fix, and publish the results. I know there are a lot of creators who have been approved and haven’t launched, so if we launched a ton at once during this window, it should be relatively bot-free. Of course, it could also appear bot-free due to predator satiation; not sure how you’d tell the difference. Or if the difference matters.

I still don’t see why everyone seems to take issue with the “release the coin, wait 2-4 weeks and then announce the coin” solution. That pragmatically solves any of the actual issues with a coin launch. Bots/whales/whatever-we’ve-chosen-to-call-them-in-this-new-version-of-an-old-outrage do not hurt the creator unless the creator’s community is joining at the peak of the spike. If we prevent them from joining at the peak, then what the coin does in the first few minutes should be irrelevant.

As for the stuff about snapshots and governance, that should probably be its own forum thread; it will just muddy the discussion here. But my thoughts on that matter are that anyone who holds RLY should be able to make a snapshot proposal. It should cost some small amount of RLY to do so (spam prevention), and there should be a minimum threshold of RLY for a vote to pass. It seems pretty suspect to me that virtually every proposal has passed. It could be the case that by the time a proposal makes it to snapshot, it has already been so well discussed that no one votes no. Or it could be the case that no one’s voting except for the people who care about that particular proposal (in which case silence shouldn’t count as consent).


Yep, it kind of is a bad thing.


Here’s what it looks like when a human does it.

I would say this is a problem with the non-flat tax rate. Not a bot issue.

As a more practical solution to bots, they could just lower the rate limit. We’d just need someone to look at the data and figure out a reasonable number, plus a way for legitimate bots (e.g., developer accounts doing transactions that involve multiple users) to get permission to exceed that number.
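A common way to implement that kind of rate limit is a token bucket per account, with an allowlist for approved developer bots. This is a sketch only; the rate, burst size, and allowlist name are placeholders someone would tune against real data:

```python
# Sketch of the "lower the rate limit" idea: a per-account token bucket,
# with an allowlist so approved developer bots can exceed the normal rate.
# DEFAULT_RATE, DEFAULT_BURST, and APPROVED_BOTS are hypothetical values.
DEFAULT_RATE = 1 / 60            # refill: one transaction per minute
DEFAULT_BURST = 3                # allow short bursts of up to 3 transactions
APPROVED_BOTS = {"dev-account-1"}  # hypothetical allowlisted accounts


class TokenBucket:
    def __init__(self, rate=DEFAULT_RATE, burst=DEFAULT_BURST):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.updated = 0.0

    def allow(self, now):
        """Refill at `rate` tokens/sec, then spend one token if available."""
        self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


def allow_transaction(account_id, bucket, now):
    """Allowlisted developer accounts bypass the limit; everyone else is metered."""
    return account_id in APPROVED_BOTS or bucket.allow(now)
```

The burst parameter keeps normal users from ever hitting the wall, while sustained automated selling drains the bucket and stalls at the refill rate.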
