I worded that in a confusing way. What I meant to convey is that the issue in the analogy is exactly the issue with the algorithm: as the number of concurrent edits scales up, this is likely where the first noticeable performance issues appear.
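To make that concrete, here's a rough sketch of the central-authority rule the collab algorithm relies on (loosely modeled on the Authority class from the ProseMirror collab guide; the exact types and method names here are illustrative, not a real API). The key property is that a submission is only accepted if it was based on the authority's current version, so with many concurrent editors only one client can "win" each round trip and everyone else has to fetch the new steps, rebase, and retry:

```ts
import type { Step } from "prosemirror-transform";

// Loose sketch of the central authority the collab algorithm assumes,
// modeled on the Authority class in the ProseMirror collab guide.
export class Authority {
  private steps: Step[] = [];
  private clientIDs: (string | number)[] = [];

  get version(): number {
    return this.steps.length;
  }

  // A submission is accepted only if it was based on the current version.
  // With N concurrent editors, only one submission can win each round trip;
  // everyone else is rejected and must fetch, rebase, and resubmit, which is
  // where the starvation described above comes from.
  receiveSteps(version: number, steps: Step[], clientID: string | number): boolean {
    if (version !== this.version) return false;
    for (const step of steps) {
      this.steps.push(step);
      this.clientIDs.push(clientID);
    }
    return true;
  }

  // What a lagging client needs to pull down before it can try again.
  stepsSince(version: number) {
    return {
      steps: this.steps.slice(version),
      clientIDs: this.clientIDs.slice(version),
    };
  }
}
```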
Is this really something that came up for you?
Yes, there were a lot of use cases involving 20-40 active editors on a single document. These included document-centric activities during remote meetings for brainstorming, planning, and various weekly rituals involving the entire engineering team or company. A typical session would involve a roughly five-minute time-boxed period when everyone was contributing to the document at the same time. Afterwards, editing would stop and there would be read-outs and discussions.
Our team was globally distributed, with members in Europe, the USA, South America, and even South Korea. The latency variation between team members was quite high, and WFH exacerbated this due to dodgy network conditions at homes, Airbnbs, coffee shops, and coworking spaces, and over tethered connections.
The network client was heavily instrumented through Sentry and FullStory, so we were able to track unconfirmed steps, step confirmation latency, etc. This data was reconciled with the expected behavior based on the algorithm’s “model” (which I feel is pretty sound). One of our team members in Europe would reliably have their edits rejected for minutes at a time during these activities haha.
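For what it's worth, the client-side measurement was something along these lines; a minimal sketch using the prosemirror-collab helpers, where report() is a hypothetical hook standing in for whatever Sentry/FullStory call you'd actually make (our real instrumentation was more involved):

```ts
import { Plugin } from "prosemirror-state";
import { sendableSteps, getVersion } from "prosemirror-collab";

// Hypothetical reporting hook; in practice this would forward to Sentry.
declare function report(metric: string, valueMs: number): void;

// Rough idea of "step confirmation latency": note when local steps first
// become sendable, then measure how long until there is nothing left to send
// and the collab version has advanced past them (i.e. they were confirmed).
export function confirmationLatencyPlugin(): Plugin {
  let pendingSince: number | null = null;
  let pendingVersion = -1;

  return new Plugin({
    view: () => ({
      update(view) {
        const sendable = sendableSteps(view.state);
        const version = getVersion(view.state);

        if (sendable && pendingSince === null) {
          // Local steps are waiting on the authority; start the clock.
          pendingSince = performance.now();
          pendingVersion = sendable.version;
        } else if (!sendable && pendingSince !== null && version > pendingVersion) {
          // No unconfirmed steps remain and the version moved on: confirmed.
          report("step_confirmation_latency_ms", performance.now() - pendingSince);
          pendingSince = null;
        }
      },
    }),
  });
}
```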
These issues were one of the initial motivating factors in exploring Yjs (IIRC).
but I’m wondering if it is a common enough issue to complicate the protocol for
I think it’s just very use-case specific. Most ProseMirror projects uhh… probably never end up with super heavy workloads like the above. For those that do, or that get to the scale of Atlassian or Zoho, it may be worth it.
Since I wrote that plugin and have implemented backends based on it in Node.js and ASP.NET, it doesn’t seem very complicated to me, and it’s “easy” enough that I would just default to it now.
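For anyone wondering what that looks like in practice, here's a stripped-down sketch of the Node side (Express, a single in-memory document, no auth, persistence, or long-polling/websocket delivery, so purely illustrative). It's the same version check as above exposed over HTTP, with a 409 telling a stale client to fetch newer steps, rebase, and resubmit:

```ts
import express from "express";
import { Step } from "prosemirror-transform";
import { schema } from "prosemirror-schema-basic";

// One authoritative step log for a single document, kept in memory.
const steps: Step[] = [];
const clientIDs: (string | number)[] = [];

const app = express();
app.use(express.json());

// Submit steps based on a given version; reject if the client is behind.
app.post("/doc/steps", (req, res) => {
  const { version, steps: json, clientID } = req.body;
  if (version !== steps.length) {
    res.status(409).json({ currentVersion: steps.length });
    return;
  }
  for (const s of json) {
    steps.push(Step.fromJSON(schema, s));
    clientIDs.push(clientID);
  }
  res.json({ version: steps.length });
});

// Catch-up endpoint: everything that happened since the client's version.
app.get("/doc/steps", (req, res) => {
  const since = Number(req.query.version) || 0;
  res.json({
    steps: steps.slice(since).map((s) => s.toJSON()),
    clientIDs: clientIDs.slice(since),
    version: steps.length,
  });
});

app.listen(3000);
```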