Horizontally Scaling the Central Authority

Has anybody tried to horizontally scale the central authority? By that I mean having more than one process running (potentially on different machines), always sending the requests for a given canvas doc to the right process, handling what has to be done when one of the nodes goes down, and things like that.

This seems to be a common Node.js service-scaling problem, and I was hoping to find a proven library or tool that helps with that.

I'm just using a single PostgreSQL node as my authority (it implements the appropriate locking), with NOTIFY/LISTEN powering a WebSocket broadcast of changes. Technically I can horizontally scale the stateless Node.js servers that sit in front, but it's all I/O-bound at the database, so there's no need.
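
For anyone curious what that looks like in practice, here is a minimal sketch of the LISTEN side and the WebSocket broadcast, assuming the pg and ws packages, a notification channel named doc_changes, and a JSON payload that carries the document ID (those names and the URL layout are my assumptions, not details from this thread):

import { Client } from 'pg';
import { WebSocketServer, WebSocket } from 'ws';

// Track WebSocket subscribers per document ID (hypothetical layout:
// clients connect to ws://host:8080/documents/<docId>).
const subscribers = new Map<string, Set<WebSocket>>();

const wss = new WebSocketServer({ port: 8080 });
wss.on('connection', (socket, req) => {
  const docId = (req.url ?? '').split('/')[2] ?? '';
  if (!subscribers.has(docId)) subscribers.set(docId, new Set());
  subscribers.get(docId)!.add(socket);
  socket.on('close', () => subscribers.get(docId)?.delete(socket));
});

async function main() {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();

  // Subscribe to the Postgres notification channel (channel name is assumed).
  await db.query('LISTEN doc_changes');

  // Each NOTIFY payload is assumed to be JSON like {"docId": "...", ...}.
  db.on('notification', (msg) => {
    if (!msg.payload) return;
    const { docId } = JSON.parse(msg.payload) as { docId: string };
    for (const sock of subscribers.get(docId) ?? []) {
      if (sock.readyState === WebSocket.OPEN) sock.send(msg.payload);
    }
  });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

The NOTIFY side is not shown in the thread; presumably whichever process persists the changes also issues something like SELECT pg_notify('doc_changes', payload) when it commits.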

Isn't there a limit to how many people can effectively work on the same document simultaneously before it becomes an organizational mess?

Before using ProseMirror, Fidus Writer at one time had a complex Redis pub/sub setup whereby individual end users could be connected to different servers and still edit the same document together. When switching to ProseMirror we reevaluated that. Having that many authors in a single document likely never makes sense. What makes more sense when scaling up to many servers is to filter by URL and have different servers handle different individual documents. So, for example, one server can handle all messages for documents 4, 10, 15, 21 and 78, and another for documents 5, 6, 7, 12, 14, 99 and 278.
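
Purely as an illustration of that kind of partitioning (this is not how Fidus Writer does it; the hash-based assignment, server list and function name below are all hypothetical, and a load balancer can express the same rule declaratively, as discussed further down), a deterministic document-to-server mapping could look like this:

// Hypothetical: deterministically map every document ID to one backend server,
// so all traffic for a given document always lands on the same process.
const SERVERS = ['http://backend-1:8000', 'http://backend-2:8000'];

function serverForDocument(docId: string): string {
  // FNV-1a string hash; any stable hash would do.
  let hash = 0x811c9dc5;
  for (let i = 0; i < docId.length; i++) {
    hash ^= docId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return SERVERS[(hash >>> 0) % SERVERS.length];
}

// Example: every request for document "78" resolves to the same server.
console.log(serverForDocument('78'));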


Yes, I also think there will always be at most one server per document, and that a load balancer should sit in front of the servers and always send requests for a given document to the same server. My question was more about how this can be implemented. From what I've been reading, using HAProxy and basing the stickiness on the URL instead of the user session should work. Are you using a similar approach?

@FelipeTaiarol Using the URL to route requests at the load balancer sounds like what I read up on when removing Redis. We have not actually had a need to do this so far.

@FelipeTaiarol what ID system do you use to address documents?

Do you mean how I generate the ID? I just create a string with the current timestamp followed by 20 random characters; a rough sketch of what I mean is below. I figured out how to scale it using HAProxy; the configuration follows the sketch:
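
For illustration only (the post only says "timestamp followed by 20 random characters"; the character set and function name here are my own guesses):

// Hypothetical ID generator: millisecond timestamp plus 20 random characters.
// The exact alphabet is an assumption; the post does not specify one.
const ID_ALPHABET = 'abcdefghijklmnopqrstuvwxyz0123456789';

function generateDocId(): string {
  let suffix = '';
  for (let i = 0; i < 20; i++) {
    suffix += ID_ALPHABET[Math.floor(Math.random() * ID_ALPHABET.length)];
  }
  return `${Date.now()}${suffix}`;
}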

global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers


backend servers
    balance roundrobin
    # Affinity table keyed by the document ID (a string); entries expire 30 minutes after last use
    stick-table type string size 200k expire 30m
    # Health check: send GET /health to each server and expect a 200 response
    option httpchk GET /health
    http-check expect status 200
    # For URLs beginning with /documents/, stick on the second path segment (the document ID)
    stick on path,word(2,/)  if { path_beg /documents/ }
    server server1 127.0.0.1:8000 check inter 250
    server server2 127.0.0.1:8002 check inter 250


listen stats
    bind 127.0.0.1:9000
    mode            http
    log             global
    maxconn 10
    stats enable
    stats hide-version
    stats refresh 5s
    stats show-node
    stats auth admin:password
    stats uri  /haproxy?stats

The URL always has the document ID, and once HAProxy routes a given ID to a server it will keep routing requests for that ID to the same server. It will also send an HTTP GET to /health to each server every 250ms, and if one of them is down it will clean those entries from the stick-table.

stick-table type string size 200k expire 30m

Does this mean that the affinity only lasts for 30m? What happens after that time, or if the backend goes down?

Yes, it will keep the entries in the table for 30m. If a document is not accessed for more than 30m its entry is removed from the table, which means the next access will go to a random node.

If one of the backend servers goes down, HAProxy will stop sending requests to it; requests that were previously being sent to that server will be sent to the others.

Having said that, this is the first time I have used HAProxy and I still don't have it running in production, so take everything I say with a grain of salt :wink:

Hi @FelipeTaiarol, I was just wondering at what user count you ran into scaling problems. Was it the number of connections or the time needed for processing?

I saw some problems at 500 concurrent users actively writing to the same document (in load testing, not actual usage). I would like to be able to split the load between services not only for performance reasons but also for zero-downtime deployments and things like that. The biggest problem is at startup: we already have more than 20k documents.