I’m also seeing the same very slow typing performance in Chrome on Mac OS X with very large documents loaded. Based on profiling, I doubt there is much ProseMirror could do to mitigate this, as the vast majority of the time seems to be spent in browser code outside the JavaScript runtime. An occlusion hack, as previously suggested, would probably help, given that setting display: none on the majority of the document brings typing speed back to a mostly acceptable level.
Hopefully the Chrome folks can fix this soon, as it currently makes editing large documents pretty much unusable in Chrome on macOS. Safari performs very well on the same document (and the same machine). Firefox isn’t great, and is probably borderline acceptable, but it is certainly a good deal better than Chrome. In my context I can probably get away with not supporting Safari and Firefox, so I’m not that worried about the poor Firefox performance.
I found that the framerate was very low in Chrome (on both macOS and Linux) when the contenteditable node contains a large number of DOM nodes and the editor is focused (no need to type). I can confirm it can be worked around quite effectively by applying content-visibility: auto; contain-intrinsic-size: <stable-last-rendered-size>; to elements that wrap smaller ones.
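Roughly, the workaround amounts to something like the following sketch. The .ProseMirror selector assumes the default editor class, and the 24px estimate is just a placeholder for a typical block height in your document:

```ts
// Apply the content-visibility workaround to every top-level block in the editor.
const style = document.createElement("style")
style.textContent = `
  .ProseMirror > * {
    content-visibility: auto;
    /* "auto <length>" uses the last rendered size once the browser remembers one,
       and falls back to the estimate before that, keeping the scrollbar stable. */
    contain-intrinsic-size: auto 24px;
  }
`
document.head.appendChild(style)
```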
I am very thankful at this point that the ProseMirror document is an immutable object, so it’s easy to identify when the content of a node has changed and its size needs to be recalculated.
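For example, a height cache keyed by the immutable nodes could look something like this (a minimal sketch; measureHeight is a placeholder for however you measure a block):

```ts
import { Node } from "prosemirror-model"

// Because ProseMirror shares unchanged sub-nodes between document versions,
// a WeakMap keyed by node identity only misses for nodes whose content changed.
const heightCache = new WeakMap<Node, number>()

function heightFor(node: Node, measureHeight: (node: Node) => number): number {
  let height = heightCache.get(node)
  if (height === undefined) {
    height = measureHeight(node) // only re-measured for new or changed nodes
    heightCache.set(node, height)
  }
  return height
}
```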
Hey. Were you ever able to figure this out? I won’t be able to easily produce a gist (I don’t want to copy the code base in), but I could help step you through the main ideas.
@philippkuehn
Some of our users like to use this, but I’ll run it by management and see what they think. However, I have noticed this is much worse with foreign languages that use lots of Unicode characters and accents. I’ve had data where, when I right-click on the page, it just freezes.
Did you run a performance test in code? How much faster are we talking? I will experiment with some larger documents later; I was just curious about your benchmarks.
The problem, the way I see it, is that the height of a block depends on a full DOM render. Isn’t most of the cost of producing a hidden placeholder element with a precise height incurred when calculating that height in the first place? But I presume there would be some improvement if those values could be strategically cached.
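Something like the following batch measurement is what I have in mind by caching (a sketch only; the nodeDOM/getBoundingClientRect usage is illustrative, not benchmarked):

```ts
import { EditorView } from "prosemirror-view"

// Measure all top-level block heights in a single pass, so layout is forced
// once per update instead of once per block, and store the results for reuse.
function measureTopLevelHeights(view: EditorView): number[] {
  const heights: number[] = []
  view.state.doc.forEach((_node, offset) => {
    const dom = view.nodeDOM(offset) // DOM element of the top-level child at this position
    heights.push(dom instanceof HTMLElement ? dom.getBoundingClientRect().height : 0)
  })
  return heights
}
```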
That algorithm of yours might also work for eliminating or mitigating logic related to node views, though.
Also, while scrolling, the transaction cost of updating the entire decoration set should be load-tested against the current problem of typing in large documents. The larger the document, the longer the array of heights to iterate over. It would need a debounce, and there will be a slight hit on UX due to “loading placeholders”.
In short, I’d consider it a viable experiment for ProseMirror occlusion culling. It might or might not pan out.
> The problem, the way I see it, is that the height of a block depends on a full DOM render. Isn’t most of the cost of producing a hidden placeholder element with a precise height incurred when calculating that height in the first place? But I presume there would be some improvement if those values could be strategically cached.
We can use a unique spec on each decoration to cache those values.
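For example (a sketch; names like cachedHeight are just illustrative, not an API):

```ts
import { Node } from "prosemirror-model"
import { Decoration } from "prosemirror-view"

// A node decoration whose spec carries the measured height, so the value
// survives mapping across transactions and can be found again later
// (e.g. with DecorationSet.find) instead of re-measuring the block.
function heightDecoration(pos: number, node: Node, height: number): Decoration {
  return Decoration.node(
    pos,
    pos + node.nodeSize,
    { style: `contain-intrinsic-size: auto ${height}px` },
    { cachedHeight: height } // spec: arbitrary data attached to the decoration
  )
}
```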
> Also, while scrolling, the transaction cost of updating the entire decoration set should be load-tested against the current problem of typing in large documents. The larger the document, the longer the array of heights to iterate over. It would need a debounce, and there will be a slight hit on UX due to “loading placeholders”.
There is no need to update the entire decoration set while scrolling; you only need to adjust the nodes around the viewport. I add content-visibility: visible to nodes near the viewport and content-visibility: hidden to the rest. Because most nodes remain hidden (i.e. unchanged) while scrolling, a unique spec lets most of the decorations be reused.
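A rough sketch of the idea (not my exact code; the plugin key, the 100 ms debounce, and the position margin are all placeholders to tune):

```ts
import { EditorState, Plugin, PluginKey } from "prosemirror-state"
import { Decoration, DecorationSet, EditorView } from "prosemirror-view"

const occlusionKey = new PluginKey<DecorationSet>("occlusion")
const MARGIN = 2000 // extra document positions kept visible around the viewport (a guess)

// Approximate the document positions currently on screen from the editor's bounding box.
function viewportRange(view: EditorView): { from: number; to: number } {
  const rect = view.dom.getBoundingClientRect()
  const left = rect.left + 1
  const top = Math.max(rect.top, 0) + 1
  const bottom = Math.min(rect.bottom, window.innerHeight) - 1
  const from = view.posAtCoords({ left, top })?.pos ?? 0
  const to = view.posAtCoords({ left, top: bottom })?.pos ?? view.state.doc.content.size
  return { from, to }
}

// Rebuild decorations for top-level blocks, reusing any whose hidden/visible state is unchanged
// so the view does not redraw those blocks.
function buildDecorations(state: EditorState, from: number, to: number, old: DecorationSet): DecorationSet {
  const decos: Decoration[] = []
  state.doc.forEach((node, offset) => {
    const end = offset + node.nodeSize
    const hidden = end < from || offset > to
    const existing = old.find(offset, end)[0]
    if (existing && existing.from === offset && existing.to === end && existing.spec.hidden === hidden) {
      decos.push(existing) // unchanged: reuse the old decoration
    } else {
      decos.push(Decoration.node(offset, end,
        { style: `content-visibility: ${hidden ? "hidden" : "visible"}` },
        { hidden }))
    }
  })
  return DecorationSet.create(state.doc, decos)
}

const occlusionPlugin = new Plugin<DecorationSet>({
  key: occlusionKey,
  state: {
    init: () => DecorationSet.empty,
    apply(tr, old, _oldState, newState) {
      const mapped = old.map(tr.mapping, tr.doc)
      const range = tr.getMeta(occlusionKey) as { from: number; to: number } | undefined
      // Only rebuild when a scroll update arrives; ordinary typing just maps the old set.
      return range ? buildDecorations(newState, range.from - MARGIN, range.to + MARGIN, mapped) : mapped
    },
  },
  props: {
    decorations(state) {
      return occlusionKey.getState(state)
    },
  },
  view(view) {
    // Debounced scroll handler: tell the plugin which positions are on screen.
    let timer: number | undefined
    const onScroll = () => {
      window.clearTimeout(timer)
      timer = window.setTimeout(() => {
        const { from, to } = viewportRange(view)
        view.dispatch(view.state.tr.setMeta(occlusionKey, { from, to }))
      }, 100)
    }
    window.addEventListener("scroll", onScroll, { passive: true })
    onScroll()
    return { destroy: () => window.removeEventListener("scroll", onScroll) }
  },
})
```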
I used this strategy, and typing Chinese no longer suffers performance problems in an editor with over 100,000 DOM nodes. But a few of my colleagues’ computers (with the same hardware) are still slower than expected: there is a 2-3 second delay after they type 10 Chinese words, and I cannot figure out why; the time seems to be spent in the system itself, so there is nothing I can do about it.
Hi, I’m using tiptap v2 with Chrome 92 (Linux). I have basically the same issue: at around 1,000-2,000 words the editor becomes quite laggy, and from 4,000 words onwards it is very laggy.
Did anybody find a workaround other than switching off spellchecking?
For anyone who disabled spell checking in Chrome on macOS due to performance issues, it might be worth trying again, as some of the performance regressions were addressed in Chrome 97:
In the end, we replaced content-visibility with display: none because of stability and browser compatibility issues.
The result of the optimization is acceptable to us.
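For anyone curious, the shape of that replacement is roughly the following (a sketch with assumed helper names, not our production code): the off-screen block gets display: none, and a placeholder widget with the previously measured height keeps the scroll geometry stable.

```ts
import { Node } from "prosemirror-model"
import { Decoration } from "prosemirror-view"

// Hide an off-screen block and reserve its space with a fixed-height placeholder.
function hideBlock(pos: number, node: Node, cachedHeight: number): Decoration[] {
  const placeholder = document.createElement("div")
  placeholder.style.height = `${cachedHeight}px`
  return [
    // The key lets ProseMirror reuse the widget instead of redrawing it when unchanged.
    Decoration.widget(pos, placeholder, { key: `placeholder-${cachedHeight}` }),
    Decoration.node(pos, pos + node.nodeSize, { style: "display: none" }, { hidden: true }),
  ]
}
```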