Replies: 2 comments
-
FWIW, I ran an experiment replacing the lru-cache in cache.ts with IndexedDB. This would have been a very clean injection point for storage-based, more persistent local caching in place of the 1000-item memory-resident cache. However, here is what I learned doing it:
So much for a quick win! For the cache to really improve the responsiveness of the app, it would have to:
I'm in two minds between trying to proxy the entire masto API for this (which would leave the rest of the app pretty much untouched, but would contain leaky abstractions and unforeseen consequences) and trying to significantly extend the cache API and do major surgery on several parts of the app, removing the direct use of the masto API from the points where the issues above are most likely to bite. As I have no previous experience with the app, nor insight into how this has been envisioned to work, I would welcome guidance at this point.

(*) On indexing posts and their order on the timeline: my previous experiment with this was on Pinafore, which indeed has a two-level IndexedDB cache, but also indexes (and orders) everything by the native Mastodon API's status ids. This is great for simplicity, but poor for trying to do a smart reordered timeline, since the ids are not really timestamps even though they are time-sequential. There are a few different ways to tackle this, and they relate to how much is in the plans for reorderedTimeline...
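To illustrate the "proxy the entire masto API" option: a plain JS Proxy can intercept method calls and serve cached results transparently, so callers keep using the client as before. This is only a hedged sketch of the idea, not Elk's code; `fetchStatus`, the fake client, and the Map-backed cache are stand-ins (a real version would wrap the masto.js client and persist to IndexedDB):

```typescript
type AsyncFn = (...args: unknown[]) => Promise<unknown>

// Wrap any client object so that async method results are memoized by
// method name + serialized arguments. Non-function properties pass through.
function withCache<T extends object>(client: T, cache: Map<string, unknown>): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver)
      if (typeof value !== 'function') return value
      return async (...args: unknown[]) => {
        const key = `${String(prop)}:${JSON.stringify(args)}`
        if (cache.has(key)) return cache.get(key)           // hit: no network round-trip
        const result = await (value as AsyncFn).apply(target, args)
        cache.set(key, result)                              // miss: store for next time
        return result
      }
    },
  })
}

// Fake client standing in for the real masto API client.
let networkCalls = 0
const fakeClient = {
  async fetchStatus(id: string) {
    networkCalls++
    return { id, content: `status ${id}` }
  },
}

const cached = withCache(fakeClient, new Map())

// Two identical calls cause only one "network" round-trip.
async function demo() {
  await cached.fetchStatus('1')
  await cached.fetchStatus('1') // served from the cache
  return networkCalls
}
```

The obvious leaky-abstraction hazard is exactly the one mentioned above: a key built from serialized arguments knows nothing about pagination cursors or cache invalidation, so mutating calls would need to bypass or purge the cache.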
-
I could not come up with a clean solution for this issue and will have to leave it to someone better skilled with Nuxt apps. I think my notes above are still valid, but I just could not make any progress with the work, getting bogged down in the details of how the paginator works. I do wonder, though: isn't the Nuxt store precisely the kind of solution that should sit between the timeline UI and the timeline API, giving it local persistence and decoupling it from network latency?
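For concreteness, the "store in the middle" idea could look roughly like the following. This is a hedged, framework-agnostic sketch, not actual Pinia/Nuxt store code: the UI reads synchronously from the store, while `refresh()` talks to the network and merges new statuses in, so rendering is never blocked on latency. `Status`, `fetchTimeline`, and the Map-based cache are illustrative stand-ins (persistence to IndexedDB would slot in where the Map is):

```typescript
interface Status { id: string; content: string }

class TimelineStore {
  private items = new Map<string, Status>() // local cache; could be backed by IndexedDB
  private order: string[] = []              // display order, kept separate from the items

  // The UI reads from here: synchronous, never waits on the network.
  get timeline(): Status[] {
    return this.order.map(id => this.items.get(id)!)
  }

  // The network writes here; fetchTimeline is assumed to return a
  // newest-first page, which is merged ahead of previously cached entries.
  async refresh(fetchTimeline: () => Promise<Status[]>) {
    const fresh = await fetchTimeline()
    for (const status of fresh)
      this.items.set(status.id, status)     // update in place on edits
    const freshIds = new Set(fresh.map(s => s.id))
    this.order = [
      ...fresh.map(s => s.id),
      ...this.order.filter(id => !freshIds.has(id)), // keep older cached entries below
    ]
  }
}
```

With this shape, the paginator details the note above got bogged down in would live entirely inside `refresh()`, invisible to the timeline component.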
-
Hi, thanks for the opportunity to see this early (and for all the great work done)!
I came to Elk after experimenting with a number of other PWA clients; most relevant for this discussion is Pinafore. Before I learned of Elk, I had started hacking on Pinafore to add some capabilities that Elk apparently already has, and to do that I had to dive deep into its quite confusing (to me, anyway) data architecture. For the purpose of this note, and admittedly not yet having dived into Elk's implementation:
Pinafore avoids keeping objects in memory as far as possible. To do that, it does three things:
The impact of this is hard to notice on desktop, but it is quite noticeable on a mobile device. Pinafore hardly ever exhibits lag or jank, apart from the obvious cases where it needs to load more content from the server (and even then the UI isn't frozen; you simply wait for the spinner to disappear). In comparison, on the same device, Elk at first feels quite responsive, but after scrolling through the timeline and opening a few longer discussion threads, it fairly rapidly becomes sluggish.
I'm guessing at this point that the culprit is high memory load overwhelming what's available to a PWA on a mobile device, and wondering whether the same approach of keeping toots in IndexedDB to reduce memory pressure would help.
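The memory-pressure idea can be sketched as keeping only status ids (plus a small hot window of full objects) in memory and rehydrating everything else from durable storage on demand. This is a rough illustration of the Pinafore-style approach, not its actual implementation; since IndexedDB is a browser API, a plain Map stands in for the object store here, and `WindowedTimeline`, `windowSize`, etc. are made-up names:

```typescript
interface Status { id: string; content: string }

// Stand-in for an IndexedDB object store; in the browser these get/set
// calls would be IDB transactions.
const storage = new Map<string, Status>()

class WindowedTimeline {
  private ids: string[] = []              // full timeline as ids only: cheap to hold
  private hot = new Map<string, Status>() // small in-memory window of full objects
  constructor(private windowSize = 20) {}

  async append(status: Status) {
    this.ids.push(status.id)
    storage.set(status.id, status)        // durable copy survives eviction
    this.hot.set(status.id, status)
    // Evict the oldest hot entries beyond the window (Map preserves
    // insertion order, so the first key is the oldest).
    while (this.hot.size > this.windowSize) {
      const oldest = this.hot.keys().next().value as string
      this.hot.delete(oldest)
    }
  }

  // Rehydrate on demand: hot window first, then "disk".
  async get(id: string): Promise<Status | undefined> {
    return this.hot.get(id) ?? storage.get(id)
  }

  get inMemoryCount() { return this.hot.size }
  get totalCount() { return this.ids.length }
}
```

The point of the sketch: after appending hundreds of statuses, the in-memory footprint stays bounded by the window size, while every status remains retrievable, which is plausibly why Pinafore stays smooth as the timeline grows.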