SnitchFeed is slowly but surely gaining traction on Google, ChatGPT and other LLMs, and user growth is steadily ramping up (despite me not actively marketing it yet), which is great to see. In fact, we just crossed 200 signups :)
Slow Database Queries
But this also means the database has grown significantly. It now holds hundreds of thousands of mentions, all scored for relevance and sentiment and smartly tagged with AI. While that is not a big number for Postgres to handle, it has a significant impact on the performance of complex queries with multiple joins.
One such query was the orgUsage calculation, a gnarly 99-line query. It loads the mention, listener, platform and feature limits for the user based on the plan they're subscribed to. This query would run every time a route was refreshed, basically on first visit and when updating something (like a Listener definition, for example).
Typically this query takes somewhere between 2 and 4 seconds, depending on how many mentions the user has had in the current billing period. But for users with limits of 100k mentions/mo, it could take up to 16 seconds, which meant the page sometimes took 16+ seconds to load. No bueno at all!
Luckily, we caught this before churn came into play.
From as High as 16 Seconds to 0.9 Seconds
Initially, I investigated ways to make this query more efficient. After a bit of trial and error, it was clear that optimizing it wasn't going to give us the performance gain we were looking for.
So I turned to Redis.
The advantage of Redis is that you can use it to share data that needs to be created/updated/read by multiple apps, and because it is an in-memory store, it is fast. Which is exactly what we're looking for.
The plan was to store a user's plan limits, current usage and other frequently accessed details in Redis and update them every few seconds. This had to be done by a long-running backend service. The SnitchFeed backend runs on Node, which fortunately has first-class Redis client support.
So I implemented exactly that. One big advantage of running this on the backend was that we already keep a lot of the relevant data in local memory, data we would otherwise have to fetch via costly SQL joins. Because of this, the query became much simpler and cheaper to run.
Now it's a simple cron job that runs every 10 seconds and updates orgUsage for each org in Redis. This Redis instance is then shared with the Next.js webapp.
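To make that concrete, here's a minimal sketch of what such a writer job could look like in TypeScript with the node-redis client. The helpers getAllOrgIds and computeOrgUsage are hypothetical stand-ins for SnitchFeed's actual data layer, and the key format and TTL are assumptions, not the real implementation:

```ts
import { createClient } from 'redis';

// Hypothetical stand-ins for the real data layer; not SnitchFeed's actual API.
declare function getAllOrgIds(): Promise<string[]>;
declare function computeOrgUsage(orgId: string): Promise<Record<string, number>>;

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

async function refreshOrgUsage(): Promise<void> {
  const orgIds = await getAllOrgIds();
  for (const orgId of orgIds) {
    // The now-simplified usage calculation runs here, on the backend.
    const usage = await computeOrgUsage(orgId);
    // A short TTL means a stale snapshot expires on its own if the job ever stops.
    await redis.set(`orgUsage:${orgId}`, JSON.stringify(usage), { EX: 30 });
  }
}

// Refresh every 10 seconds, matching the interval described above.
setInterval(refreshOrgUsage, 10_000);
```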
Now, every time a user logs in or updates anything in the app, the page loads in under 900 ms. Compare that to 16 seconds: that's roughly an 18x speedup.
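On the read side, the webapp now does a single point read from Redis instead of running the 99-line query. Again a sketch, assuming the same hypothetical key format and fallback helper as the writer above:

```ts
import { createClient } from 'redis';

// Hypothetical fallback; mirrors the writer sketch above.
declare function computeOrgUsage(orgId: string): Promise<Record<string, number>>;

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

export async function getOrgUsage(orgId: string): Promise<Record<string, number>> {
  const cached = await redis.get(`orgUsage:${orgId}`);
  if (cached) return JSON.parse(cached);
  // Cold-cache fallback: run the query directly if the snapshot isn't there yet.
  return computeOrgUsage(orgId);
}
```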
Huge win! 🎉
The Result: An App That's Snappy and a Joy to Use
With Redis as a caching layer, the app now feels blazingly fast™. Navigation is much smoother, and you will no longer be frustrated by having to wait for a page to load.
After all, a great user experience is one of SnitchFeed's selling points.
Looking Ahead
Now that I have established a caching layer for SnitchFeed, implementing cool new features that require frequently accessed data will be much easier (and snappier to use).
Here's what we have on the roadmap:
- Improved relevance criteria definitions
- Relevance score sliding scale for Slack, Email or Discord alerts
- Weekly and monthly summaries of your social listening performance
- Draft reply generation for mentions based on your context
If you have any feature requests, or something you would love to see in the social listening space, please let us know via the chat or by emailing us!
Thanks for reading—I know this article isn't directly related to using SnitchFeed, but posting updates like these helps me realize how far SnitchFeed has come :)