Running a single-user Mastodon instance

A lot of blogs and articles about setting up and optimising Mastodon focus on scaling for many users. Yet I set up my instance specifically as a single-user instance (I may eventually open it up to my family members, but at the moment none of them seem interested), and there are a few things I found very helpful when optimising my instance for this purpose. This article lists them, primarily as a mental note to myself should I ever need to rebuild my server.

Server setup and system requirements

I got started by using DigitalOcean's 1-click-setup app for Mastodon. This was really quick and easy.

I chose the Premium Intel droplet with 1 vCPU, 2 GB memory, and 50 GB storage for $14 per month. Together with a 10 GB swap file this served me well as a single-user instance, until I installed Elasticsearch. At that point I noticed a clear slowdown, and whilst I did stay on that droplet for a few weeks after, and it was definitely stable, it was slow enough that I decided to upgrade to 2 vCPUs and 4 GB memory.
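
Setting up the 10 GB swap file follows the standard Linux approach; a sketch below, run as root (the /swapfile path is my assumption, use whatever suits your layout):

```shell
# Create and enable a 10 GB swap file (path is an assumption)
fallocate -l 10G /swapfile
chmod 600 /swapfile      # swap files must not be world-readable
mkswap /swapfile         # format it as swap
swapon /swapfile         # enable it immediately

# Persist it across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```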

Object storage

Using object storage for media is absolutely essential, even for a single-user instance. Doubly so if you want to connect to a relay, which you absolutely should (more on that later). My single-user instance uses around 180 GB of media storage (and that's with aggressively clearing out the media cache).
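
As a sketch of what "aggressively clearing out the media cache" can look like, stock Mastodon's tootctl offers these cleanup commands (the retention window here is my own choice, not a recommendation):

```shell
# Run as the mastodon user from the Mastodon directory (e.g. /home/mastodon/live)
RAILS_ENV=production bin/tootctl media remove --days 1   # drop cached remote media older than a day
RAILS_ENV=production bin/tootctl preview_cards remove    # drop old link-preview cards
RAILS_ENV=production bin/tootctl statuses remove         # prune old unreferenced remote statuses
```

These are good candidates for a cron job, which is how I run them.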

Personally I use Backblaze B2, which costs me well under $1 per month. I've heard of others using IDrive e2, which is even cheaper, but they're a newer player, and I don't know yet whether I can trust them.

AWS S3 was obviously the first object storage that came to mind, but it is far more expensive (I used it for about half a month, and paid about $6 just for that half month). The issue isn't so much the storage cost, but that S3 also charges for API requests, such as uploading and listing files, and Mastodon makes a lot of them.

When using B2 I needed to add S3_READ_TIMEOUT=60 to my .env.production, because Backblaze seems to be a bit slower to respond than other providers, but otherwise everything just works as normal.
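
Put together, the B2 side of the setup boils down to .env.production entries along these lines (bucket name, endpoint region, and keys are placeholders; S3_READ_TIMEOUT is the one non-obvious addition):

```shell
# Object storage settings in .env.production (bucket, endpoint, and keys are placeholders)
S3_ENABLED=true
S3_BUCKET=my-mastodon-media
AWS_ACCESS_KEY_ID=xxxxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxxxx
S3_READ_TIMEOUT=60   # B2 responds more slowly than other providers
```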

Which software?

I originally ran mainline Mastodon, but I really wanted Markdown support, so I switched to glitch-soc. I love glitch-soc: it offers many features that make Mastodon much nicer to work with. I have since modified it slightly by improving link previews: these now show a full-width image and display the publication's logo, if it's one I recognise. I keep adding more supported publishers every now and again, and it overall makes my feed much more pleasant to browse.

You can view the code I'm currently running on GitHub: nanos/mastodon.

Relays

With a single user instance, federation will be a challenge: Unless you follow loads of people from different servers, you'll miss out on interesting posts. And unless loads of people from different servers follow you, your posts won't be seen by others.

Relays help with this, by basically connecting you to loads of instances, and mirroring their federated feeds. But this comes at a cost, especially for a single user instance: After I had connected to a handful of relays, I was pulling through about 1.5 - 2 million posts per month. Yet, I only actually looked at a few hundred per day. So that's a huge waste. If those millions of posts contain images or other media, it'll consume a lot of storage too.

There are now two great solutions to this, which I think work great in combination:

  1. FakeRelay together with GetMoarFediverse. These are brilliant: GetMoarFediverse is a small script that you run (either locally on your server, via GitHub Actions, or similar). It scans remote instances that you specify (maybe,,, etc., depending on your interests) for any hashtags anyone on your instance follows. It then feeds those posts (and only those) into FakeRelay. You subscribe to FakeRelay just as you would subscribe to any real relay, but the advantage is that it will only pull through posts that will appear on your home timeline, i.e. posts you are definitely interested in. It's a game changer.

  2. works quite similarly, but you have less control: you tell them which hashtags to feed into your relay, and connect that relay to your instance. Unlike the previous option it will not automatically adjust as you follow or unfollow hashtags, so it's potentially more work. They also most likely follow a lot more instances than you would set up with GetMoarFediverse, so you may see a lot more posts, which can be both a blessing and a curse, especially if you are prone to FOMO and feel you must read every post on your timeline.

Personally, I now use them both together, and I love it. I also still use one smallish relay to push my content out into the wider fediverse. I chose this because the admin is really responsive, and because it's neither too big to overwhelm my instance with loads of posts, nor too small, so my posts get seen.

Translation

When I originally set up my instance, I used LibreTranslate to translate posts. However, I have since switched to DeepL. Read my post on translation options for Mastodon for the why and how.

Puma optimisation

Puma is Mastodon's web server. Loads of optimisation tutorials focus (understandably) on helping it deal with more users, but that advice doesn't make sense for a single-user instance.

Perhaps somewhat counter-intuitively, the one thing that made the biggest difference to the performance of my single-user instance was turning off concurrency. Concurrency helps Puma when multiple users browse your instance at the same time. With just a single user it isn't needed, and it consumed a fair bit of extra memory, which is limited on my server. So go ahead and set WEB_CONCURRENCY=0 in your .env.production.
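
Concretely, the relevant .env.production lines look like this (MAX_THREADS is shown with Mastodon's default value of 5, just to make the single-process shape explicit):

```shell
# .env.production: run Puma as a single process with no cluster workers
WEB_CONCURRENCY=0
MAX_THREADS=5   # threads per process; 5 is Mastodon's default

# Then restart the web service so Puma picks up the change:
# systemctl restart mastodon-web
```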

Sidekiq optimisation

I originally split Sidekiq into multiple processes (one for each queue) to help speed up background processing.

This is widely recommended whenever you read about optimising Mastodon, but it appears to be another thing that really only applies to larger instances. I'm running my instance on a small server with limited memory, and multiple Sidekiq processes need a lot of it. Ultimately, although the split did slightly improve processing of background tasks, it led to very significant performance degradation when actually using Mastodon, both through the web interface and through apps.

So it appears this is another aspect where optimising for a single-user instance requires the opposite of optimising for a larger one. I would suggest not splitting Sidekiq into multiple processes unless you have plenty of RAM to spare. Instead I'm running Sidekiq with the default configuration of 1 process and 25 threads.
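
For reference, the difference between the two setups can be sketched in terms of the Sidekiq invocation (paths are assumptions and will differ per system; the single-process form is what Mastodon's stock systemd unit does):

```shell
# Default, single-process setup: one Sidekiq process, 25 threads, all queues
# (this is the ExecStart of the stock mastodon-sidekiq.service, paths assumed):
#   bundle exec sidekiq -c 25

# The split-per-queue setup I moved away from runs several units instead,
# each pinned to one queue, e.g.:
#   bundle exec sidekiq -c 25 -q default
#   bundle exec sidekiq -c 25 -q pull
#   bundle exec sidekiq -c 25 -q push
# Each process carries its own memory footprint, which is what hurt my small server.
```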

pgBouncer

I doubt you actually need pgBouncer on a single-user instance. I certainly didn't see any change in performance. But I wanted to try it out.

I followed this guide almost to the letter, so I won't reproduce it here.

Mostly this paragraph exists to help me remember the following:

Gotcha: you cannot run db:migrate tasks through pgBouncer. But this is easy to work around: if Postgres and pgBouncer are on the same host, it can be as simple as defining DB_PORT=5432 together with RAILS_ENV=production when calling the task, for example: RAILS_ENV=production DB_PORT=5432 bundle exec rails db:migrate (you can specify DB_HOST too if it's different, etc.).
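
For completeness, the pgBouncer-facing side of .env.production looks roughly like this (assuming pgBouncer on localhost:6432, its default port; PREPARED_STATEMENTS=false is needed because pgBouncer's transaction pooling cannot handle prepared statements):

```shell
# .env.production when routing Mastodon through pgBouncer (assumed localhost:6432)
DB_HOST=localhost
DB_PORT=6432                # pgBouncer's default port
PREPARED_STATEMENTS=false   # required with transaction pooling

# db:migrate must bypass pgBouncer and talk to Postgres directly, e.g.:
# RAILS_ENV=production DB_PORT=5432 bundle exec rails db:migrate
```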

Updating Mastodon / glitch-soc

Again, this is mostly an internal note to myself: when updating glitch-soc from upstream I first merge from glitch-soc/main, then pull and run the following commands:

git pull                                  # fetch the merged changes onto the server
bundle install && yarn install            # update Ruby and JavaScript dependencies
RAILS_ENV=production DB_PORT=5432 SKIP_POST_DEPLOYMENT_MIGRATIONS=true bundle exec rails db:migrate   # pre-deployment migrations only
RAILS_ENV=production bundle exec rails assets:precompile   # rebuild front-end assets
RAILS_ENV=production bin/tootctl cache clear
RAILS_ENV=production DB_PORT=5432 bundle exec rails db:migrate   # remaining (post-deployment) migrations
systemctl restart mastodon-{web,streaming,sidekiq}   # restart all Mastodon services

My .env.production

Just in case you are interested:

My crontab