Running a single-user Mastodon instance

A lot of blog posts and articles about setting up and optimising Mastodon focus on scaling for many users. Yet I set up my instance specifically as a single-user instance (I may eventually open it up to my family members, but at the moment none of them seem interested), and there are a few things that I found very helpful when optimising my instance for this purpose. This article lists these, primarily as a note to myself, should I ever need to rebuild my server.

NB: This post has been updated throughout and now details all aspects of how I run my Mastodon Instance as of today, 1st June 2023.

Server setup and system requirements

I got started by using DigitalOcean's 1-click-setup app for Mastodon. This was really quick and easy.

However, it's also quite expensive. So I've since switched to a 'CAX11' server from Hetzner. This has 2 vCPUs and 4GB RAM, and costs me €3.95 per month. I've not attached an IPv4 address, as Hetzner charges for these.

In order to allow communication with IPv4-only hosts, I therefore had to do the following:

  1. For outbound traffic, apply a public DNS64 provider's name servers:
    1. Open up /etc/netplan/ and replace the name servers with the ones given at
    2. Create a file at /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following content, to stop cloud-init from messing with this:

       network: {config: disabled}

  2. For inbound traffic I'm just using Cloudflare to proxy all HTTPS traffic.

For detailed instructions read Gervasio Marchand's Installing Mastodon on an IPv6-only host running Ubuntu 22.04
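
To illustrate the netplan change from step 1 above, the result might look roughly like this. The interface name and resolver addresses below are placeholders (documentation-prefix addresses), so substitute whatever your DNS64 provider actually lists:

```yaml
# /etc/netplan/50-cloud-init.yaml (your file name may differ)
network:
  version: 2
  ethernets:
    eth0:                    # placeholder interface name
      dhcp6: true
      nameservers:
        addresses:
          - 2001:db8::53:1   # placeholder - use your DNS64 provider's resolvers
          - 2001:db8::53:2   # placeholder
```

After editing, run sudo netplan apply to activate the change.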

Other than that, I just followed Mastodon's very good setup guide.

Object storage

Using object storage for media is absolutely essential, even for single-user instances. Doubly so if you want to connect to a relay, which you absolutely should (more on that later). My single-user instance uses around 120 GB of media storage (and that's with aggressively clearing out the media cache).

Personally I use Backblaze B2, which costs me around $1 per month including taxes. I've heard of others using IDrive e2, which is even cheaper, but they are relatively new, and I don't know if I can trust them yet.

AWS S3 was obviously the first option that came to mind, but it is far more expensive (I used it for about half a month, and paid about $6 just for that). The issue with S3 isn't so much the storage cost, but that you also pay for putting files there, and for deleting them, and Mastodon does a lot of both.

Setting up Backblaze B2 for Mastodon

There are a few peculiarities to keep in mind when setting up Backblaze B2 for Mastodon, once you've created your bucket:

Firstly, in your bucket's Lifecycle settings, make sure you select 'Keep only the last version of the file'. Otherwise your media cache will never actually be deleted, and your storage usage will grow indefinitely.

Secondly, you'll want to proxy your bucket through a CDN provider. Backblaze have a great and detailed guide on configuring Cloudflare for B2, which is what I'm doing.

Finally, to configure Mastodon to use your B2 bucket, open up your .env.production file and add the following lines:

S3_REGION=eu-central-1 # replace this, if you are in a different region
S3_ENDPOINT={your B2 endpoint URL} # replace this, if you are in a different region
S3_READ_TIMEOUT=60 # I needed to set this to ensure I'm not getting timeouts.
S3_BUCKET={your bucket name}
S3_ALIAS_HOST={your host name that you use to proxy your media data through your CDN}
AWS_ACCESS_KEY_ID={your keyID from B2}
AWS_SECRET_ACCESS_KEY={your applicationKey from B2}

Which software?

I originally ran mainstream Mastodon, but I really wanted Markdown support, so I switched to glitch-soc. I love glitch-soc: it offers many features that make Mastodon much nicer to work with. I have since modified it slightly by improving link previews: these now use a full-width image, and display the publication's logo, if it's one of the publications I recognise. I keep adding more supported publishers every now and again, and overall it makes my feed much more pleasant to browse.

You can view the code I'm currently running on GitHub: nanos/mastodon.

GitHub Repository

If you are running a fork of Mastodon and have uploaded it to GitHub, you can change the 'View source' link that's visible in the bottom left of the web interface to point to your fork, instead of the main Mastodon repository.

Simply add the following line to your .env.production:

GITHUB_REPOSITORY=nanos/mastodon # replace with your repository, obviously

Increasing maximum post length

This one is glitch-soc only: if you are running the glitch-soc fork, you can easily increase the maximum post length on your instance by adding this line to your .env.production:

MAX_TOOT_CHARS=1024 # or whatever limit you prefer

If you are running vanilla Mastodon, you need to modify the source code to achieve the same.

Relays

With a single user instance, federation will be a challenge: Unless you follow loads of people from different servers, you'll miss out on interesting posts. And unless loads of people from different servers follow you, your posts won't be seen by others.

Relays help with this, by basically connecting you to loads of instances, and mirroring their federated feeds. But this comes at a cost, especially for a single user instance: After I had connected to a handful of relays, I was pulling through about 1.5 - 2 million posts per month. Yet, I only actually looked at a few hundred per day. So that's a huge waste. If those millions of posts contain images or other media, it'll consume a lot of storage too.

There are now two great solutions to this, which I think work great in combination:

  1. FakeRelay together with GetMoarFediverse. These are brilliant: GetMoarFediverse is a small script that you run (either locally on your server, via GitHub Actions, or similar). It scans remote instances that you specify (depending on your interests) for any hashtags anyone on your instance follows. It then feeds these posts (and only these) into FakeRelay. You subscribe to FakeRelay just as you would subscribe to any real relay, but the advantage is that it will only pull through posts that will actually appear on your home timeline, so posts you are definitely interested in. It's a game changer.

  2. relay.fedi.buzz works quite similarly, but you have less control: you tell it which hashtags to feed into your relay, and connect that to your instance. Unlike the previous option, it will not automatically adjust as you follow or unfollow hashtags, so it's potentially more work. It also most likely follows a lot more instances than you may set up with GetMoarFediverse, so you may see a lot more posts, which can be both a blessing and a curse, especially if you are prone to FOMO and feel you must read every post on your timeline.

Personally, I now prefer FakeRelay/GetMoarFediverse, as it's less problematic in terms of permissions and consent, even though it is slightly harder to set up. Ultimately, both of these make sure that content I'm interested in reaches my server, without overwhelming the server with content I'm not interested in. I also still use one medium-sized relay to push my own content out into the wider fediverse. You can get a list of good relays at

Fetching missing replies

Particularly if you are using relays and follow hashtags, you'll quickly come across the issue that you will be missing replies to posts in your home timeline. Additionally, the authors' profiles will often be empty.

FediFetcher is a really neat solution to this problem, that'll help you fetch all of those missing posts into your instance.

I might be a bit biased here, but I really cannot overstate just how huge a difference FediFetcher made to my enjoyment of Mastodon. In my opinion it's a must for small instances.

Single User Mode

Mastodon has a single user mode. Activating it disables registrations and redirects the front page to your profile.

To enable it, add the following line to your .env.production:

SINGLE_USER_MODE=true

Translation

When I originally set up my instance, I used LibreTranslate to translate posts. However, I have since switched to DeepL. Read my post on Translation options for Mastodon on why and how.

Full text search

I have also configured full text search for my Mastodon instance, so that I can search through posts by those who've opted into it. Read my post on Setting up Elasticsearch for Mastodon 4.2.x for details.

Tidy-up tasks

If you aren't careful, Mastodon's media storage will grow indefinitely: by default, Mastodon grabs a copy of each and every media file (image, video, link preview from posts, profile images, etc.) it encounters, and stores it forever more.

Mastodon offers the option to clear out this cache regularly, and you should definitely avail yourself of that. Since version 4 you can control this in the UI at Administration > Server Settings > Content retention > Media cache retention period, but I don't trust this, because I don't really know which caches it actually clears (Mastodon has three different caches to clear out). So I'm running the following commands as cron jobs, to clear out cached media older than 7 days:

0 1 * * * S3_READ_TIMEOUT=60 RAILS_ENV=production /home/mastodon/live/bin/tootctl preview_cards remove --days 7 # regularly purge link preview images from cache
0 0 * * * S3_READ_TIMEOUT=60 RAILS_ENV=production /home/mastodon/live/bin/tootctl media remove --days 7 --remove-headers # regularly purge header images from cache
0 2 * * * S3_READ_TIMEOUT=60 RAILS_ENV=production /home/mastodon/live/bin/tootctl media remove --days 7 # regularly purge media from cache

Puma optimisation

Puma is Mastodon's web server. Loads of optimisation tutorials focus (understandably) on helping it deal with more users, but that advice doesn't make sense for a single-user instance.

Maybe somewhat counter-intuitively, the one thing that made the biggest impact on the performance of my single-user instance was turning off concurrency. Concurrency helps Puma when multiple users browse your instance at the same time. With just a single user it isn't needed, and it consumed a fair bit of extra memory, which is limited on my server. So go ahead and set WEB_CONCURRENCY=0 in your .env.production.
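
In practice, that means the relevant lines of my .env.production look like this (MAX_THREADS is shown with Mastodon's default of 5, just for context):

```sh
# .env.production
WEB_CONCURRENCY=0   # run Puma as a single process - plenty for one user, saves RAM
MAX_THREADS=5       # threads within that process (Mastodon's default)
```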

Sidekiq

I originally split Sidekiq into multiple processes (one for each queue) to help speed up background processing.

This is widely recommended whenever you read about optimising Mastodon, but it appears to be another thing that really only applies to larger instances. I'm running my Mastodon instance on a small server with limited memory, and running multiple Sidekiq processes needs a lot of it. Ultimately, although it did slightly improve the processing of background tasks, it led to very significant performance degradation when actually using Mastodon, both through the web interface and through apps.

So this appears to be another aspect where optimising for a single-user instance requires the opposite of optimising for a larger one. I would suggest not splitting Sidekiq into multiple processes, unless you have plenty of RAM to spare. Instead, I'm running Sidekiq with the default configuration of 1 process and 25 threads.
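
For reference, this default corresponds to the mastodon-sidekiq.service unit that ships with Mastodon, which looks roughly like this (excerpt; paths may differ on your system):

```ini
# /etc/systemd/system/mastodon-sidekiq.service (excerpt)
[Service]
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=25"
ExecStart=/usr/bin/bundle exec sidekiq -c 25
```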

pgBouncer

I doubt you actually need pgBouncer on a single-user instance. I certainly didn't see any change in performance, but I wanted to try it out.

I followed this guide almost to the letter, so I won't reproduce it here.

Mostly this paragraph exists to help me remember the following:

Gotcha: You cannot use pgBouncer to perform db:migrate tasks. But this is easy to work around. If your postgres and pgbouncer are on the same host, it can be as simple as defining DB_PORT=5432 together with RAILS_ENV=production when calling the task, for example: RAILS_ENV=production DB_PORT=5432 bundle exec rails db:migrate (you can specify DB_HOST too if it's different, etc).
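
Assuming the common setup where pgBouncer listens on localhost:6432 (adjust host and port to your installation), the relevant .env.production settings look something like this:

```sh
# .env.production - point Mastodon at pgBouncer instead of Postgres directly
DB_HOST=localhost
DB_PORT=6432                # pgBouncer's port; Postgres itself stays on 5432
PREPARED_STATEMENTS=false   # required when pgBouncer runs in transaction pooling mode
```

Migrations then bypass pgBouncer by overriding DB_PORT on the command line, as described above.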

Updating Mastodon / glitch-soc

Again, this is mostly an internal note to myself: when updating glitch-soc from upstream, I first need to merge from glitch-soc/main, then run the following commands:

git pull
bundle install && yarn install
RAILS_ENV=production DB_PORT=5432 SKIP_POST_DEPLOYMENT_MIGRATIONS=true bundle exec rails db:migrate
RAILS_ENV=production bundle exec rails assets:precompile
RAILS_ENV=production bin/tootctl cache clear
RAILS_ENV=production DB_PORT=5432 bundle exec rails db:migrate
systemctl restart mastodon-{web,streaming,sidekiq}

If bundle install complains about rbenv: version x.x.x is not installed, run rbenv install x.x.x.