Backing up your Firebase (3.x) database to AWS S3

When you want to back up your Firebase database manually, you can export it and download it to your computer. This, however, isn’t very convenient if you want to back up your database regularly, or if you are looking for an automated process. Below, I describe how I do a daily backup of my database to an AWS S3 bucket.

My database is quite small (less than 1 MB at the time of writing). Once your database grows past a certain size, this approach might not be feasible, since it downloads the entire database in a single request.

Setup & Prerequisites

Create a new directory on your computer and run npm init. Make sure your dependencies match the ones below (I use Babel to transpile my ES2015 syntax; if you use an older ES syntax, you don’t have to include the Babel devDependencies):
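A package.json along these lines should do it (the package name and version numbers here are illustrative; pin whatever is current for you):

```json
{
  "name": "firebase-s3-backup",
  "version": "1.0.0",
  "scripts": {
    "dev": "babel-node index.js"
  },
  "dependencies": {
    "aws-sdk": "^2.4.0",
    "firebase": "^3.3.0",
    "node-schedule": "^1.1.0"
  },
  "devDependencies": {
    "babel-cli": "^6.11.0",
    "babel-preset-es2015": "^6.13.0"
  }
}
```

If you go the Babel route, you’ll also need a .babelrc with `{ "presets": ["es2015"] }` so the npm run dev script can transpile.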


Now run npm install.

Next up, go to your Firebase console, create a new service-account credential file (a JSON key), and save it to your working directory.

Next, make sure you have a working AWS account and a bucket created for this project. Once you have created a bucket, go to your IAM console (Identity and Access Management) and create a new user; make sure you download the credential file for that user. The user needs full access to your newly created bucket. A ‘quick and dirty’ way of doing this is to go to IAM > Users > NewUser, select the ‘Permissions’ tab, click ‘Attach Policy’, and give it the ‘AmazonS3FullAccess’ policy.
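If you’d rather not grant full S3 access, a scoped inline policy along these lines is enough for uploads (my-backup-bucket is a placeholder for your own bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-backup-bucket",
        "arn:aws:s3:::my-backup-bucket/*"
      ]
    }
  ]
}
```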

That’s it. The project should be set up and ready.

Hooking up the services

Now we need to load our configuration files and authenticate with our two services (Firebase and AWS).

Pull in these four dependencies:

Everything below is in index.js
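Assuming the four are firebase, aws-sdk, node-schedule, and Node’s built-in fs, the requires look something like this sketch:

```javascript
// Third-party services and scheduling
const firebase = require('firebase');
const AWS = require('aws-sdk');
const schedule = require('node-schedule');

// Built-in module for reading credentials and writing backup files
const fs = require('fs');
```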

Then load the files and create the necessary references:
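With the dependencies in place, the wiring might look like this sketch (the credential file names, region, and database URL are placeholders for your own; note that Firebase 3.x took a serviceAccount option in initializeApp):

```javascript
// Load the AWS credentials saved in the working directory
// (assumes you've put the key pair into a small JSON file)
const s3Credentials = JSON.parse(fs.readFileSync('./aws-credentials.json'));

// Authenticate with AWS
AWS.config.update({
  accessKeyId: s3Credentials.accessKeyId,
  secretAccessKey: s3Credentials.secretAccessKey,
  region: 'eu-west-1'
});

// Authenticate with Firebase (3.x) using the service-account key
firebase.initializeApp({
  serviceAccount: './firebase-credentials.json',
  databaseURL: 'https://my-app.firebaseio.com'
});

// Reference to the root of the database
const db = firebase.database().ref('/');
```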

Run the project (npm run dev) and make sure there are no errors.

Now we can write the backup function which will do the following:

  1. Download the entire database
  2. Create a filename for the file
  3. Stringify the payload
  4. Write the file to disk (I do this so I have the file backed up on my server as well)
  5. Send it to S3

The function looks like this:

The S3 upload function works like this:

  1. Create an S3 reference with the Bucket and Key (filename)
  2. Create a read stream from the file that S3 can work with
  3. Initiate the upload
  4. Monitor the upload

And that’s it for a basic upload function. If you run this you should see the backup initiating and a JSON file appearing in your S3 bucket.

This isn’t very automated though. Let’s fix that.

Automating the process

The automation part is quite easy. We’ll make use of the node-schedule package to run our initiateBackup() function. You can find the repo here and read more about the syntax for setting the time schedule.

My very simple schedule runs my backup every day at 23:59.
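The schedule itself is only a few lines; node-schedule accepts cron-style strings (minute hour day-of-month month day-of-week):

```javascript
const schedule = require('node-schedule');

// Run the backup every day at 23:59
schedule.scheduleJob('59 23 * * *', () => {
  initiateBackup();
});
```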

And that’s it. As simple as that.

This very basic Node worker helps me back up my database every day.

I would love to hear your comments on how I could improve this or make it better.

Leave one below


