Parse Server Migration on AWS

With the upcoming Parse shutdown on January 28, 2017, you’re probably already thinking about what to do next. Luckily, the Parse developers released Parse Server as open source and made it possible to migrate to another platform. In this post, we’ll guide you through a seamless and cost-effective Parse database migration to AWS and MongoDB.

About Parse

Parse was founded in 2011 by Tikhon Bernstam, Ilya Sukhar, James Yu, and Kevin Lacker. It is a SaaS (Software as a Service) backend provider for mobile and web applications, offering services that help developers store data in the cloud, manage user logins, handle push notifications, and run custom server-side code. Parse was acquired by Facebook in the spring of 2013.
Parse allows you to easily build applications without writing your own backend. It comes with out-of-the-box features such as:

  • Data storage in a document-oriented database
  • File storage
  • Password-based and third-party authentication (for example, with Facebook)
  • Push notifications
  • Apple in-app purchase validation

Parse Database Migration Overview

The Parse Server migration on AWS follows three steps:

  1. Database migration
  2. Files migration
  3. Parse Server migration/deployment

Requirements:

  • AWS Account
  • MongoDB Cloud Manager Account

Migrate Parse DB to Self-Hosted MongoDB

Database Migration on AWS

Parse Server uses MongoDB, so you will need to set up your own MongoDB database to perform the migration. While there are several methods available, the Parse developers recommend using mLab for the migration.
mLab is a fully managed cloud database service featuring automated provisioning and scaling of MongoDB databases, backup and recovery, 24/7 monitoring and alerting, web-based management tools, and expert support. mLab’s Database-as-a-Service platform powers hundreds of thousands of databases across AWS, Azure, and Google. It allows developers to focus their attention on product development instead of operations.
Nevertheless, if we look at mLab’s pricing, it may not be the most cost-effective option. Instead, we’ll use MongoDB Cloud Manager to deploy a three-member replica set in our own AWS account.
MongoDB Cloud Manager is a service for managing, monitoring, and backing up a MongoDB infrastructure. In addition, Cloud Manager allows administrators to maintain a server pool to facilitate the deployment of MongoDB.
To start the database migration, we deploy a MongoDB replica set with three members on AWS using MongoDB Cloud Manager. For migrating the Parse DB, the official documentation advises that the MongoDB deployment should have at least three times the storage of the data we currently have in Parse (for example, 10 GB of Parse data calls for at least 30 GB of MongoDB storage).
A replica set is a group of MongoDB instances that hold the same data. The purpose of replication is to ensure high availability in case one of the servers goes down. This reference deployment supports replica sets with one or three members. With three members, the deployment launches three servers in three different Availability Zones (if the region supports it). For production clusters, AWS recommends a three-member replica set (Primary, Secondary0, Secondary1). Clients typically interact with the primary node for both read and write operations.
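Once the replica set is up (we deploy it below), you can sanity-check which member is the primary from the mongo shell. A minimal sketch, assuming you are connected as a user with enough privileges to run rs.status():

rs.status().members.forEach(function (m) {
  // Prints each member's host and its state, e.g. PRIMARY or SECONDARY
  print(m.name + " -> " + m.stateStr);
});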

MongoDB Cluster on AWS with a Three-Member Replica Set

 

                            Low Data Transfer Rate     High Data Transfer Rate
Parse Data Storage < 4 GB   m4.large, 40 GB storage    m4.xlarge, 80 GB storage
Parse Data Storage > 4 GB   m4.large, 80 GB storage    m4.xlarge, 80 GB storage

Instance size recommendation

Deploy a MongoDB Replica Set on AWS Using MongoDB Cloud Manager

  1. Log in to your MongoDB Cloud Manager account
  2. Click “Build New” and choose Amazon Web Services under New Cloud Servers
  3. Choose the “us-east-1” region and provide the Access Key ID and Secret Access Key for your AWS account
  4. Click “Create Replica Set”, enter a replica set name, and click “Continue”
  5. When asked whether to use Cloud Manager to back up your data, click “Yes”
  6. Configure your EC2 instances (see the Instance size recommendation table above)
  7. Click “Provision Servers” and wait. Once three server automation agents are visible, click “Continue”
  8. Click “Advanced Setup” in the top right, then click the wrench icon at the top right of the Replica Set topology view to open the Replica Set editor
  9. Set the version to the latest 3.0.x release and click “Apply”
  10. Click “Review & Deploy”, then “Confirm & Deploy”
  11. Expand “Advanced Options” and click the “Add Option” button, then add the startup option “failIndexKeyTooLong” with the value false
  12. Click “Apply”, then “Review & Deploy” your changes again
  13. Click “Add user” and create a new user with the specs below

database: “admin”
username: “<yourusername>”
roles: root@admin
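
This user is created through the Cloud Manager UI. For reference, a roughly equivalent mongo shell command is shown below; the username and password are placeholders, and the root@admin role corresponds to the root role on the admin database:

use admin
db.createUser({
  user: "yourusername",
  pwd: "yourpassword",
  roles: [ { role: "root", db: "admin" } ]
})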

Connect MongoDB Cloud Manager with your AWS account

Now you can test the connection to your MongoDB Replica Set.
MongoDB_ReplicaSet_URI:

mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]
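
A quick way to test the connection is a short Node.js script using the official mongodb driver (a sketch; host names, credentials, database, and replica set name are placeholders, and the API shown assumes a 3.x or newer driver):

// test-connection.js -- run with `node test-connection.js` after `npm install mongodb`
const { MongoClient } = require('mongodb');

// Replace hosts, credentials, database, and replicaSet with your own values
const uri = 'mongodb://admin:yourpassword@host1:27017,host2:27017,host3:27017/admin?replicaSet=rs0';

MongoClient.connect(uri)
  .then((client) => {
    console.log('Connected to the replica set');
    // Ping the server, then close the connection
    return client.db('admin').command({ ping: 1 }).then(() => client.close());
  })
  .catch((err) => console.error('Connection failed:', err));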

Once we’re connected to our three-member replica set, we need to create the user that will be used for migrating our data from Parse to MongoDB. We can create this user with the following commands; it’s mandatory for the user to have exactly these roles enabled.

use database_name
db.createUser({ user: "parse", pwd: "password", roles: [ "readWrite", "dbAdmin" ] })
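
Before starting the migration, you can confirm that the user exists with exactly these roles:

// Should list the readWrite and dbAdmin roles on your database
db.getUser("parse")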

After we’ve created the three-member replica set and our user, we need to export our data from Parse into the replica set.

Export Data from Parse into MongoDB

Parse App Management

  1. Go to the new Parse dashboard > “App Settings” and click on “Migrate to external database”
  2. Provide the MongoDB_ReplicaSet_URI
  3. Click the “Begin the migration” button
  4. Ensure that the data is written to your MongoDB Replica Set (a quick check is shown after this list)
  5. Once you’re satisfied with the database migration on AWS, you can finalize the transfer in the migration UI
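
To check that the data has actually been written, connect to the primary with the mongo shell and spot-check the migrated collections; the database and class names below are placeholders that depend on your app:

// Switch to the database you migrated into
use database_name
// Parse classes appear as collections, e.g. _User, _Installation, and your custom classes
db.getCollectionNames()
// Spot-check one class by counting its documents
db.getCollection("_User").count()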
Database migration finished

NOTE: The Finalize action is irreversible. You will not be able to switch back to the Parse-managed database, and you will be responsible for any backups and recovery of your data. Only proceed once you have backed up your new database.

Files Migration

Parse objects can have File fields that store files such as images or PDF documents. Parse keeps these files in its own AWS S3 bucket, and the Mongo database only stores references (URLs) to them. Migrating the database as described above copies the database data and file references, but not the files themselves. You must move the files to your own storage before January 28, 2017, or they will be deleted.
We recommend storing the files in an S3 bucket in your own AWS account.
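
Copying the files that already live in Parse’s hosted bucket is a separate step, but you should also configure Parse Server so that new files are written to your own bucket. Below is a minimal sketch of a Parse Server entry point using the S3 files adapter, following the parse-server-example pattern of the time; it assumes the parse-server and parse-server-s3-adapter npm packages, and the bucket name, region, mount path, and environment variables are placeholders:

// index.js -- minimal Parse Server setup that stores files in your own S3 bucket
var express = require('express');
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: process.env.DATABASE_URI,   // your MongoDB replica set URI
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL,
  filesAdapter: {
    module: 'parse-server-s3-adapter',
    options: {
      bucket: 'your-s3-bucket',            // bucket that will hold the files
      region: 'us-east-1',
      directAccess: true                   // serve file URLs directly from S3
    }
  }
});

var app = express();
app.use('/parse', api);                    // same mount path as PARSE_MOUNT below
app.listen(process.env.PORT || 1337);

If no access keys are configured, the underlying AWS SDK should fall back to the standard credential chain, so an Elastic Beanstalk instance profile with access to the bucket can be used instead of embedding keys.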

Parse Server Migration/Deployment on AWS

Since Elastic Beanstalk supports Node.js and scales the underlying infrastructure automatically, we see it as the best choice for running Parse Server on AWS.

To launch Parse Server on AWS Elastic Beanstalk, click the following link; this starts the AWS Elastic Beanstalk deployment flow.

AWS Elastic Beanstalk deployment flow

Once we’ve created the Elastic Beanstalk environment, we need to deploy our application to it. We’ll do this with eb deploy from the root folder of our application. In order to use the eb deploy command, we need the EB CLI (Elastic Beanstalk Command Line Interface) installed on our local machine.
With the EB CLI installed, we run the eb init command and select the us-east-1 (N. Virginia) region. After that, we choose the name of our application (the name we used when creating the Elastic Beanstalk environment).
Before deploying, we need to create the .ebextensions folder. In this folder, we’ll create a 01app.config file that passes the following configuration to our Elastic Beanstalk environment:

option_settings:
  aws:elasticbeanstalk:application:environment:
    PARSE_MOUNT: "/parse"
    APP_ID: <your app id>
    MASTER_KEY: <your master key>
    DATABASE_URI: <your database uri>
    NODE_ENV: "production"
    SERVER_URL: <your server url>
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm start"

Elastic Beanstalk Container Options

Elastic Beanstalk Environment Properties

By using the 01app.config file, we pass the necessary environment variables and the “npm start” command to our Elastic Beanstalk environment.
If you want to learn more about configuration files for Elastic Beanstalk, please check the following link.
Another way to deploy the application is to pack its source code into a .zip file and upload it to our Elastic Beanstalk environment (Elastic Beanstalk stores the uploaded bundle in S3).

Parse Dashboard

When Parse Server was open-sourced, it wasn’t immediately clear how to manage an application running on your own infrastructure. This is where Parse Dashboard comes in. It helps you manage the apps you have already moved to Parse Server, the apps still on Parse.com, and the apps still in development running on Parse Server on your development machine. You can even manage them all from the same dashboard.
Once the Parse Server migration on AWS is complete, the next step is to deploy Parse Dashboard on Elastic Beanstalk.
To do that, we download the Parse Dashboard application code, initialize the Elastic Beanstalk environment, and then deploy with the eb deploy command. We’ll use a config file inside .ebextensions to start Parse Dashboard and pass it the data it needs to communicate with our application.
To securely deploy the dashboard without leaking your app’s master key, you will need to use HTTPS and Basic Authentication. You can configure your dashboard for Basic Authentication by adding usernames and passwords to your parse-dashboard-config.json configuration file.
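
For reference, a minimal parse-dashboard-config.json looks roughly like this (the app ID, master key, URLs, and Basic Authentication credentials are placeholders):

{
  "apps": [
    {
      "serverURL": "https://www.yourparseapp.com/parse",
      "appId": "<your app id>",
      "masterKey": "<your master key>",
      "appName": "<your app name>"
    }
  ],
  "users": [
    { "user": "admin", "pass": "a-strong-password" }
  ]
}

The .ebextensions config file for the dashboard environment then looks much like the one we used for Parse Server: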

option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ID: <your app id>
    MASTER_KEY: <your master key>
    APP_NAME: "<your app name>"
    NODE_ENV: "production"
    SERVER_URL: <www.yourparseapp.com>/parse
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm start dashboard"

Parse Dashboard

Wrap Up

The Parse shutdown doesn’t mean the end of the apps that were built on it. Even Facebook said that they wouldn’t necessarily call it a “shutdown”; instead, they see it as a way of entrusting Parse to the community in the form of the open source Parse Server.
Surely, some time will pass before the community is fully involved in the development of Parse Server, especially when it comes to building the features that will be missed after the shutdown.
Nevertheless, it’s quite certain that we will still be using Parse Server after the official “shutdown” and in the years to come.
Naturally, the shutdown isn’t painless and will create some short-term issues due to the necessary migrations. However, having the Parse Server code in the community’s hands can benefit us all in the future.

So, goodbye Parse, and hello Parse Server.

 


Written by

Dzenan Dzevlan

Certified AWS Solutions Architect Associate and SysOps, highly experienced at producing multi-tier, high-availability infrastructure architectures in enterprise and AWS cloud environments, adhering to DR, business continuity, availability, and security best practices. Founder and co-organizer of the first AWS User Group in Bosnia and Herzegovina.

