Advanced Deployment
Cloud administrators can manually specify which machines should host each of AppScale's critical services. This document outlines how to use this placement support to explore alternative deployment schemes that may give your users' applications better performance or greater fault tolerance.
In order to host your applications, AppScale employs the following components:
- Load balancer: The Ruby on Rails web service that routes users to their Google App Engine applications. It also hosts a status page that reports on the state of all machines in the currently running AppScale deployment.
- App Engine: Our modified version of the non-scalable Google App Engine SDKs, which adds the ability to store data to and retrieve data from databases that support the Google Datastore API. Throughout this document, we use the phrases "App Engine" and "AppServer" interchangeably, because internally our modified version of the App Engine SDK is called the AppServer.
- Database: Runs all the services needed to host the chosen database.
- Memcache: Provides caching support for App Engine applications.
- Login: The primary machine that is used to route users to their Google App Engine applications. Note that this differs from the load balancer - many machines can run load balancers and route users to their applications, but only one machine is reported to the cloud administrator when running `appscale up` and `appscale deploy`.
- ZooKeeper: Hosts metadata needed to implement database-agnostic transaction support.
- Shadow: Queries the other nodes in the system to record their state and ensure that they are still running all the services they are supposed to be running.
- TaskQueue: Implements Task Queue API support via the RabbitMQ message bus service and the Celery scheduling service.
- Open: Runs nothing by default, but the shadow machine can dynamically claim this node and use it as needed.
The default deployment employs an `ips_layout` in your AppScalefile that resembles the following:
```yaml
controller: 192.168.1.2
servers:
- 192.168.1.3
- 192.168.1.4
- 192.168.1.5
```
This deployment employs a controller and a number of servers. These "aggregate roles" each run a number of roles in the system:
- Controller: Shadow, load balancer, database, login, and ZooKeeper
- Servers: App Engine, database, and load balancer
- Master (not shown here): Shadow, load balancer, and ZooKeeper
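As a rough sketch (this is illustrative only, not AppScale's actual code; the role and function names are hypothetical), the expansion of these aggregate roles into the concrete roles each machine runs can be pictured like this:

```python
# Hypothetical sketch of how aggregate roles in an ips_layout expand
# into the concrete roles each machine runs. Names are illustrative.
AGGREGATE_ROLES = {
    "controller": ["shadow", "load_balancer", "database", "login", "zookeeper"],
    "servers": ["appengine", "database", "load_balancer"],
    "master": ["shadow", "load_balancer", "zookeeper"],
}

def expand_layout(ips_layout):
    """Return a mapping of each IP to the concrete roles it runs."""
    assignments = {}
    for role, ips in ips_layout.items():
        if not isinstance(ips, list):   # single-IP roles like 'controller'
            ips = [ips]
        # Roles not in the table (e.g. 'login') map directly to themselves.
        concrete = AGGREGATE_ROLES.get(role, [role])
        for ip in ips:
            assignments.setdefault(ip, []).extend(concrete)
    return assignments

layout = {
    "controller": "192.168.1.2",
    "servers": ["192.168.1.3", "192.168.1.4", "192.168.1.5"],
}
for ip, roles in expand_layout(layout).items():
    print(ip, roles)
```

Running this against the default deployment shows each server node picking up the App Engine, database, and load balancer roles, while the controller node picks up the remaining five.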
Change your `ips_layout` to specify exactly which machines should run which services in your deployment. Here's an example:
```yaml
master: 192.168.1.2
appengine:
- 192.168.1.3
- 192.168.1.4
database:
- 192.168.1.5
```
Since no machine has been specified as the login node, the master node automatically takes on this role. Therefore, one node (192.168.1.2) routes users to their App Engine applications, which are hosted at 192.168.1.3 and 192.168.1.4. Furthermore, in this deployment, those nodes host only App Engine applications, not databases as was the case in the standard deployment. Finally, one machine (192.168.1.5) hosts the database in the system.
Let's look at another example:
```yaml
master: 192.168.1.2
appengine:
- 192.168.1.3
- 192.168.1.4
database:
- 192.168.1.3
- 192.168.1.4
login:
- 192.168.1.5
```
In this example, one node (192.168.1.5) routes users to their App Engine applications and performs no other functions. Two nodes (192.168.1.3 and 192.168.1.4) host App Engine applications and host the chosen database. Finally, one node (192.168.1.2) queries the other nodes in the system to ensure they are running properly and handles transactions via ZooKeeper.
But how do you use this placement support in Eucalyptus and Amazon EC2? It's simple - just replace each of the IPs in your `ips_layout` with node-X (where X is an integer). Here's an example using the standard deployment:
```yaml
controller: node-1
servers:
- node-2
- node-3
- node-4
```
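Once the cloud infrastructure has spawned the machines, each node-X placeholder must be resolved to the IP of an actual instance. A minimal sketch of that substitution (the function and the placeholder-to-IP mapping shown here are hypothetical, not part of the AppScale tools):

```python
def resolve_placeholders(ips_layout, spawned):
    """Replace node-X placeholders with real instance IPs.

    `spawned` maps placeholder names (e.g. 'node-1') to the IPs the
    cloud infrastructure assigned to each spawned instance.
    """
    resolved = {}
    for role, value in ips_layout.items():
        if isinstance(value, list):
            resolved[role] = [spawned[v] for v in value]
        else:
            resolved[role] = spawned[value]
    return resolved

layout = {"controller": "node-1", "servers": ["node-2", "node-3", "node-4"]}
spawned = {"node-1": "10.0.0.1", "node-2": "10.0.0.2",
           "node-3": "10.0.0.3", "node-4": "10.0.0.4"}
print(resolve_placeholders(layout, spawned))
```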
And here's the second example again, but modified for use on cloud infrastructures:
```yaml
master: node-1
appengine:
- node-2
- node-3
database:
- node-4
```
And for completeness, here's the third example once more:
```yaml
master: node-1
appengine:
- node-2
- node-3
database:
- node-2
- node-3
login:
- node-4
```
Some databases (e.g., Cassandra) run in a peer-to-peer fashion, so all database nodes are considered equal. But others employ some type of master-slave relationship - how then do we specify which node is the database master and which are the database slaves? That's simple - the first database node specified is always the database master.
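For a master-slave database, the rule above amounts to a simple split on the ordered list of database nodes (a sketch under that assumption, not AppScale's code):

```python
def split_db_nodes(database_ips):
    """The first listed database node is the master; the rest are slaves."""
    if not database_ips:
        raise ValueError("at least one database node is required")
    return database_ips[0], database_ips[1:]

master, slaves = split_db_nodes(["192.168.1.3", "192.168.1.4"])
print("master:", master)   # the first node listed under 'database'
print("slaves:", slaves)
```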
As AppScale is designed to be a platform for experimentation, this placement support allows cloud administrators to trade performance and fault tolerance against one another in their system. Let's examine this through a familiar example:
```yaml
master: node-1
appengine:
- node-2
- node-3
database:
- node-4
```
Here, performance is likely to be better under low load because there is only one database node - many of the internal agreement protocols are vastly simplified when only one node is involved. However, that node is now a single point of failure in the system: if it goes down, users won't be able to read or write data.
Let's look at another familiar example, the default deployment:
```yaml
controller: node-1
servers:
- node-2
- node-3
- node-4
```
This deployment gives us four database nodes (one for each node in the system) and three App Engine nodes, giving us vastly better fault tolerance than the previous deployment. Of course, this is only with respect to the App Engine and database nodes - the shadow (specified as master in the previous example and controller in this one) is still a single point of failure. Ongoing work is looking to alleviate this problem.
This document outlines a number of ways by which cloud administrators can manually specify where AppScale's critical services should run. Explore the various deployment options that are now available, let us know what you're using, and enjoy!