
Docker and container technology are well known in the enterprise today. The simplified view of containers as a miniaturization of VMs suggests benefits of portability and faster startup times. But what is less apparent is the benefit they bring to the business. To understand this, we must first look at the various scenarios in which the technology can be applied. Just as Java technology applied to IoT or Android differs from Java applied to enterprise software, the benefits realized from any technology, along with its challenges, vary depending upon the context of its application.

In this post, we'll explore a couple of contexts in which container technology can be applied and how its benefits and challenges differ.

Containers for infrastructure optimization

This is the most common context. Here, containers are adopted by IT as a form of software packaging and distribution. Typically, IT expects to be provided with containers instead of application binaries by the development teams, so containers act as a sort of black box that holds the software and all its dependencies. Developers are required to package and deliver a set of container images along with relevant configuration files that describe how these containers may talk to each other (ports), what storage needs they have (volumes), and so on. From an IT standpoint this creates a homogeneous, black-box approach to deploying pretty much anything in the enterprise, which makes it especially suited to large, data-center-scale deployments.
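
To make this concrete, here is a minimal sketch of what IT might receive and run; the image name, port mapping, and volume path are hypothetical placeholders, not actual artifacts from any specific team.

# Run an image handed over by the development team, mapping the port
# and mounting the volume described in the accompanying configuration.
# (image name, port, and path are illustrative placeholders)
docker run -d --name orders-service \
  -p 8080:8080 \
  -v /data/orders:/var/lib/orders \
  registry.example.com/orders-service:1.4.2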

In this context, the application and adoption of container technology is largely IT-oriented. It favors IT over developers, as the latter need to do a lot of heavy lifting: converting their app binaries and dependencies into container images and pushing them into a container registry. Most container management platforms out there focus on giving IT the right tools to pull those images from a registry and provision them on a set of machines (physical or virtual). The focus of such platforms is purely on run-time aspects, such as container orchestration, with very little context of the app or the app stack itself.

The key benefit of approaching container technology in this context is the optimization of infrastructure resources. Platforms like Kubernetes were born out of the need to optimize infrastructure usage at very large scales (say, millions of containers). However, there are two points of caution. One, this may deepen the isolation between IT and developers, causing more throw-the-problem-over-the-wall scenarios. No matter how good the technology, experience tells us that de-siloed communication and collaboration is the path to hassle-free and rapid delivery of applications into production; hence, "DevOps". Two, it is questionable whether all applications are suited to such a black-box, hands-off arrangement between developers and IT, and the effectiveness of this approach in real-world usage remains to be seen.

Containers for rapid application delivery

In this case, application delivery teams adopt containers with the primary goal of speeding up time-to-market for their apps or products. Using the portability advantages of containers, development and DevOps engineers put together the app composition, wire together the various services/microservices (by use of service discovery), and set up configurations for the various environments. This context of container usage is more app-focused and less infrastructure-focused (though the resource-optimization benefits of containers accrue over time as more apps adopt containers for delivery). The approach is both design-time and run-time focused and favors the development and DevOps teams over IT. It seeks to make development teams self-sufficient in getting their apps into the hands of their users.
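
As a minimal, hypothetical sketch of such service wiring with plain Docker (the service and image names are placeholders), a user-defined network lets one container find another by name, which is the simplest form of service discovery:

# create a network so the app can reach its database by name
docker network create shop-net
docker run -d --name shop-db  --network shop-net -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
docker run -d --name shop-web --network shop-net \
  -e DB_HOST=shop-db -p 80:8080 shop-web:2.0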

Few platforms focus on these aspects: providing developers the tools required to automate the generation of container images, service versioning, and configuration for the app's multiple environments. The most important benefits of such platforms are rapid containerization of existing apps, rapid provisioning and configuration, and easy promotion of apps from one environment to another. Orchestration takes care of scalability and high-availability requirements, and these are configured entirely from an application perspective.

The greatest benefit for enterprises using containers for rapid application delivery is time-to-market for their apps rather than infrastructure optimization. As the market for containers matures further, expect to see the focus shift in this direction.

Introducing WaveMaker HyScale

WaveMaker HyScale is an app containerization and container management platform that takes the view that an application’s time-to-market is a far more important focus for enterprise business than infra-resource optimization. The platform is built ground-up with the application in mind, and every aspect is designed around the app's stack, the app's services, and the app's configuration. Hence there are very few (if any) aspects of the platform that require users to deal with the underlying container technology. In fact, HyScale makes it very easy for users to adopt the platform, and thereby adopt containers, without even needing to know Docker, use any Docker commands, or write any kind of build/deploy YAML configuration files.

HyScale lets development teams stay focused on the app while becoming self-servicing, allowing them to rapidly deploy and iterate on their app.

Contact us to learn more about how WaveMaker HyScale can empower your organization to achieve faster time-to-market with containers, without having to re-skill or re-tool your development workflows.

Enterprise application delivery is evolving day by day, and enterprises have discovered that traditional development methods do not address the needs of modern business. Many leaders continue to be in denial about the power of the digital trends that are radically transforming the business landscape. But in a world of adapt or perish, enterprises have to make changes and take steps towards transforming their architecture to create provisions for future trends in application delivery.

The new normal

Let's take a look at six trends that have urged enterprises to take action. Each of these trends is not an independent phenomenon but a group of closely related phenomena that not only influence but also act as catalysts for the others.

  1. Mobility: In the last few years, not only has the number of mobile devices surpassed that of PCs, but users now turn to their mobile devices first. Ever since mobile apps entered the enterprise scene, they have ushered in new forms of collaboration, communication, and business efficiency. The number of devices managed in the enterprise increased 72% from 2014 to 2015, and an employee now uses 3+ devices daily for work activities. With the diversity of screens and form factors exploding, enterprise mobility has become a key strategy for every business to empower and manage employee mobility in order to meet security, agility, and productivity demands.
  2. Consumerization: The distinction between expectations for consumer and enterprise applications has rapidly narrowed due to the impact of consumer-originated technologies on enterprises. 90% of enterprises say that the use of consumer or individual services for work is pervasive today, including Dropbox, Google, Skype, LinkedIn, Facebook, and other social networking sites; 49% of these services are used with IT approval, and 41% are not. To achieve the greatest user adoption and long-term success, there is a conscious effort to move away from a purely utilitarian approach to one that strives to deliver an experience that meets the same standards evident in consumer products.
  3. Containerization: Perhaps the biggest story in development and DevOps circles over the past couple of years has been the explosion of containers, with Docker driving the path toward developer and enterprise adoption. Docker’s rapid growth is already revolutionizing continuous delivery. The influence of containers continues to grow, and it is beginning to move beyond mere optimization to transformation of the way IT builds and delivers applications. Several enterprises are looking at containers as an alternative to virtualization and cloud computing, at least for long-tail business applications.
  4. API Growth: With the dawn of cloud computing and the proliferation of apps, companies are exchanging data and services at an ever-growing rate. APIs can increase agility by de-coupling and exposing business processes. The past few years, however, have seen such explosive growth that the API space is evolving more rapidly than ever before. In 2015, as many as 40 APIs were being added per week to the ProgrammableWeb directory, and the total number of APIs stood at around 15,000. The key thing to consider here is that these numbers cover only publicly available APIs and do not reflect private or internal API growth, which some estimate may even outnumber the public total. In the future, RESTful APIs will not only drive the exchange of data but also influence enterprise architecture.
  5. Data Deluge: The amount of data being generated globally is growing at a rate of 40% per year. Add to that the complexity of an ever-connected world of the Internet of Things: forecasts indicate that there will be 20.8 billion connected things (IoT) by 2020. As enterprises capture more data from more sources, they are bound to experience greater growth rates for both structured and unstructured data. Since data forms the crux of business applications, enterprises will have to prepare to manage data integration from disparate internal and external systems.
  6. Microservices: Microservices are small, single-purpose applications that collaborate using APIs to deliver services. Even though microservices have been around for a while, the increasing popularity of cloud computing, containerization, and APIs has made microservices more reliable. In many organizations, developers are already employing a microservices architecture, whether management knows it or not. Early signs indicate this approach to code management and deployment is helping companies become more responsive to shifting customer demands. Microservices are poised to take scalability and continuous delivery to the next level in the years to come.

It is time to face this new dynamic and begin to plan for your organization’s digital transformation. You need a fresh perspective to give you and your team a powerful voice in setting business direction. In the age of the customer, tech professionals must work with business executives to use technology to drive growth and delight customers.

I would be glad to understand how your organization is planning to deal with these trends. Also, please add to these trends if you feel that I have missed out on anything.

Organizations across the globe rely on WaveMaker to navigate the new normal. Our Rapid Application Development (RAD) platform is the most open, extensible, and flexible low-code platform and complements your existing enterprise application delivery. You can get started with a free, 30-day on-premise or online trial.

Are we bracing ourselves for a world where developers, testers, and other stakeholders in a development process work on their own sandbox containers?  The answer seems to be a resounding YES.  Let’s jump into the concept of container-driven development (CDD) - I think I just found a new acronym that’s going to be famous, in a world that’s begging for more acronyms :).  Or do you think DDD - Docker-Driven Development - sounds better than CDD? ;-) Anyway, let's jump in.

In an application life-cycle process, let's take the three biggest stages: Development, Testing, and Production.  This post is going to focus on how Docker containers can play an active role in each of these life-cycle stages and the benefits that can be derived from them.

Note, I am going to make assumptions (like just 3 life-cycle stages) to keep this post short and concise, but an enterprise is free to extrapolate and devise better ways of doing the same.

Develop in a container
Consider a scenario where every developer builds his application inside a container, which is provisioned exclusively for the project being developed.  These Docker-based containers are extremely lightweight.  They can be instantly provisioned on a developer laptop or on enterprise server infrastructure that can be accessed remotely through console utilities.  These development containers will be based on a Docker image that contains all the relevant utilities for a developer.  The Docker image itself can be stored in a Docker Registry.  Containers based on this image can be provisioned in a matter of seconds. For example, if a new developer joins a team, he can provision a container instantly with all the utilities (like the Eclipse IDE) pre-installed.  And finally, after the developer gets his app to work, he can push the image of the app build to the registry.  A Docker container for a developer can look like the Dev-Container shown in Fig 1.
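
A minimal sketch of that flow with plain Docker commands follows; the image names, developer name, and registry are hypothetical placeholders.

# provision a dev container from the team's image (IDE, JDK, Tomcat pre-installed)
docker run -it --name dev-alice -v ~/workspace:/workspace dev-image:latest
# ... build and verify the app inside the container ...
# snapshot the working app build and publish it to the registry
docker commit dev-alice registry.example.com/myapp:v1.1
docker push registry.example.com/myapp:v1.1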

A container has two primary configurations:

  1. The hardware configuration, which is a factor of three parameters: CPU, memory, and disk.  A sample configuration of Dev, Test, and Production containers is shown in Fig 2.
  2. The software stack contained in the container.  In this case, from Fig-1, you can see that the Dev-Container has Eclipse, Java, and Tomcat as part of it.  There is also a dedicated DB container used by the development team that contains MySQL.

Also note that utilities whose configurations change across release stages, such as DB settings, monitoring utilities, and log files, need not necessarily be part of the container.  They can be configured when the same container is provisioned in different release stages.
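
A minimal sketch of this, assuming hypothetical variable names and hosts: the same image is provisioned with different environment-specific settings, and nothing inside the image changes.

docker run -d -e DB_HOST=db.dev.internal  -e LOG_LEVEL=debug myapp:v1.1   # dev stage
docker run -d -e DB_HOST=db.prod.internal -e LOG_LEVEL=warn  myapp:v1.1   # production stage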

What are the benefits of using containers during development?

 Test in a container
Once the app-build image (see App v1.1 in Fig 1 & 3) is pushed into the registry by the developer, it indicates that the build is ready to be tested.  In pre-container days, the tester would set up a new physical machine or a VM instance, install the requisite software, and then deploy the app.  It never worked the first time, and the developer would just say, “It worked for me.”  All such excuses are going to be a thing of the past now that containers have emerged.  In my previous post on Docker, one of the benefits I highlighted was “guaranteed reproduce-ability”: by packaging your applications into containers, you can be sure that they will run wherever they are deployed just as they did in the development environment.  Hence, when a tester/QA provisions a container based on the app-build image, it's guaranteed to work as it did for the developer.
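
As a minimal, hypothetical sketch (registry, image, and port values are placeholders), the tester simply pulls the same app-build image and provisions a test container from it:

docker pull registry.example.com/myapp:v1.1
docker run -d --name myapp-test -p 8081:8080 registry.example.com/myapp:v1.1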
QA members can provision individual containers for themselves to focus on various aspects of the app, such as performance, or they can all work on just one provisioned test container and focus on a particular area/feature of the app.
QA can find bugs in the app and report them back to development in no time, and with a little automation this can speed up the overall delivery process. In fact, consumer-facing services/apps like Amazon do multiple releases in a very short span of time (an interval of one to a few days) using continuous delivery processes, and containers are going to make that even faster.  The idea of having a 1:1 mapping between the app being developed and the container also speeds up the delivery process with a lot more clarity, since the deployment of the app is now nothing but provisioning a container.
At this stage of the app lifecycle, the testing containers are provisioned and the lifecycle is as visually represented in Fig 3.

What are the benefits of using containers during testing?

 Deploy in a container
Once the app build has passed QA, it’s ready to be deployed into the production environment.  All that needs to be done is to provision a new production container (of a bigger size, as given in Fig-2) and it's all set.  The guaranteed reproduce-ability will make sure that the app deploys and works as effectively as it did in the dev and testing environments.  There are a few more steps, like migrations of the DB, services, etc., but that looks like a topic for another blog post.
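
A minimal sketch of provisioning a larger production container from the same image; the resource sizes, names, and ports below are illustrative placeholders, not the actual Fig-2 values.

docker run -d --name myapp-prod \
  --memory 4g --cpus 2 \
  -p 80:8080 registry.example.com/myapp:v1.1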
The instant provisioning and de-provisioning of containers offers tremendous flexibility to move to newer app versions with rolling upgrades or blue-green deployments.  The lightweight nature of containers also makes localized error detection and fixing extremely easy: all that needs to be done is to image the production container and move it into a debugging environment for a quick inspection.  An app in a container has also added fuel to the new microservices-based app architecture, where independent modules are deployed in separate containers and scaled independently.  The commoditization of hardware and container-based PaaS platforms offers tremendous benefits in instantly scaling the application.  Check out my previous blog for more details on localized error detection, horizontal scalability, and lightweight containers.

The complete app life cycle after the app is deployed into production should look something like this:

What are the benefits of using containers for the production phase? 

So, What next?
It all seems so easy, doesn't it? Not really.  There are a lot of unresolved issues in this app delivery process that are being, or are yet to be, addressed.  Some of them include...

 How can you solve these? Are there any tools that can ease the task for you? I would suggest you take a look at WaveMaker's case study on "How WaveMaker Got Faster, Better, More Agile with Docker".  It also touches on the general benefits of Docker and the issues of management complexity.

Over the last year and a half, if there is one thing that the world of IT, DevOps & Cloud would not have missed discussing, it would be Docker.  Docker is an app containerization technology that was developed by the erstwhile PaaS vendor DotCloud.  In fact, DotCloud was so enamored with the discovery that they remodeled their entire business strategy around Docker, renamed themselves Docker, and sold off their PaaS business.  And boy were they right!!!  Docker is nothing short of an epidemic now, and literally every company with any kind of interest in the world of IT, DevOps & Cloud has embraced it with open arms, including Google, Microsoft, Amazon, VMware, and more.

In this blog post, I will take you through the points that throw light on why Docker has indeed created so much buzz, and how a Docker-architected platform is superior to traditional cloud platforms using hypervisor-based virtualization, offering tremendous cost and performance benefits to the end user.

I am not going to explain “What is Docker” here.  There are a lot of articles on the web that explain that in more detail.  In fact, Docker has an "Understanding Docker" section that does the job pretty nicely.  This post is just going to answer the question, “Why is a Docker-architected PaaS platform superior?”

 

More bang for the buck:
Docker containers are lightweight compared to hypervisor-based VMs.  There is no guest OS for every virtual machine (VM) that is created; containers share the host OS resources instead.  This means that more containers (2-3 times more) can be packed into the same host machine compared to packing in VMs.

This higher container density has a tremendous impact on pricing and offers the user more bang for the buck.  In fact, at WaveMaker, we have achieved almost 80% savings on the operational costs of WaveMaker Online by using the Docker-architected WaveMaker Cloud platform on top of AWS.  I have to add, however, that the savings also include benefits from WaveMaker Cloud’s additional optimizations on top of Docker containers.  But how is it possible to achieve such savings? Let me try to explain that with an example.
Every time you provision a new virtual machine instance on EC2, you need to pay for it. Imagine 1500 users simultaneously logged onto WaveMaker Online: that would mean I need around 1500 EC2 instances provisioned.  That is a tremendous waste of resources, especially if the application is light.  But with containerization, we provide a container for every logged-in user, and a lot of containers can be packed inside a single EC2 instance, giving us this cost benefit.
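
A minimal sketch of the idea, assuming a hypothetical image name and memory cap: many small, memory-capped containers are packed onto a single instance instead of dedicating a VM per user.

# pack fifty small user sessions onto one host instead of fifty VMs
for user in $(seq 1 50); do
  docker run -d --name studio-user-$user --memory 256m studio-image:latest
done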

Easy updates to higher app versions:
With lightweight Docker containers, release management of applications, especially upgrading to newer app versions, has changed.  App version upgrades using old-school approaches are not possible without server downtime, and hence business continuity is lost.
With containers, the newer version of the app is provisioned in separate containers alongside the containers running the current version.  Once the new version of the app has stabilized, the older version is phased out and its containers de-provisioned.  This approach is called a rolling upgrade.  Rolling upgrades would not be possible with such ease if not for Docker images (see Fig 2), a concept of snapshotting the container with the application and its dependencies.
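
A minimal sketch of such an upgrade with plain Docker commands; the names, ports, and the load-balancer step are hypothetical.

# bring up the new version alongside the old one
docker run -d --name myapp-v1.2 -p 8082:8080 registry.example.com/myapp:v1.2
# repoint the load balancer / reverse proxy to port 8082 and verify the new version,
# then retire the old container
docker stop myapp-v1.1 && docker rm myapp-v1.1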

Faster start-up times for horizontal scalability:
Lightweight Docker containers can be provisioned in a matter of milliseconds, compared to the few minutes needed to provision a hypervisor VM instance.  This is because Docker containers use a layered approach to mounting the file system: instead of having to make full copies of whatever files comprise a container, Docker references existing files in a read-only layer.
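
As a rough illustration (assuming the ubuntu:14.04 image is already present locally), starting a container, running a trivial command, and tearing it down takes well under a second:

time docker run --rm ubuntu:14.04 /bin/true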

This commoditization and instant availability of hardware resources has brought the idea of horizontal scalability (see Fig 4) back into focus, where a new container can be provisioned instantly as the app load increases.  This on-demand scalability is achieved in style and will forever change the way applications are architected for scalability.

Faster app delivery through continuous deployment:
Docker images are a way to snapshot the app and all its dependencies.  Images are the templates from which containers are provisioned.  These images are extremely lightweight and can easily be pushed to different app life-cycle stages like development, testing, and production (see Fig 5).  This facility, along with guaranteed reproduce-ability (explained in the next section), is a huge deal for release management, because in traditional development approaches significant time goes towards dependency resolution.

Guaranteed reproduce-ability:
A typical scenario in enterprise systems is to have scripts to deploy apps to a server.  However, script executions vary across environments, which differ in parameters like time, hardware, software versions, etc. By packaging your applications into containers, you can be sure that they will run as tested wherever they are deployed. In summary, there is a guarantee of reproducing the same behavior wherever the app is executed.

Enforces certified software usage:
Docker registries (see Fig 6) are components that hold Docker images. These are public or private stores to which you can upload images and from which you can download them. The public Docker registry is called Docker Hub. Enterprise IT teams have the control to make available only IT-certified software components (as Docker images) through these registries.  This enforces certified software use across the organization.
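
A minimal sketch of the idea, with a hypothetical internal registry host and image name: developers pull only IT-certified base images from the enterprise registry instead of arbitrary public images.

docker pull registry.corp.example.com:5000/certified/tomcat7-java7:1.0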

Better error detection and recovery:
Isolated containers running individual apps also offer targeted error detection and correction, without affecting other parts of the application.  This becomes especially valuable considering that lightweight containers offer you quicker error-recovery options.
Consider that container C1 is up and running with image I1.  A new version of the image, I2, is created and C1 is provisioned again.  However, due to an error that got introduced into the system, the app is down.  The user can quickly snapshot the current state of the container that has the error and create a new image, say I3.  C1 is then re-provisioned with image I1, which used to work correctly before.  I3 is sent across to the development team with all the logs and other details to be examined for issues and corrective measures.  This is powerful, since the developer has a snapshot of the problem from a live environment, allowing faster debugging of the issue.
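
A minimal sketch of that recovery flow with plain Docker commands; the container and image names below map to C1/I1/I3 in the scenario above but are otherwise hypothetical.

docker commit c1 registry.example.com/myapp:i3-debug   # snapshot the failing state as I3
docker push registry.example.com/myapp:i3-debug        # hand the snapshot to the dev team
docker stop c1 && docker rm c1
docker run -d --name c1 registry.example.com/myapp:i1  # roll back to the known-good image I1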

With containerization providing a wide array of benefits covering cost, effort, and time, it is just a matter of time before cloud platform vendors adopt containerization as the default architectural style.

WaveMaker recently released its PaaS software, WaveMaker Cloud, which is architected using Docker containerization technology.  If you would like to know more about the newly released product, contact us or request a demo at WaveMaker.

WaveMaker hosts applications for thousands of users built using wavemakeronline.com. Keeping the promise of free hosting for up to 300 days wasn’t an easy target; even the smallest of VMs would not have been economical enough. This is when, a year and a half ago, we explored containerization and saw that Docker was providing very usable tools for it.  We were able to break instances into smaller partitions using container technologies and accommodate many more users on the same infrastructure.  Also, as containers provide a lighter form of virtualization and can be launched in a few milliseconds, we could start the whole container on a user web request.  This led us to further optimize the deployments by passivating applications not being used.

Building this platform gave us a lot of learning, which we want to bring to enterprises. Docker-based infrastructure provides containers that can be shipped to different environments with minimal effort and configuration. However, enterprises find it challenging to leverage these technologies, as they are still scattered.

WaveMaker Enterprise provides an enterprise solution that helps enterprises avoid this complexity and achieve advantages including simplified and automated application delivery and optimized resource utilization, along with other enterprise features such as monitoring at all levels, role-based access control, release management, and data backup and recovery.

What WaveMaker Cloud offers

For an idea of what WaveMaker Cloud offers you, take a quick look at the WaveMaker Cloud page.  Let me list the top features here for the sake of completeness:

- Docker architected private cloud with Optimized Resource Utilization

- Micro-service Architecture for application deployment, with easier versioning and upgrades of applications and the platform

- IT Certified Software Stacks provisioning and configuration

- Continuous Delivery with reusable Software Stacks and minimum configuration

- Comprehensive monitoring and management of applications, containers, and infrastructure

How does it all work?

In a Micro Service Architecture based application deployment scenario, these are typically the steps followed:

1. The IT Administrator creates the private cloud environment using infrastructure either from their data center or from IaaS providers.

2. The Enterprise Architect defines the various microservices and their software dependencies for deployment.

3. The DevOps Engineer is then responsible for provisioning hardware from the private cloud for all the services defined by the Enterprise Architect and deploying the services.

In the world of WaveMaker Enterprise Cloud, the above steps translate into the following steps...

1. The IT Administrator easily creates a Docker-based Private Cloud infrastructure using WM Cloud’s DIY-based cloud management console. He creates shards to represent the dev, staging & production environments and adds physical instances to them, where containers get provisioned. The shards are available for selection when applications are deployed.

2. The Enterprise Architect creates an Application Stack, which represents the collection of configuration details of all the microservices (Services). A Service itself would have been created either by the EA or by a team of app developers working on that particular microservice.  The service configuration can contain one or more software components called Packages, which define it.  For example, a web service can have the packages Apache Tomcat and the Java runtime.

3. DevOps then uses the service configurations, and the package definitions are snapshotted as a Docker Image which is stored in a Private Enterprise Registry. DevOps is also responsible for configuring the provisioning requirements for each of the services.  These requirements range from selecting the size of the container (viz. XL, L, M, S) to the number of containers per service for scalability.

In action with a WordPress application

In this post we will see how to quickly set up deployment of a WordPress application and replicate it consistently with minimal effort and configuration.

Let’s start with some terminology.

Application Environment

Any application can be deployed to one or more environments, referred to as Application Environments. These can be configured for different stages of application delivery (e.g. staging, production) or for other purposes, such as a customer-dedicated environment.

Application Stack

An Application Stack represents a reusable, deployable application with all its services in a logical group. It can be deployed on more than one application environment, expediting application delivery; only minimal configuration is required to deploy an existing Application Stack on a different environment. An Application Stack may consist of multiple services.

Services / Images

A Service (Image) represents a micro-service in the deployment of an application; e.g. MySQL and Apache2 can be two different services in an application stack.  A Service is based on an Image that consists of multiple packages. The Image actually maps to a Dockerfile.  It may require some configuration variables, which can be supplied when the Application Stack is configured in a new Application Environment.

Packages

A service consists of multiple packages; e.g. a Tomcat service will need the Java and Tomcat packages. A package in WaveMaker Enterprise can also specify the configuration variables needed at runtime. The values of these configuration variables can be provided when the Application Stack is deployed on a specific environment.

A Package can easily be configured with a few scripts in WaveMaker Enterprise.  Existing scripts can be used to create a package.

Base Images and Private Image Registry

A Service (Image) can be created on top of one of the base images. Base images are Docker images hosted in a private image repository in an enterprise. A new base image can be downloaded from the Docker repository, or it can be created and uploaded to the private image registry.

Illustrative Example using WordPress Application

In this example, we will take the simple case of deploying a WordPress application on WaveMaker Enterprise. We will simplify it to use only one service consisting of the packages Apache2 with PHP, MySQL, and WordPress. You need to have WaveMaker Enterprise installed and to be registered as a valid user on it.

Let’s create the packages first.

 

MySQL Package

Provide information including the name and version of the package. In addition, the vendor and the OS on which this package will run can be provided.

You can specify the files needed for the installation, configuration, and startup of this package. These can be the script files that you need to execute, or other files required by the package. They will be uploaded to a directory named /wm/<packagename>, where the package name is the name you specified in the previous step.

Other than this, we need to provide the commands for these packages. The following are the commands:

Installation Commands [Executed during installation]

chmod 777 /tmp && apt-get -y install mysql-server-5.6 mysql-client-5.6

Configuration Commands [Executed during configuration, uses the value configured for configuration variables during Application Environment creation, e.g. MYSQL_ROOT_PASSWORD]


# Listen on all interfaces and permit the root user to log in remotely
sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
/usr/bin/mysqld_safe &
sleep 10
#change root password, the password value is taken from env property named MYSQL_ROOT_PASSWORD
mysql -uroot -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION";
mysql -uroot -e "UPDATE mysql.user SET Password=PASSWORD('$MYSQL_ROOT_PASSWORD') WHERE User='root'";
mysql -uroot -e "FLUSH PRIVILEGES";

Startup Commands [Executed to start the package component. E.g. MySQL in this case.]

             /usr/bin/mysqld_safe &

In this step, you specify the ports that need to be exposed for this service to be consumed. Also, the configuration variables required by the package need to be added in the Required field. E.g. MYSQL_ROOT_PASSWORD in the screenshot above.

Exposed configuration variables are those set by this package and can be consumed by other packages in the service. E.g. JAVA_HOME can be configured by the installation of a Java package and can be consumed by the dependent packages.
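
As a minimal, hypothetical sketch (the paths and commands are illustrative), one package's configuration script exposes a variable that a dependent package's startup script consumes:

# In the Java package's configuration commands (exposes JAVA_HOME):
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
# In a dependent Tomcat package's startup commands (consumes JAVA_HOME):
"$JAVA_HOME/bin/java" -version && /opt/tomcat/bin/startup.sh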

Apache2 with PHP

Installation Commands

# apache installation
apt-get -y install apache2 libapache2-mod-php5

#PHP installation
apt-get -y install php5-mysql pwgen php-apc php5-mcrypt

Configuration Commands
sed -i "s/KeepAliveTimeout 5/KeepAliveTimeout 300/g" /etc/apache2/apache2.conf

Start Commands
/usr/sbin/apache2ctl start

Specify ports HTTP/80 and HTTPS/443 on the configuration page.

WordPress Package

Installation Commands

# Install WordPress
wget -O /tmp/wordpress.zip https://wordpress.org/latest.zip
cd /var/www/html
unzip /tmp/wordpress.zip

Configuration Script

cd /var/www/html/wordpress
# db creation for wordpress requires env properties such as MYSQL_ROOT_PASSWORD, WORDPRESS_DB_USER, WORDPRESS_DB_PASSWORD
mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "create database wordpress";
mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "CREATE USER wordpress@localhost IDENTIFIED BY 'wordpress123';"
mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "GRANT ALL PRIVILEGES ON wordpress.* TO wordpress@localhost;"
mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "FLUSH PRIVILEGES;"
cp wp-config-sample.php wp-config.php
sed -i "s/database_name_here/wordpress/g" wp-config.php
sed -i "s/username_here/wordpress/g" wp-config.php
sed -i "s/password_here/wordpress123/g" wp-config.php
cd /var/www/html
chown -R www-data.www-data *
mkdir wordpress/wp-content/uploads

All Packages

Now we need to create the Service/Image with these packages.

Let’s create an image named wordpress, choosing the Ubuntu 14.04 base image, and give it version 1.0.

Now add the packages we created to this Image and arrange them in the right order by simple drag and drop. The WordPress package is kept at the end, as it depends on the other two packages.

You can see the Dockerfile generated for the Image; the other scripts involved can also be inspected.
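
For illustration only, here is a rough, hand-written approximation of what such a generated Dockerfile might resemble, based on the Ubuntu 14.04 base image, the /wm/<packagename> file locations, and the install/configure/start commands defined above; the actual file produced by WaveMaker Enterprise may differ, and the script names are hypothetical.

cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
# package files are placed under /wm/<packagename>
COPY mysql /wm/mysql
COPY apache2-php /wm/apache2-php
COPY wordpress /wm/wordpress
# run each package's installation commands at build time
RUN /wm/mysql/install.sh && /wm/apache2-php/install.sh && /wm/wordpress/install.sh
EXPOSE 80 443
# configuration and startup commands run when the container is provisioned
CMD ["/wm/start.sh"]
EOF
docker build -t wordpress:1.0 .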

You can see the logs of Docker Image build and push.

After this, we can configure a service from an Image, providing default values for the configuration properties. These properties can be modified later while configuring the environment.

Now let’s create a reusable Application Stack

You can create an Application Stack by selecting the services you need to add. Here we are adding the WordPress Service we just created.

Instantly provision Environment with the WordPress Application Stack

Provide a version, the application stack, and the shard to which we need to deploy the WordPress application. As we saw earlier, shards are configured by the IT Administrator for logical slicing of the enterprise cloud. Please select the default shard here.

Also, we need to provide the number of nodes and the strategy for balancing the load; I am choosing a simple Round Robin strategy.  The container type decides the size of the resources available to the WordPress application.

We can configure the server URI for application deployment.

Values of configuration variables can be modified here for environment-specific values.

Ready with WordPress Application

Access the application on https://wordpress.demo.nwie.net/wordpress/index.php

Summary

We saw in this blog how simple it is to create a reusable application stack and deploy it in any environment with minimum configuration. This provides great optimization and continuous deployment of enterprise applications, saving long release cycles.  Please contact info@wavemaker.com for a demonstration and more information.