Wednesday, 17 December 2014

Cloud automation and orchestration: OpenStack & Control-M


When you want to automate or orchestrate your cloud(s), you have at least a few choices. Command-line tools and scripting are the most basic and rather "old-fashioned" way. Probably the best choice is a mix of an automation or orchestration tool and cloud management software like BMC Cloud Lifecycle Management, VMware vRealize or yet another one (e.g. Microsoft System Center, Red Hat CloudForms, etc.). Either way, most vendors of cloud computing software (or services) provide a web service interface for their solution, and it usually covers all or almost all levels of cloud management. Most of these web services are RESTful or REST-based.

In this post I would like to show how to start using these web services. For the example I will not use any high-level cloud management software (which you may not have or may not even plan to have). I am going to use the cloud platform directly; this particular platform also has orchestration capabilities, so some of the higher level is already built in.

As the title says, let's take a closer look at OpenStack and Control-M (as the automation tool).

Control-M already has a module for clouds. It is called Control-M for Cloud and does not require much knowledge about web services. Unfortunately, the current version (v7) directly supports only VMware vSphere, VMware vCenter (both use WSDL/SOAP) and Amazon EC2 (by the way, the EC2 SOAP interface is deprecated now and will not be supported after 2014). However...

...Control-M also has the Control-M for Web Services module (plug-in). Since version 8, the module supports not only WSDL/SOAP (as before) but also REST-based protocols, so it may be used with many other cloud computing products and services (VMware vRealize/vCloud, "non-SOAP" AWS, Microsoft, OpenStack, Xen, etc.).

OpenStack consists of several components, about ten in the current version. Most of them have a REST(ful) service interface, so we can use all of them within our automation tasks. OpenStack also includes a special component for orchestration - Heat. If you are familiar with the Amazon Web Services CloudFormation Query API, you may reuse your knowledge (and previous work) with Heat, because this OpenStack component provides a CloudFormation-compatible interface as well (in addition to the native one).

Heat works with templates which have been named HOT :-) (Heat Orchestration Templates). AWS CloudFormation templates can be used too. If we use the Heat REST service, our template may be provided from an external source (URL) or embedded in the HTTP(S) web service request.

The Heat service can be used like the other OpenStack web services, so our tasks and processes can evolve from plain automation to orchestration without much effort in this particular area.

For our example let's use a very basic and simple template:

heat_template_version: 2013-05-23
description: Simple sample template
resources:
   my_instance:
      type: OS::Nova::Server
      properties:
         name: TestInstance
         key_name: rooty
         image: RHEL 7 Base
         flavor: RHEL.Base
         networks:
           - network: int-net

Templates are usually written in YAML (a data serialization format), but the JSON (JavaScript Object Notation) format is also supported. With Control-M we will use JSON. The full HTTP/REST request body will look like this:

{
  "stack_name": "TestStack",
  "template": {
   "heat_template_version": "2013-05-23",
    "description": "Simple sample template",
    "resources": {
       "my_instance": {
          "type": "OS::Nova::Server",
          "properties": {
            "name": "TestInstance",
            "key_name": "rooty",
            "image": "RHEL 7 Base",
            "flavor": "RHEL.Base",
            "networks": [ { "network": "int-net" } ]
          }
       }
    }
  }
}

This particular template creates a simple stack which contains just one resource - a virtual machine instance with Red Hat Enterprise Linux on it.

HOT templates have many features and can be very useful. If you are a beginner here, I would recommend reading the Heat Template Guide. A full description of the template format can be found in the HOT specification.

Before we can use the Heat service or any other OpenStack service, we have to log in to the cloud platform and receive an authentication token. The OpenStack identity component (named Keystone) is responsible for this. It may be integrated with external identity providers/services, but in our case we will use user/password authentication.
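
With curl, the Keystone v2.0 token request looks roughly like this (a minimal sketch; the endpoint, tenant, user and password are assumptions to be replaced with your own values):

curl -s -X POST http://openstack.example.com:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "demo",
       "passwordCredentials": {"username": "demo", "password": "secret"}}}'
# The response contains access.token.id; this value must be sent
# as the X-Auth-Token header with every further request.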

It's nice that Control-M for Web Services supports the REST authentication process out of the box. As usual, we may define all the details in the Control-M Configuration Manager.

In our case I use the Heat (Orchestration) service for orchestration (to deploy and remove the stack) and the Nova (Compute) service for automation only (to stop and start the virtual machine), so two web service connection profiles have to be defined:



They may use the same or different OpenStack user accounts. Once we have the profiles, we can start working on the Control-M jobs, and then we should not forget about the special HTTP status codes of the OpenStack REST actions. If we do not handle these codes, a job can fail even when the result code is actually fine for our action.

So, here is the sample job for the orchestration (the deployment of the simple stack):






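Under the hood, the job performs an HTTP POST against the Heat endpoint. The equivalent request made with curl would look roughly like this (a sketch; the endpoint, the tenant ID and token variables, and the create_stack.json file name are assumptions):

curl -s -X POST http://openstack.example.com:8004/v1/$TENANT_ID/stacks \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d @create_stack.json    # this file holds the JSON body shown above
# A successful request returns HTTP 201 (Created) with the new stack's ID.
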
The next job removes the stack (and so the machine) when necessary:





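This one is an HTTP DELETE against the stack resource; roughly (again, the endpoint, tenant ID, token and stack ID are assumptions):

curl -s -X DELETE \
  "http://openstack.example.com:8004/v1/$TENANT_ID/stacks/TestStack/$STACK_ID" \
  -H "X-Auth-Token: $TOKEN"
# A successful deletion returns HTTP 204 (No Content).
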
But while we still have the machine running, sometimes we may need to stop it just for a while or do something else with it. OpenStack uses special IDs to distinguish between instances, so if we know only the machine name, we have to get the ID before we can perform the action.

The job to get the ID:


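The job simply queries the Nova API with a name filter; with curl it would be something like this (a sketch; endpoint, tenant ID and token are assumptions):

curl -s -H "X-Auth-Token: $TOKEN" \
  "http://openstack.example.com:8774/v2/$TENANT_ID/servers?name=TestInstance"
# The JSON response lists the matching servers together with their "id"
# values, which the following jobs use.
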
The next one stops the machine:




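Stopping is a POST to the server's action resource; a curl sketch (the variables are assumptions):

curl -s -X POST \
  "http://openstack.example.com:8774/v2/$TENANT_ID/servers/$SERVER_ID/action" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"os-stop": null}'
# Nova accepts the action with HTTP 202 (Accepted) - one of those special
# status codes the job has to treat as success.
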
After some time we may need to run the machine again:




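Starting is the same action request with a different body (a sketch, same assumptions as above):

curl -s -X POST \
  "http://openstack.example.com:8774/v2/$TENANT_ID/servers/$SERVER_ID/action" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"os-start": null}'
# Again, the expected status code is HTTP 202 (Accepted).
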
That's all for now. In case I have got you interested ;-) in the OpenStack REST services, they are described in the OpenStack API Complete Reference.

Anyhow, I hope you have found this post useful. As usual, any substantive comments, questions, requests or errata are very welcome.


Thursday, 27 November 2014

Deploying and patching Control-M/Agent through Puppet


BMC Control-M/Agent(s) can be deployed like most agent software. If you are a long-time BMC customer, you probably already have and use BMC BladeLogic Server Automation. But if Control-M is your only BMC tool and your infrastructure has a lot of Linux (virtual or non-virtual) machines with Puppet agents already installed, you may use Puppet to deploy the Control-M/Agent. Unfortunately, it's not easy to find any ready-made module for this task; what is more, Puppet is sometimes considered an alternative to Control-M. Of course, I don't think it's a real alternative, at least not yet.

Back to the modules: I had not found anything specific on the Puppet Forge, so I made my own. :-)

The modules (two classes in separate modules which I am sharing with you) are very simple but have useful parameters. For example, they can be used like this:

Class['controlmagent'] -> Class['controlmagent_fixpack']

class {'controlmagent':
   userLogin => "ctmuser",
   installDir => "/controlm/ctmagent",
   primaryServer => "ctm8server",
   authorizedServers => "ctm8server",
}

class {'controlmagent_fixpack':
   userLogin => "ctmuser",
}

The installation of Control-M/Agent (with the controlmagent module) consists of a few steps (a rough shell equivalent is sketched just after the list):
  • Checking that the agent is not installed yet (by examining the installed-versions.txt file)
  • Creating the temporary directory if it does not exist
  • Downloading the Control-M/Agent installation tar from the Puppet server (its general file repository)
  • Preparing the XML file for the silent installation
  • Extracting the Control-M/Agent installer from the tar
  • Running the installer
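
Roughly, the class automates the equivalent of these manual shell steps (a sketch only; the tar and XML file names are assumptions, not the actual names used by the module):

mkdir -p /tmp/ctmagent_install
cd /tmp/ctmagent_install
# the agent installation tar downloaded from the Puppet file server:
tar xf ctmagent_install.tar
# run the extracted installer in silent mode with the prepared XML file:
./setup.sh -silent /tmp/ctmagent_install/ctmagent_silent.xml
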
The second Puppet module (controlmagent_fixpack) does a job not much different from a typical manual installation of a Control-M/Agent Fix Pack (again, a shell sketch follows the list):
  • Checking that the agent is installed (by examining the installed-versions.txt file)
  • Checking that the Fix Pack is not installed yet (the same file)
  • Creating the temporary directory if it does not exist
  • Downloading the Control-M/Agent Fix Pack installation file from the Puppet server (its general file repository)
  • Shutting down the agent
  • Running the installer
  • Starting the agent again
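
In shell terms the class does more or less this (a sketch; the exact fix pack installer invocation depends on the fix pack file you download):

su - ctmuser -c "shut-ag -u ctmuser -p ALL"    # stop all agent processes
# ...run the downloaded fix pack installer in silent mode here...
su - ctmuser -c "start-ag -u ctmuser -p ALL"   # bring the agent back
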
Of course, the modules and classes could be more advanced but they should be good enough for typical scenarios.

While testing, you should get output like this:


The view of the same process in the Puppet Dashboard:




OK... I hope you have found this post and/or the modules useful. As usual, any substantive comments, questions, requests (for example, a special, more advanced version of the modules) or errata are very welcome.


Thursday, 13 November 2014

Monitoring Control-M with Nagios


BMC Control-M can be integrated with monitoring tools in many ways. For example, we can use the PATROL Knowledge Module (which additionally has a nice capability of automatic Control-M/Server failover switching, similar to what is typical for active/passive clusters) or the Control-M/EM integration with SNMP and external scripts (the XAlertsSendSnmp parameter). With other built-in mechanisms (e.g. SendSnmp, Remedy integration, BIM-SIM, etc.) we can monitor not only the Control-M processes but the jobs as well.

However, what if our monitoring system is Nagios or one of its derivatives, like op5 Monitor? Do we have to rely on Control-M/EM and SNMP?

Not necessarily. We can make use of the wide selection of Nagios plugins. If we cannot find a suitable plugin (one which could be used to monitor Control-M), we can write a new one :-) The Nagios plugin interface is not very complex, and even a shell script can be a plugin.

I have made a sample plugin for Control-M/Server 8 on Linux/UNIX. The plugin takes two input parameters: the Control-M/Server user's home directory and the Control-M/Server home directory. An example call is below:

/usr/lib64/nagios/plugins/check_ctmserver \
/home/ctmuser /home/ctmuser/ctm_server

The plugin calls the Control-M/Server shctm script and sources the .ctmprofile file (from the Control-M/Server user's home directory), so the Nagios user (e.g. the nrpe user) must have permissions to read and execute the scripts. I would recommend assigning that user to the Control-M/Server user's group.

When called, the plugin parses the shctm output and returns the information to Nagios.


Of course, it also returns an exit code: 0=OK or 2=CRITICAL (if some of the Control-M/Server processes are not working).
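
A heavily simplified sketch of such a plugin could look like the script below (the real plugin may differ; the shctm location and the exact text it prints are assumptions):

#!/bin/sh
USER_HOME=$1    # Control-M/Server user's home directory
CTM_HOME=$2     # Control-M/Server home directory

# source the profile so the Control-M environment is set up:
. "$USER_HOME/.ctmprofile"

# list the Control-M/Server processes and look for any that are down:
OUTPUT=$("$CTM_HOME/scripts/shctm" 2>/dev/null)
if echo "$OUTPUT" | grep -qi "not running"; then
    echo "CTMSERVER CRITICAL - some Control-M/Server processes are down"
    exit 2
fi
echo "CTMSERVER OK - all Control-M/Server processes are running"
exit 0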



That's all. I hope you have found this post and/or the plugin useful. Any substantive comments, questions, requests (for example, a special, advanced version of the plugin - with performance data, etc.?) or errata are, as usual, very welcome.


Wednesday, 5 November 2014

Load balancing with HAProxy and Control-M agents


HAProxy is well known as a tool for the HTTP(S) protocols, but not everyone knows it can also work as a load balancer for other TCP traffic. Can we use it with applications like BMC Control-M? Let me try to answer that question.

BMC Control-M has features like Host Groups and Host Restrictions (based on current CPU utilization and/or the number of jobs), which are the recommended ways of load-balancing jobs. However, since Fix Pack 1 (for Enterprise Manager, Server and Agent), Control-M 8 also supports network load-balancing routers, which can be useful too.

So now it depends on your IT infrastructure and scenarios which load-balancing mechanisms you can use or prefer to use. I can also imagine that both ways, or even three if you count Control-M's quantitative and control resources, may be used at the same time.

But let's go back to HAProxy.

In the simplest scenario we may have one load balancer (e.g. rh65haproxy) and two hosts (let's call them rh65haproxyA and rh65haproxyB) which are behind the LB router.

The configuration may look like this (in the file /etc/haproxy/haproxy.cfg):

listen controlm *:7006
    mode tcp
    option tcplog
    balance leastconn
    server rh65haproxyA 192.168.10.54:7006 check
    server rh65haproxyB 192.168.10.55:7006 check
Of course, this is only a very basic sample. HAProxy is a much more powerful utility and has a lot more features that can be set up. I would consider e.g. more of the server options (for example, the weight option) and also the use-server directives (in case of a more complex infrastructure or fail-over scenarios).

If you are a Red Hat systems user, the good news is that HAProxy is already part of Red Hat Enterprise Linux.
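
On RHEL 6 (our rh65* machines) the whole setup boils down to a few commands (a sketch; on RHEL 7 you would use systemctl instead of service/chkconfig):

yum install -y haproxy
haproxy -c -f /etc/haproxy/haproxy.cfg    # validate the configuration
service haproxy start
chkconfig haproxy on                      # start at boot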

Back to our scenario: after we set up the load balancer, the network load-balancing support feature must be enabled in the Control-M Configuration Manager. We have to go to the Control-M/EM System Parameters and look for the EnableLoadBalancerRouter parameter:


After recycling the Control-M Configuration Server we can define components of the new class - Network Load Balancer Router:


In the Control-M GUI (Monitoring) user interface, network load-balancing looks like Host Groups, but under the hood it's a bit different:


That's all. Happy load-balancing! :-)

And, as usual, any substantive comments, questions or errata are very welcome.


Wednesday, 22 October 2014

Installing RDO OpenStack on CentOS 7 with PackStack


Red Hat distributes OpenStack in two ways:
- RDO, the community-supported distribution, which is made the way Fedora is;
- Red Hat Enterprise Linux OpenStack Platform, the commercially supported distribution for Red Hat Enterprise Linux.

What does RDO stand for?

Let the RDO FAQ explain :-):
If you prefer, you can think of it as 'Really Darned Obvious', representing our view that it should be easy to deploy an OpenStack cloud using RDO. Or, possibly, 'Ridiculously Dedicated OpenStackers', representing our OpenStack engineering team and their passion about making this stuff work.
RDO is meant to be deployed on any Red Hat-based Linux, including CentOS. If you just want to try OpenStack and don't need commercial support, your choice may be RDO. Otherwise, I would recommend using RHEL OpenStack Platform with Red Hat Enterprise Linux, of course.

The simplest deployment of RDO can be done with the "all-in-one" method of the installer named PackStack. For more complex deployments the recommended way is to use other tools like Foreman, or even to install the OpenStack services manually.

I decided to test how the PackStack method works on CentOS 7. The PackStack tool is very dependent on the configuration of yum repositories, so before we use it on CentOS, we have to make sure the repositories are available and set up properly. In addition, it's better to have SELinux turned off during the installation. PackStack installs a special SELinux policy in the openstack-selinux package, so after the installation we can turn SELinux back on.

The RDO Quickstart document does not say much about these additional steps, so this blog post may be useful for anyone who would like to install OpenStack with PackStack on CentOS.

Any substantive comments, questions or errata are very welcome. 

So, the recipe is:

1. Check the CentOS yum repository configuration and update the CentOS packages.
2. Install yum repositories for the RDO installation:
yum install -y https://rdo.fedorapeople.org/rdo-release.rpm


3. Install the EPEL (Extra Packages for Enterprise Linux) repository:
yum install -y epel-release
4. Install the PackStack package:
yum install -y openstack-packstack
5. Enable the Puppet repositories:
yum-config-manager --enable puppetlabs*
6. Turn SELinux "off" (Permissive Mode):
setenforce 0
7. Install OpenStack "all-in-one" and tell PackStack not to forget about the EPEL repository for the dependencies:
packstack --allinone --use-epel=y


8. Turn SELinux on (Enforcing Mode):
setenforce 1
9. Now the OpenStack services should be enabled and the dashboard available:



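From the command line, a quick verification may look like this (openstack-status comes with the openstack-utils package; the keystonerc_admin credentials file is created by PackStack in root's home directory):

source /root/keystonerc_admin   # admin credentials generated by PackStack
openstack-status                # summary of the OpenStack services' state
nova list                       # should return an (empty) list of instances
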
Tomek