Introduction

IBM Operational Decision Manager

IBM Operational Decision Manager (ODM) is a comprehensive decision automation solution that helps you analyze, automate, and govern rules-based business decisions. It can authorize a loan, decide on promotional offers, or detect a cross-sell opportunity with high precision and customization, and it is available for on-premises and public or private cloud environments.

Instead of coding your own business rules, use ODM to configure and codify your business rules in a form that is understandable and accessible to everyone within your organization.

Some of the benefits of IBM Operational Decision Manager are:

  • Create and configure your business rules easily.
  • Modify your business rules at any time.
  • Edit your business rules using MS Word or MS Excel.
  • Test and validate your business rule configurations.
Amazon Web Services

AWS enables you to select the OS, programming language, database, and other services you need. The AWS Console allows you to quickly and easily host existing applications or newly developed ones. You pay only for the compute power, storage, and other resources you use, and you take advantage of a scalable, reliable, and secure global computing infrastructure. Your application can scale up or down, in or out, based on demand using AWS tools such as Auto Scaling and Elastic Load Balancing. AWS offers a secure infrastructure, including physical, operational, and software measures.

AWS Elastic Beanstalk

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You can upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

AWS RDS Aurora (PostgreSQL compatibility)

Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases.

Amazon Aurora is up to three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial databases at 1/10th of the cost. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

Infrastructure as Code

Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your data-center to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

Execution Plans

Terraform has a “planning” step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

Resource Graph

Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

Change Automation

Complex change sets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

Requirements

To follow this guide you need basic knowledge of AWS services, and you should be comfortable working with CLI (command-line interface) commands. You also need a code editor; we recommend VS Code. And of course, you need to be passionate about AWS Cloud technology.

Installation

To start, you need to download and install Terraform on your local machine. First download Terraform from this link, extract the downloaded file, and that's it. Yes, it is that simple. To test whether Terraform is working, open the command line and browse to the directory where you extracted the file, then run the following command:

terraform -v

You should see output like the following:

Terraform v0.12.21

And that's it: you have Terraform installed on your local machine. The next step is to design your infrastructure, and that's what we're going to do next.

Keep in mind when reading the rest of the article

  • Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values. These interpolations are wrapped in ${}, such as ${var.foo}. The interpolation syntax is powerful and allows you to reference variables, attributes of resources, call functions, and more.
  • Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to build on information defined outside of Terraform, or defined by another separate Terraform configuration.
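Both ideas look like this in practice (an illustrative sketch only; the subnet name, the tag variable, and the referenced VPC resource are hypothetical, not part of this project's modules):

```hcl
# A data source: the list of AZs is fetched from AWS, not defined here.
data "aws_availability_zones" "available" {
  state = "available"
}

# Hypothetical resource showing the interpolation syntax.
resource "aws_subnet" "example" {
  vpc_id            = "${aws_vpc.main_vpc.id}"                            // attribute of another resource (assumed defined elsewhere)
  cidr_block        = "10.0.1.0/24"
  availability_zone = "${data.aws_availability_zones.available.names[0]}" // value read from a data source
  tags = {
    Name = "${var.environment}-subnet" // variable interpolated inside a string
  }
}
```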
The infrastructure

The AWS infrastructure we are going to implement in this tutorial consists of: a VPC with the CIDR block “10.0.0.0/16”; an internet gateway for internet access; a route table; three subnets (two for the RDS clusters and one for the IBM ODM Decision Center and Decision Server); two security groups, one for the RDS clusters and a second for the IBM ODM application; two RDS clusters, one for Decision Center and a second for Decision Server; and an AWS Elastic Beanstalk application containing two environments, one for Decision Server and another for Decision Center.

The following architecture diagram gives a high-level overview of what the infrastructure will look like.

Project Directory
The project directory should contain the following files and subdirectories. We will go through each file in the rest of the article.
.
├── main.tf
├── modules
│   ├── beanstalk
│   │   ├── eb.tf
│   │   └── vars.tf
│   ├── bn_env
│   │   ├── bnenv.tf
│   │   └── vars.tf
│   ├── RDSAurora
│   │   ├── rds.tf
│   │   └── vars.tf
│   ├── SecurityGroups
│   │   ├── sg.tf
│   │   └── vars.tf
│   └── vpc
│       ├── networking.tf
│       └── vars.tf
├── providers.tf
└── README.md
Root directory
README.md

A README file is a text file that contains information for the user about the application. README files often contain instructions, additional help, and details about patches or updates. For now, the README file contains instructions on how to run the application.

providers.tf

The Amazon Web Services (AWS) provider is used to interact with the many resources supported by AWS. The provider needs to be configured with the proper credentials before it can be used.

The AWS provider offers a flexible means of providing credentials for authentication. The following methods are supported, in this order, and explained below:

  • Static credentials
  • Environment variables
  • Shared credentials file
  • EC2 Role

For the sake of simplicity, we will go with static credentials, but it is highly recommended to use an EC2 Role for the best security. Always try to follow the security best practices provided by AWS in the following link.

The content of the file is the following:

provider "aws" {
  region     = "us-west-2"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}

DO NOT SHARE YOUR ACCESS KEY AND SECRET KEY WITH A THIRD PARTY!
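If you would rather keep credentials out of the configuration file entirely, the environment-variables method from the list above is a drop-in alternative (the values below are placeholders; substitute your own):

```shell
# Placeholder values -- substitute your own credentials.
export AWS_ACCESS_KEY_ID="my-access-key"
export AWS_SECRET_ACCESS_KEY="my-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
```

With these set, the access_key and secret_key arguments can be dropped from the provider block.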

main.tf

main.tf should be the primary entry point. For a simple module, this may be where all the resources are created. For a complex module, resource creation may be split into multiple files, but any nested module calls should be in the main file.

Modules directory

Nested modules should exist under the modules/ subdirectory. Nested modules should be used to split complex behavior into multiple small modules that advanced users can carefully pick and choose. In this case study, we have split the infrastructure into different modules: networking, RDS, Security Groups, and Beanstalk. Following this approach is very useful when you have multiple resources that are repeated.

In principle any combination of resources and other constructs can be factored out into a module, but over-using modules can make your overall Terraform configuration harder to understand and maintain, so we recommend moderation.

A good module should raise the level of abstraction by describing a new concept in your architecture that is constructed from resource types offered by providers.

If the root module includes calls to nested modules, they should use relative paths like ./modules/VPC so that Terraform will consider them to be part of the same repository or package, rather than downloading them again separately.

vars.tf files serve as the parameters of a Terraform module, allowing aspects of the module to be customized without altering the module’s own source code, and allowing modules to be shared between different configurations.
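For example, a caller can override a module variable's default without touching the module's source (a sketch; the alternative CIDR value here is hypothetical):

```hcl
module "vpc" {
  source = "./modules/vpc"

  # Overrides the default "10.0.0.0/16" declared in modules/vpc/vars.tf.
  vpc_cidr_block = "10.1.0.0/16"
}
```

Variables declared with no default must be supplied by the caller, or Terraform will report an error at plan time.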

VPC Module

vars.tf file:

variable "vpc_cidr_block" {
  default = "10.0.0.0/16"
}

variable "rds_subnet_cidr_block" {
  default = "10.0.2.0/24"
}

variable "rds_subnet_two_cidr_block" {
  default = "10.0.3.0/24"
}

variable "odm_subnet_cidr_block" {
  default = "10.0.4.0/24"
}

variable "vpc_tenancy" {
  default = "default"
}

networking.tf describes the resources used for networking, as follows:

The Availability Zones data source allows access to the list of AWS Availability Zones which can be accessed by an AWS account within the region configured in the provider.

data "aws_availability_zones" "available" {
  state = "available"
}

VPC resource

resource "aws_vpc" "main_vpc" {
  cidr_block           = "${var.vpc_cidr_block}" // (Required) The CIDR block for the VPC.
  enable_dns_hostnames = true                    // (Optional) A boolean flag to enable/disable DNS hostnames in the VPC. Defaults to false.
  tags = {                                       // (Optional) A mapping of tags to assign to the resource.
    Name = "main vpc"
  }
}

Internet Gateway resource

resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.main_vpc.id}" // (Required) The VPC ID to create it in.
  tags = {                          // (Optional) A mapping of tags to assign to the resource.
    Name = "main vpc ig"
  }
}

Route table resource

resource "aws_route_table" "route_table" {
  vpc_id = "${aws_vpc.main_vpc.id}" // (Required) The VPC ID.

  route {                                        // (Optional) A list of route objects.
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}" // (Optional) Identifier of a VPC internet gateway or a virtual private gateway.
  }
}

Route table association

Provides a resource for managing the main route table of a VPC; here it points the VPC's main route table at the route table defined above.

resource "aws_main_route_table_association" "route_table_association" {
  vpc_id         = "${aws_vpc.main_vpc.id}"
  route_table_id = "${aws_route_table.route_table.id}"
}

Subnet resource

resource "aws_subnet" "subnet_name" {
  vpc_id            = "${aws_vpc.main_vpc.id}"                            // (Required) The VPC ID.
  cidr_block        = "${var.rds_subnet_cidr_block}"                      // (Required) The CIDR block for the subnet.
  availability_zone = "${data.aws_availability_zones.available.names[0]}" // (Optional) The AZ for the subnet.
  tags = {
    Name = "subnet name"
  }
}

Output values are like the return values of a Terraform module:

output "vpc_id" {
  value = "${aws_vpc.main_vpc.id}" // outputting the ID of the VPC resource
}

output "subnet_name_id" {
  value = "${aws_subnet.subnet_name.id}" // outputting the ID of the subnet resource
}

The exported values will be used in the application tier, as we’ll see later.
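In the root module, an output exported this way is read with the module.NAME.OUTPUT syntax. A small sketch of how the wiring looks (the full main.tf appears later in the article):

```hcl
module "securityGroup" {
  source = "./modules/SecurityGroups"
  vpc_id = "${module.vpc.vpc_id}" // value returned by the vpc module's "vpc_id" output
}
```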

Security Group module

vars.tf file:

variable "vpc_id" {}
variable "subnet_cidr_block" {}

sg.tf describes the resource used for security groups:

resource "aws_security_group" "allow_http" {
  name        = "allow_http"
  description = "Allow HTTP inbound traffic"
  vpc_id      = "${var.vpc_id}"

  ingress { // (Optional) Can be specified multiple times for each ingress rule.
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["CIDR/BLOCK"]
  }

  egress { // (Optional, VPC only) Can be specified multiple times for each egress rule.
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["CIDR/BLOCK"]
  }
}

RDS Aurora module

vars.tf file:

variable "rds_sg_id" {}
variable "rds_subnet_one_id" {} // for the DC subnet
variable "rds_subnet_two_id" {} // for the DS subnet
variable "odm_type" {}          // references whether the cluster is for DS or DC

variable "rds_db_name" {
  type = "map"
  default = {
    "dc" = "dcdb"
    "ds" = "dsdb"
  }
}

variable "rds_db_username" {
  type = "map"
  default = {
    "dc" = "dcAdmin"
    "ds" = "dsAdmin"
  }
}

rds.tf describes the RDS Aurora resources needed for our application, as follows:

data "aws_availability_zones" "available" {
  state = "available"
}

RDS DB subnet group resource

resource "aws_db_subnet_group" "rds_subnetgroup" {
  name       = "rds_subnetgroup"
  subnet_ids = ["${var.rds_subnet_one_id}", "${var.rds_subnet_two_id}"] // (Required) A list of VPC subnet IDs
  tags = {
    Name = "rds subnet group"
  }
}

RDS Aurora cluster resource. Note that the odm_type variable specifies whether we create a cluster for the Decision Center or the Decision Server; using the modular approach, we can set the odm_type value in the main.tf file and use this RDS module twice. That’s the beauty of modularity!
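For instance, the lookup() interpolation function resolves the per-cluster values from the map variables declared in vars.tf (an illustrative sketch, not part of the module itself):

```hcl
locals {
  # Illustrative only: with odm_type = "dc" these resolve to "dcdb" / "dcAdmin",
  # and with odm_type = "ds" to "dsdb" / "dsAdmin".
  db_name     = "${lookup(var.rds_db_name, var.odm_type)}"
  db_username = "${lookup(var.rds_db_username, var.odm_type)}"
}
```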

resource "aws_rds_cluster" "postgresql_cluster" {
  cluster_identifier     = "aurora-cluster-${var.odm_type}"                       // (Optional, Forces new resources) The cluster identifier. If omitted, Terraform will assign a random, unique identifier.
  engine                 = "aurora-postgresql"                                    // (Optional) The name of the database engine to be used for this DB cluster.
  availability_zones     = ["${data.aws_availability_zones.available.names[0]}"]  // (Optional) A list of EC2 Availability Zones for the DB cluster storage where DB cluster instances can be created.
  database_name          = "${lookup(var.rds_db_name, var.odm_type)}"             // database name
  master_username        = "${lookup(var.rds_db_username, var.odm_type)}"         // database username
  master_password        = "adminadmin"                                           // database password (demo only; never hardcode passwords in production)
  deletion_protection    = true
  skip_final_snapshot    = false
  vpc_security_group_ids = ["${var.rds_sg_id}"]                                   // (Optional) List of VPC security groups to associate with the cluster
  apply_immediately      = true
  db_subnet_group_name   = "${aws_db_subnet_group.rds_subnetgroup.name}"          // (Optional) A DB subnet group to associate with this DB instance.
}

RDS cluster instance resource

Provides an RDS Cluster Instance Resource. A Cluster Instance Resource defines attributes that are specific to a single instance in an RDS cluster, specifically one running Amazon Aurora.

resource "aws_rds_cluster_instance" "cluster_instances" {
  identifier           = "aurora-cluster-instance-${var.odm_type}"  // (Optional, Forces new resource) The identifier for the RDS instance. If omitted, Terraform will assign a random, unique identifier.
  cluster_identifier   = "${aws_rds_cluster.postgresql_cluster.id}" // (Required) The identifier of the aws_rds_cluster in which to launch this instance.
  instance_class       = "db.t3.medium"
  db_subnet_group_name = "${aws_db_subnet_group.rds_subnetgroup.name}"
  publicly_accessible  = true
  apply_immediately    = true
  engine               = "${aws_rds_cluster.postgresql_cluster.engine}"
  engine_version       = "${aws_rds_cluster.postgresql_cluster.engine_version}"
}

Output values:

output "rds_endpoint" {
  value = "${aws_rds_cluster_instance.cluster_instances.endpoint}"
}

output "rds_username" {
  value = "${aws_rds_cluster.postgresql_cluster.master_username}"
}

output "rds_password" {
  value = "${aws_rds_cluster.postgresql_cluster.master_password}"
}

output "rds_dbname" {
  value = "${aws_rds_cluster.postgresql_cluster.database_name}"
}

Beanstalk module

vars.tf

eb_service_role: (Required) The ARN of an IAM service role under which the application version is deleted. Elastic Beanstalk must have permission to assume this role.

variable "eb_service_role" {
  default = "arn:aws:iam::653258475801:role/aws-elasticbeanstalk-service-role"
}

eb.tf

Beanstalk application resource

Provides an Elastic Beanstalk Application Resource. Elastic Beanstalk allows you to deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications.

resource "aws_elastic_beanstalk_application" "odm" {
  name        = "odm-app" // (Required) The name of the application; must be unique within your account.
  description = "ibm operational decision manager application"
  appversion_lifecycle {
    service_role          = "${var.eb_service_role}"
    max_count             = 128  // (Optional) The maximum number of application versions to retain.
    delete_source_from_s3 = true // (Optional) Set to true to delete a version's source bundle from S3 when the application version is deleted.
  }
}

output "app_name" {
  value = "${aws_elastic_beanstalk_application.odm.name}"
}

Environment module (bn_env)

vars.tf

variable "Endpoint" {}
variable "dbUsername" {}
variable "dbPassword" {}
variable "dbName" {}
variable "app_version_source" {}
variable "bucket_name" {}
variable "odm_type" {}
variable "app_name" {}
variable "vpc_id" {}
variable "vpc_subnet_id" {}
variable "ec2_image_id" {
  default = "ami-0e2ff28bfb72a4e45"
}
variable "sgs_id" {}

bnenv.tf

The S3 object data source allows access to the metadata and content of an object stored inside an S3 bucket. We use this data source to retrieve the zip file used to deploy the Decision Center and Decision Server.

data "aws_s3_bucket_object" "ibm_odm_zip" {
  bucket = "${var.bucket_name}"
  key    = "${var.app_version_source}"
}

Beanstalk application version resource

Provides an Elastic Beanstalk Application Version Resource. Elastic Beanstalk allows you to deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications.

This resource creates a Beanstalk Application Version that can be deployed to a Beanstalk Environment.

resource "aws_elastic_beanstalk_application_version" "odm_version" {
  name        = "odm_v1"
  application = "${var.app_name}"
  description = "ibm odm version 1"
  bucket      = "${data.aws_s3_bucket_object.ibm_odm_zip.bucket}"
  key         = "${data.aws_s3_bucket_object.ibm_odm_zip.key}"
}

Beanstalk configuration template resource

Provides an Elastic Beanstalk Configuration Template, which is associated with a specific application and is used to deploy different versions of the application with the same configuration settings.

The setting field supports the following format:

  • namespace: unique namespace identifying the option’s associated AWS resource
  • name: name of the configuration option
  • value: value for the configuration option

You can find all the settings available for customizing your configuration template in this link.

resource "aws_elastic_beanstalk_configuration_template" "template" {
  name                = "odm-template-config-${var.odm_type}"
  application         = "${var.app_name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v3.3.2 running Tomcat 8.5 Java 8" // (Optional) A solution stack to base your template off of.

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = "${var.vpc_id}"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"
    value     = "true"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = "${var.vpc_subnet_id}"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "1"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = "1"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "ImageId"
    value     = "${var.ec2_image_id}"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t2.micro"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "SecurityGroups"
    value     = "${var.sgs_id}"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "endpoint"
    value     = "${var.Endpoint}"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "username"
    value     = "${var.dbUsername}"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "password"
    value     = "${var.dbPassword}"
  }
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "dbname"
    value     = "${var.dbName}"
  }
}

Beanstalk environment resource

Provides an Elastic Beanstalk Environment Resource. Elastic Beanstalk allows you to deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications.

resource "aws_elastic_beanstalk_environment" "env" {
  name          = "environment-${var.odm_type}"
  application   = "${var.app_name}"
  template_name = "${aws_elastic_beanstalk_configuration_template.template.name}"
  version_label = "${aws_elastic_beanstalk_application_version.odm_version.name}"
}

main.tf file

Here you will see the modular approach and output variables in use; take some time to read the following code.

module "vpc" {
  source = "./modules/vpc"
}

module "securityGroup" {
  source            = "./modules/SecurityGroups"
  vpc_id            = "${module.vpc.vpc_id}"
  subnet_cidr_block = "${module.vpc.odm_subnet_cidr_block}"
}

module "rds_dc" {
  source            = "./modules/RDSAurora"
  rds_sg_id         = "${module.securityGroup.rds_sg_id}"
  rds_subnet_one_id = "${module.vpc.rds_subnet_one_id}"
  rds_subnet_two_id = "${module.vpc.rds_subnet_two_id}"
  odm_type          = "dc"
}

module "rds_ds" {
  source            = "./modules/RDSAurora"
  rds_sg_id         = "${module.securityGroup.rds_sg_id}"
  rds_subnet_one_id = "${module.vpc.rds_subnet_one_id}"
  rds_subnet_two_id = "${module.vpc.rds_subnet_two_id}"
  odm_type          = "ds"
}

module "beanstalk" {
  source = "./modules/beanstalk"
}

module "env_dc" {
  source             = "./modules/bn_env"
  vpc_id             = "${module.vpc.vpc_id}"
  vpc_subnet_id      = "${module.vpc.odm_subnet_id}"
  sgs_id             = "${module.securityGroup.http_sg_id}"
  Endpoint           = "${module.rds_dc.rds_endpoint}"
  dbUsername         = "${module.rds_dc.rds_username}"
  dbPassword         = "${module.rds_dc.rds_password}"
  dbName             = "${module.rds_dc.rds_dbname}"
  app_version_source = "ODM_DC.zip"
  bucket_name        = "beanstalk-ibm-odm"
  odm_type           = "dc"
  app_name           = "${module.beanstalk.app_name}"
}

module "env_ds" {
  source             = "./modules/bn_env"
  vpc_id             = "${module.vpc.vpc_id}"
  vpc_subnet_id      = "${module.vpc.odm_subnet_id}"
  sgs_id             = "${module.securityGroup.http_sg_id}"
  Endpoint           = "${module.rds_ds.rds_endpoint}"
  dbUsername         = "${module.rds_ds.rds_username}"
  dbPassword         = "${module.rds_ds.rds_password}"
  dbName             = "${module.rds_ds.rds_dbname}"
  app_version_source = "IBM-DS-8921-Tomcat.zip"
  bucket_name        = "beanstalk-ibm-odm"
  odm_type           = "ds"
  app_name           = "${module.beanstalk.app_name}"
}
Running the scripts

Terraform is controlled via a very easy-to-use command-line interface (CLI). Terraform ships as a single command-line application: terraform.

In order to run your Terraform code, you need to follow these steps.

Command: init

The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

Command: plan

The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.

This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state. For example, terraform plan might be run before committing a change to version control, to create confidence that it will behave as expected.

Command: apply

The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.

When finished, you can run terraform destroy. This command is used to destroy the Terraform-managed infrastructure.
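Putting it all together, a typical run of this project from the root directory looks like the following (the -out flag is optional, but it guarantees apply executes exactly the plan you reviewed):

```
terraform init               # download the AWS provider and initialize the modules
terraform plan -out=tfplan   # preview the changes and save the plan to a file
terraform apply tfplan       # build the infrastructure described by the saved plan
terraform destroy            # tear everything down when you are done
```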

Conclusion

We hope we’ve given you a good idea of how you can leverage the flexibility of Terraform to make deploying IBM ODM on AWS less difficult, by using modules that logically correspond to ODM’s requirements.
