Monday, January 22, 2018

Plan of Happiness


Check out the new video uploaded by Darren on January 21, 2018 at 10:26 PM. Heavenly Father has a Plan of Happiness for us. This short video shows His plan and answers the questions: Who am I? Why am I here? And where do I go after I die?

Thursday, January 18, 2018

Building Hybrid Cloud


Check out the new video uploaded by Darren on January 18, 2018 at 10:48 AM. Integrating Cloud Management Platforms, PaaS, and automation frameworks helps deliver hybrid clouds.


Saturday, January 6, 2018

Hybrid Cloud Adoption


Check out the new video uploaded by Darren on January 6, 2018 at 11:53 AM. What is driving IT transformation? Why are developers moving to the public cloud, and how is IT leveraging hybrid cloud technologies to capture developers' workloads?

Wednesday, November 15, 2017

Sails 1.0 added the concept of actions to the architecture. This gave me the idea to add actions to the bouquet generator suite. An action is basically a function that is called when a route in a controller is accessed. Each action lives in its own file, which makes life very easy for generators.

Bouquet Actions

I recently (Nov 2017) extended bouquet to handle the creation of actions for controllers. The concept behind this is to auto-generate tests, a command-line interface, and controllers for the actions created.

Pattern


  1. An action is created for a specific controller, in the controllers/<controller name> directory.
  2. A corresponding binary is created to access the action: bin/<projectName>-<controller>-<action>.
  3. Next, a test for the binary is created in the test/bin directory: <controller>-<action>.test.js.
  4. Finally, a set of test cases is created for the action via the controller: test/integration/<controller>-<action>.test.js.

Here is a breakdown of what gets created.


  • api/controllers/<controller>/<action>.js
  • bin/<project name>-<controller name>-<action name>     
  • test/bin/<controller-name>-<action-name>.test.js
  • test/integration/<controller-name>-<action-name>.test.js

Usage

$ sails generate bouquet-Action <controller> <action>
In this example I am generating an action named create for the stack controller.
$ sails generate bouquet-Action stack create 
This will generate:

  • api/controllers/stack/create.js
  • bin/bouquet-stack-create
  • test/bin/stack-create.test.js
  • test/integration/stack-create.test.js
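
For illustration, here is a rough sketch of what a generated action file like api/controllers/stack/create.js could contain, using Sails 1.0's standard "actions2" format. The input, exit, and function body shown here are placeholders to show the shape of an action, not the exact template that bouquet produces.

module.exports = {

  friendlyName: 'Create',
  description: 'Create a stack.',

  inputs: {
    // Hypothetical input; the generated template may define different inputs.
    name: { type: 'string', required: true, description: 'Name of the stack to create.' }
  },

  exits: {
    success: { description: 'The stack was created.' }
  },

  fn: async function (inputs, exits) {
    // Placeholder body; real logic for creating the stack would go here.
    return exits.success({ name: inputs.name });
  }

};

The generated binary and tests then exercise this action from the command line and through the controller route, following the pattern described above.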

If you have any additional ideas, just let me know at darren@pulsipher.org.

DWP


Monday, September 18, 2017

Benefits of Hybrid Cloud

IT organizations are in the middle of a fundamental paradigm shift (business jargon, I know). Application developers and line of business owners are demanding agility in their IT infrastructure. IT is having a hard time keeping up and, as a result, is losing these customers to the public cloud. CIOs are in the process of trying to change, but they are being driven by the technology fads of the day instead of coming up with a solid strategy moving forward. A hybrid cloud architecture is a solid strategy that satisfies the needs of demanding application developers and lines of business while keeping IT's core tenets of efficiency, security, and reliability. Hybrid clouds give application developers what they want while still adhering to these tenets through agility, flexibility, predictive performance, security and compliance, and efficiency.

Agility

Application developers are working in a highly competitive economy. They need to move fast and change quickly to outmaneuver their competition. To do this, they need data center infrastructure and services instantly. They cannot wait to purchase machines and stand up new servers, network, and storage. They need it now. And as quickly as they need infrastructure, they abandon it when the competitive landscape demands a pivot in product direction. This leads many development teams to move to the public cloud.

There, they can spin infrastructure up and down in a matter of minutes, which gives them the agility they are looking for. But there is a high cost to blindly using public clouds. One of the largest problems with development teams' migration to the public cloud is that they don't plan for connecting data and services from their current data center into their public cloud infrastructure. Typically, development teams develop in isolation until test and production deployments. Many key elements of production software are left to the last minute, when applications are ported to production environments. In some cases production environments require data from legacy infrastructure, compliance and security processes, and services from on-prem applications. When these critical elements are integrated in the late stages of deploying applications into production, they can cause delays, and the benefits of agility are more than wiped out.

By establishing a hybrid model, many of the integration points are exposed to the development team early. Connectivity to legacy or on-prem data and services is handled securely and within compliance standards and processes. Hybrid cloud tools like Cloud Management Platforms (CMPs), Platform as a Service (PaaS), and automation frameworks reduce the manual steps and increase repeatability, resulting in faster delivery of usable products. Without these hybrid cloud tools and processes, deploying applications that span the traditional data center and the public cloud becomes unmanageable, unyielding, and prone to cyber-attacks.

Flexibility

Public clouds give you the ability to stand up infrastructure with the click of a button. This gives developers an "easy button" to deploy needed infrastructure and services. Many cloud service providers are looking for ways to lock developers into their services and infrastructure. One strategy to prevent vendor lock-in is to deploy a Cloud Management Platform (CMP) portal. CMPs are essential in developing a hybrid cloud architecture. They give developers the ability to ask for services and infrastructure without necessarily knowing which cloud is running their applications. Why is this beneficial? Flexibility.

First is application portability. Developers will not become tied to one cloud's way of doing things (public or private). This means that they will write code that can easily be moved between clouds. Good developers will do that automatically, but when working in a specific cloud's infrastructure, software engineers tend to chase the new shiny objects that cloud service providers are so good at putting in front of them.

Second is operational flexibility. Any successful CIO will build flexibility into their IT organization to meet the ever-changing needs of their customers. You want to make sure that you have the same flexibility when you are deploying a cloud strategy. You need the ability to move workloads between different cloud offerings, both private and public, depending on the current environment. The goal is to give your customers what they want in a secure, cost-conscious, and reliable manner. That may mean moving workloads from one public cloud vendor to another based on price, moving from private to public clouds during a data center upgrade, or responding to a cyber-attack by quarantining affected infrastructure and spinning up new infrastructure in a hardened cloud.

The key here is to abstract away each cloud's services and infrastructure stickiness so you can freely move workloads, data, and applications between clouds based on business drivers. If you don't put a hybrid cloud solution in place, you end up making business decisions based on the stickiness of a cloud solution instead of the core values of your business.

Predictive Performance

One of the biggest problems with the public cloud is something called the noisy neighbor. Public clouds do a great job of utilizing all of the CPU, memory, network, and storage in their infrastructure. This is where they make most of their money. Their data centers run at incredible utilization numbers. They accomplish this by over-provisioning resources: most applications and services sit idle much of the time, so providers can fit more applications and services on one box than most IT organizations are comfortable with. But this comes at a price.

When using a public cloud, many times you do not know what or who else is running on the same machine, storage array, or network as you. For some workloads this is not a problem: your application or service handles a request and then waits for the next one. But if you have an application that needs more predictable performance, public clouds can give such unpredictable results that they become unusable.

One example of this is automated build systems. Most build engineers know that some application builds can "hang" indefinitely due to coding errors, so they often put timeouts on builds to catch bad builds that never finish. When there is a noisy neighbor in your public cloud, your build times can vary so much that these timeouts become useless. Build engineers will tell you that a build that is consistently one hour is far more desirable than a build that ranges from 15 minutes to 2 hours.

A hybrid cloud strategy gives you the ability to put performance-sensitive workloads and applications on private clouds, and other workloads and applications on public cloud infrastructure. Many hybrid tools let you characterize workloads with Quality of Service (QoS) requirements, which aids in the automatic placement of workloads on the appropriate cloud infrastructure.

Security and compliance

Cyber-attacks are up, and government and security agencies have increased regulations to help combat these malicious organizations. Protecting data and infrastructure has become the leading concern in most IT organizations. One strategy for protecting data is to restrict it to specific infrastructure; some are suggesting that the public cloud, or any virtualized infrastructure, should not be used for certain types of data. As the regulations change, IT organizations need a strategy that gives them the flexibility to move data and workloads to infrastructure with different levels of security.

Having a hybrid cloud strategy can help IT organizations with security and compliance in several ways: the ability to move workloads between clouds, to deploy and manage security policies and procedures across multiple clouds, and to audit and monitor workloads.

Moving workloads across clouds. There are times when the ability to move workloads and data from one cloud to another is critical to recovering from a cyber-attack. Having the flexibility to move workloads from infected infrastructure to a different cloud or to sanitized infrastructure is something a hybrid cloud architecture can handle by integrating a Cloud Management Platform (CMP) and an automation framework.

Deploying and Managing Policies and Procedures. Hybrid cloud tools give systems operators the ability to enforce security and compliance policies across on-prem traditional infrastructure as well as infrastructure running in public clouds. These tools provide a "single pane of glass" interface to help manage these diverse systems and infrastructure. They also give security operators the ability to specialize policies based on the physical location of cloud resources, both public and private.

Auditing and monitoring. One of the key aspects of security and compliance is monitoring what is going on in your infrastructure. There are many great tools in this space, and making sure that your security monitoring tools cover both your public and private cloud assets is key. If you are only watching your private cloud infrastructure, you are exposed to malicious attacks coming through your public cloud and potentially infecting your private cloud or legacy infrastructure assets.

Hybrid cloud tools give you control over all of your infrastructure and workloads regardless of their location (private or public clouds). Take advantage of these tools when deploying your hybrid cloud strategy.

Efficiency

Public clouds do an incredible job of driving efficiency in their infrastructure. Their goal is to run all of their machines at the highest utilization possible, which can sometimes be diametrically opposed to predictable performance. At the same time, if you want a process or workload to run in the same time every time you run it, you have to reserve a machine just for that workload, which drives your utilization numbers very low, decreasing your efficiency and increasing your cost.

Another problem we see is abandoned workloads and VMs. These workloads sit idle, not doing anything. They use some storage resources, but no CPU or network. In the public cloud you are charged for this abandoned infrastructure, and that can come with a large price tag. One of my customers found that over 70% of their VMs in the public cloud were abandoned; that's right, roughly 70% of what they were paying for was wasted. The problem is not limited to the public cloud, either: private clouds have a similar problem, where abandoned infrastructure wastes storage and valuable VM slots. So how do you fix this? Architecting a good hybrid cloud strategy can help decrease abandoned infrastructure in a couple of ways: visibility and dynamic provisioning.

Visibility into all of your resources, across both public and private clouds, is key to controlling costs. Cloud Management Platforms (CMPs) give you a "single pane of glass" across all of your clouds and let you control costs by identifying abandoned resources and dispositioning them (kill them or back them up). This saves real money in public clouds by getting rid of old infrastructure that is not being used and is just costing you money. For the private cloud, it frees up resources that can be utilized for other workloads, in turn driving up your efficiency.

Dynamic provisioning is another area where hybrid clouds give you an advantage. Many CMPs have cost modeling built into their tools, which means I can provision infrastructure based on cost. Public clouds are starting to compete on price and use the concept of spot instances, which give consumers lower prices for infrastructure for a period of time. A cloud broker (part of a CMP) basically shops around for the lowest price while still maintaining the QoS for the specific workload. This decreases the overall cost of running the workload and also gives you visibility into your actual cost for using a particular public or private cloud.

Call to Action

Hybrid cloud architectures are giving CIOs the ability to get in front of the demands of their customers, but there is still some heavy lifting that has to happen. Building a hybrid cloud strategy involves organizational, behavioral, and technical change that cannot happen overnight. Developing a strong architectural vision and roadmap is key to rolling out a hybrid cloud strategy that takes advantage of hybrid clouds' strengths and prevents the thrash of the tech industry's "shiny object" of the month.

DWP

Wednesday, August 30, 2017

Managing Multiple environments in a Hybrid Cloud

With the shift from traditional client-server application software to cloud-aware applications, many software engineers have found themselves dusting off old system administration books from college. With multiple services running on multiple machines or containers, software engineers have to manage their applications across more and more complex environments. As I have been talking to some of my customers, I have found common pain points in managing these complex applications:
  • Consistency between environments
  • Single point of failure services
  • Differing environment requirements  (Not all environments are created equal)
  • Managing multiple environments across multiple clouds
All of these factors, and many more, can lead to wasted time, applications being released into production before their time, or, worst of all, unhappy software engineers.

DevOps to the rescue?

Wouldn't it be nice if the software engineer just worried about their application and its code, instead of all of the environments that it has to run on? In some places that is exactly what happens. Developers develop on their local laptops or in a development cloud, check in their code, and it moves to production. DevOps cleans up any problems with applications caused by single-instance bottlenecked services or out-of-sync versions of centralized services, or adds load-balancing services to the front end or back end of the application. The app developers have no clue what mess they have caused with their code changes or with a new version of a service they are using. Somehow we need to make sure that the application developer is still connected to the application architecture but disconnected from the complexity of managing multiple environments.

Single Definition Multiple Environments

Working on my Local machine

One approach that I have been looking at is the ability to define my application as a set of service templates. In this simple example I have a Node.js application that uses Redis and MongoDB. If I use a YAML format, it might look something like this:

MyApp:
  Services:
    web: NodeJS
      ports: 80
      links: mqueue, database
    mqueue: Redis
      ports: 6789
    database: MongoDB
      ports: 25678, 31502


With this definition I would like to deploy my application on my local box, using VirtualBox. I put this YAML file in the application's home directory. This should be very familiar to those of you who have used docker-compose. Now I should be able to launch my application on my local machine using a command similar to docker-compose:
$ c3 up
After a couple of minutes my multi-service application is running on my local laptop.
I can change the application code and even make changes to the services that I need to work with.

Working in a Development Cloud

Now that I have it running on my laptop, I want to make sure that I can run it in a cloud. Most organizations work with development clouds. Typically, development clouds are not as big as production and test clouds, but they give the developer a good place to try out new code and debug problems found in production and test environments. Ideally, the developer should use the same application definition and just point to another environment to launch the application:
$ c3 up --env=Dev
This launches the same application in the development environment, which could be an OpenStack-, VMware-, or Kubernetes-based SDI solution. The developer really does not care how the infrastructure gets provisioned, just that it is done quickly and reliably. On quick inspection we see a slight difference in the services running in the development cloud: there is another instance of the NodeJS service. This comes from the service definition of the NodeJS service, which is defined to have multiple instances in the development cloud and only one instance in the local environment.

NodeJS.yml - Service Definition
NodeJS:
  Local:
    web:
      image: node-3.0.2
      port: 1337
  Dev:
    web:
      image: node-3.0.2
      port: 1337
    worker:
      image: node-3.0.2
      port: 1338
      cardinality: 3
  Test: …
  Prod: …
This definition is produced by the service and stack developer, not the application developer. So the service can be reused by several developers and can be defined for different environments (Local, Dev, Test, & Production). This ensures that services are defined for the different requirements of each environment. For example, the Production NodeJS service might have an NGINX load balancer on the front end for serving up NodeJS web services for each logged-in user. The key is that this is defined for the Service that is reused, which increases reusability and quality at the same time.

Working in the Test Cloud

Now that I have tried my application in the development cloud, it is time to run it through a series of tests before it gets pushed to production. This is just as easy for the developer as working in the development cloud:
$ c3 up --env=Test
$ c3 run --env=Test --exec runTestSuites
We launch the environment and then run the test suites in that environment. When the environment launches, you can see additional instances of the same services we saw in the development cloud. Additionally, there is a new service running in the environment: the Perf Monitor service, which monitors the performance of the services while the tests are running. Where did the definition of this service come from? It came from the application stack definition. This definition, just like the service definition, can specify a different service landscape for each environment. But the software developer still sees them as the same; that is to say, code should not change based on the environment that is running the application. This decouples the application from the environment and frees up the software developer to focus on code and not environments.
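
As a rough sketch only (the key and service names below are illustrative, not C3's actual stack definition format), the application stack definition could add an environment-specific monitoring service in the same style as the service definition shown earlier:

# Hypothetical sketch; key and service names are illustrative only.
MyApp:
  Test:
    perfmon:
      image: perf-monitor
      links: web, mqueue, database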

What about Production 

The ultimate goal, of course, is to get the application into production. Some organizations, the smart ones, don't let developers publish directly into production without some gates to pass through. So instead of just calling "c3 up --env=Prod" we have a publish mechanism that versions the application, its configurations, and supporting services:
$ c3 publish --version=1.0.2
In this case the application is published and tagged with version 1.0.2. Once the application is published, it will launch the environment if it is not currently running; if it is running, it will "upgrade the service" to the new version. The upgrade process will be covered in another blog. Needless to say, it allows for rolling updates with minimal or no downtime. As you can see, additional services have been added, and some from the test environment have been taken away.

Happy "Coder" Happy Company

The software engineer in this story focuses on writing software, not on the environment. Services are reused from application to application. Environment requirements are met with service and application definitions. Stack and service developers focus on writing services for reuse instead of fixing application developers' code. Now your company can run fast and deploy quality products into production.

Check out more detailed architecture and use cases on github at https://github.com/CAADE/C3/wiki.

You can see the video of this blog here.


DWP