Wednesday, November 15, 2017

Sails 1.0 added the concept of actions to the architecture. This gave me the idea to add actions to the bouquet generator suite. An action is basically a function that is called when a route in a controller is accessed. Each action lives in its own file, which makes life very easy for generators.

Bouquet Actions

I recently (Nov 2017) extended bouquet to handle the creation of actions for controllers. The concept behind this is to auto-generate tests, a command line interface, and controllers for the actions created.

Pattern


  1. An action is created for a specific controller, in the api/controllers/<controller name> directory.
  2. A corresponding binary is created to access the action: bin/<projectName>-<controller>-<action>.
  3. Next, a test for the binary is created in the test/bin directory: <controller>-<action>.test.js.
  4. Finally, a set of test cases is created for the action via the controller: test/integration/<controller>-<action>.test.js.

Here is a breakdown of what gets created.


  • api/controllers/<controller>/<action>.js
  • bin/<project name>-<controller name>-<action name>     
  • test/bin/<controller-name>-<action-name>.test.js
  • test/integration/<controller-name>-<action-name>.test.js
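
For reference, actions in Sails 1.0 follow the actions2 format, so the generated api/controllers/<controller>/<action>.js looks roughly like the sketch below. This is a minimal sketch; the exact template that bouquet generates may differ.

module.exports = {

  friendlyName: 'Create',

  description: 'A generated action for this controller.',

  inputs: {
    name: {
      type: 'string',
      required: true
    }
  },

  exits: {
    success: {
      description: 'The action completed successfully.'
    }
  },

  fn: async function (inputs, exits) {
    // The real business logic for the action goes here.
    return exits.success({ name: inputs.name });
  }

};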

Usage

$ sails generate bouquet-Action <controller> <action>
In this example I am generating an action named create for the stack controller.
$ sails generate bouquet-Action stack create 
will generate:

  • api/controllers/stack/create.js
  • bin/bouquet-stack-create
  • test/bin/stack-create.test.js
  • test/integration/stack-create.test.js
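
To give an idea of what the generated tests enable, here is a minimal sketch of what test/integration/stack-create.test.js could look like, assuming mocha and supertest and that the action is exposed on the /stack/create route. The actual generated test will likely differ.

const sails = require('sails');
const supertest = require('supertest');

// Lift the Sails app once before the test suite and lower it afterwards.
before(function (done) {
  this.timeout(30000);
  sails.lift({ hooks: { grunt: false }, log: { level: 'warn' } }, done);
});

after(function (done) {
  sails.lower(done);
});

describe('stack/create', function () {
  it('creates a stack', function (done) {
    supertest(sails.hooks.http.app)
      .post('/stack/create')
      .send({ name: 'test-stack' })
      .expect(200, done);
  });
});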

If you have any additional ideas, just let me know at darren@pulsipher.org.

DWP


Monday, September 18, 2017

Benefits of Hybrid Cloud

IT organizations are in the middle of a fundamental paradigm shift (business jargon). Application developers and line of business owners are demanding agility in their IT infrastructure. IT is having a hard time keeping up and, as a result, is losing these customers to the public cloud. CIOs are in the process of trying to change, but they are being driven by the technology fads of the day instead of coming up with a solid strategy moving forward. Hybrid cloud architecture is a solid strategy that satisfies the needs of demanding application developers and lines of business while keeping IT's core tenets of efficiency, security, and reliability. Hybrid clouds give application developers what they want while still adhering to these tenets through flexibility, agility, predictive performance, efficiency, and security and compliance.

Agility

Application developers are working in a highly competitive economy. They need to move fast and change quickly to outmaneuver their competition. In order to do this, they need data center infrastructure and services instantly. They cannot wait to purchase machines and stand up new servers, networks, and storage. They need it now. And as quickly as they need infrastructure, they abandon it when the competitive landscape demands a pivot in product direction. This leads many development teams to move to the public cloud.

There they can quickly spin up and spin down infrastructure in a matter of minutes. This gives them the agility they are looking for. But there is a high cost to blindly using public clouds. One of the largest problems with development teams' migration to the public cloud is that they don't plan for the ability to connect data and services from their current data center into their public cloud infrastructure. Typically, development teams develop in isolation until test and production deployments. Many key elements of production software are left to the last minute, when applications are ported to production environments. In some cases production environments require data from legacy infrastructure, compliance and security processes, and services from on-prem applications. When these critical elements are integrated in the late stages of deploying applications into production, they can cause delays, and the benefits of agility are more than wiped out.

By establishing a hybrid model, many of the integration points are exposed to the development team early. The connectivity to legacy or on-prem data and services is handled securely and within compliance standards and processes. Hybrid cloud tools like Cloud Management Platforms (CMP), Platform as a Service (PaaS), and automation frameworks reduce the manual steps and increase repeatability, resulting in faster deployment of usable products. Without these hybrid cloud tools and processes, deploying applications that span the traditional data center and the public cloud becomes unmanageable, unyielding, and prone to cyber-attacks.

Flexibility

Public clouds give you the ability to stand up infrastructure with the click of a button. This gives developers an "easy button" to deploy needed infrastructure and services. Many cloud service providers are looking for ways to lock developers into their services and infrastructure. One strategy to prevent vendor lock-in is to deploy a Cloud Management Platform (CMP) portal. These CMPs are essential in developing a hybrid cloud architecture. They give developers the ability to ask for services and infrastructure without necessarily knowing which cloud is running their applications. Why is this beneficial? Flexibility.

First is application portability. Developers will not become tied to one cloud's way of doing things (public or private). This means that they will write code that can easily be moved between clouds. Good developers will do that automatically, but when working in a specific cloud's infrastructure, software engineers tend to follow the new shiny objects that cloud service providers are so good at putting in front of them.

Second is operational flexibility. Any successful CIO will build flexibility into their IT organization to meet the ever-changing needs of their customers. You want to make sure that you have the same flexibility when you are deploying a cloud strategy. You need the ability to move workloads between different cloud offerings, both private and public, depending on the current environment. The goal is to give your customers what they want in a secure, cost-conscious, and reliable manner. That may mean moving workloads from one public cloud vendor to another based on price; moving from private to public clouds during a data center upgrade; or responding to a cyber-attack by quarantining affected infrastructure and spinning up new infrastructure in a hardened cloud.

The key here is to abstract away the stickiness of each cloud's services and infrastructure so you can freely move workloads, data, and applications between clouds based on business drivers. If you don't put a hybrid cloud solution in place, you end up making business decisions based on the stickiness of a cloud solution instead of the core values of your business.

Predictive Performance

One of the biggest problems with the public cloud is something called the noisy neighbor. Public cloud providers do a great job of utilizing all of the CPU, memory, network, and storage in their infrastructure. This is where they make most of their money. Their data centers run at incredible utilization numbers. They accomplish this by over-provisioning resources. Most applications and services sit idle much of the time, so providers can actually fit more applications and services on one box than most IT organizations are comfortable with. But this comes at a price.

When using a public cloud, many times you do not know what or who else is running on the same machine, storage array, or network as you. For some workloads this is not a problem: your application or service handles a request and then waits for the next one. But if you have an application that needs more predictable performance, public clouds can give such unpredictable results that they become unusable.

One example of this is automated build systems. Most build engineers know that some application builds can "hang" indefinitely due to coding errors. Many times they put "timeouts" on the builds to catch bad builds that don't finish. When there is a noisy neighbor in your public cloud, your build times can vary so much that these timeouts become useless. Build engineers will tell you that a 1-hour build that is consistently 1 hour is far more desirable than a build that ranges from 15 minutes to 2 hours.

A hybrid cloud strategy gives you the ability to put "predictability sensitive" workloads and applications on private clouds and other workloads and applications on public cloud infrastructure. Many hybrid tools give you the ability to characterize workloads with Quality of Service (QoS) requirements. This aids in the automatic placement of workloads on different cloud infrastructure.

Security and compliance

Cyber-attacks are up. Government and security agencies have increased regulations to help combat these malicious organizations. Protecting data and infrastructure has become the leading concern in most IT organizations. One strategy for protecting data is to restrict it to specific infrastructure. Some are suggesting that public cloud or any virtualized infrastructure should not be used for specific types of data. As the regulations change, IT organizations need a strategy that gives them the flexibility to move data and workloads to different levels of secure infrastructure.

Having a hybrid cloud strategy can help IT organizations with security and compliance in several ways: the ability to move workloads between clouds, to deploy and manage security policies and procedures across multiple clouds, and to audit and monitor workloads.

Moving workloads across clouds. There are times when having the ability to move workloads and data from one cloud to another is critical to recovering from a cyber-attack. Having the flexibility to move workloads from an infected infrastructure to a different cloud or sanitized infrastructure is something a hybrid cloud architecture can handle by integrating a Cloud Management Platform (CMP) and an automation framework.

Deploying and Managing Policies and Procedures. Hybrid cloud tools give systems operators the ability to enforce security and compliance policies across on-prem traditional infrastructure as well as infrastructure running in the public clouds. These tools give a "single pane of glass" interface to help manage these diverse systems and infrastructure. They also give security operators the ability to specialize policies based on the physical location of the cloud resources, both public and private.

Auditing and monitoring. One of the key aspects of security and compliance is monitoring what is going on in your infrastructure. There are many great tools in this space, and making sure that your security monitoring tools cover both your public and private cloud assets is key. If you are only watching your private cloud infrastructure, you are exposed to malicious attacks coming through your public cloud and potentially infecting your private cloud or legacy infrastructure assets.

Hybrid cloud tools give you control over all of your infrastructure and workloads regardless of their location (private or public clouds). Take advantage of these tools when deploying your hybrid cloud strategy.

Efficiency

Public cloud providers do an incredible job driving efficiency in their infrastructure. Their goal is to run all of their machines at the highest utilization possible. This can sometimes be diametrically opposed to predictive performance: if you want a process or workload to run in the same time every time you run it, you have to reserve a machine just for that workload, which drives your utilization very low, decreasing your efficiency and increasing your cost.

Another problem that we see is abandoned workloads and VMs. These workloads sit idle, not doing anything. They use some storage resources, but no CPU or network. In the public cloud you are charged for this abandoned infrastructure, and that can come with a large price tag. One of my customers found that over 70% of their VMs in the public cloud were abandoned. That's right, roughly 70% of what they were paying for was waste. But the problem is not relegated only to the public cloud; private clouds have a similar problem. Abandoned infrastructure can waste storage and valuable VM slots in the infrastructure. So how do you fix this? Architecting a good hybrid cloud strategy can help decrease abandoned infrastructure in a couple of ways: visibility and dynamic provisioning.

Visibility into all of your resources, in both public and private clouds, is key to controlling costs. Cloud Management Platforms (CMPs) give you a "single pane of glass" across all of your clouds and let you control costs by identifying abandoned resources and dispositioning them (kill them or back them up). This saves real money in the public clouds by getting rid of old infrastructure that is not being used and is just costing you money. For the private cloud it frees up resources that can be used for other workloads, in turn driving up your efficiency.

Dynamic provisioning is another advantage that hybrid clouds provide. Many CMPs have cost modeling built into their tools, which means that I can provision infrastructure based on cost. Public clouds are starting to compete on price and use the concept of spot instances, which give consumers lower prices for infrastructure for a period of time. A cloud broker (part of a CMP) basically shops around for the lowest price while still maintaining the QoS for the specific workload. This decreases the overall cost of running the workload. It also gives you visibility into your actual cost for using a particular public or private cloud.

Call to Action

Hybrid cloud architectures are giving CIOs the ability to get in front of the demands of their customers, but there is still some heavy lifting that has to happen. Building a hybrid cloud strategy includes organizational, behavioral, and technical change that cannot happen overnight. Developing a strong architectural vision and roadmap is key to rolling out a hybrid cloud strategy that can take advantage of hybrid clouds' strengths and prevent the thrash of the tech industry's "shiny object" of the month.

DWP

Wednesday, August 30, 2017

Managing Multiple environments in a Hybrid Cloud

With the shift from traditional client-server application software to Cloud Aware Applications, many software engineers have found themselves dusting off old system administration books from college. With multiple services running on multiple machines or containers, software engineers have to be able to manage their applications across more and more complex environments. As I have been talking to some of my customers, I have found common pain points in managing these complex applications:
  • Consistency between environments
  • Single point of failure services
  • Differing environment requirements  (Not all environments are created equal)
  • Managing multiple environments across multiple clouds
All of these factors and many more can lead to time wasted, applications being released into production before their time, or, worst of all, unhappy software engineers.

DevOps to the rescue?

Wouldn't it be nice if software engineers just worried about their application and its code, instead of all of the environments that it has to run on? In some places that is exactly what happens. Developers develop on their local laptops or in a development cloud, then check in their code, and it moves to production. DevOps cleans up any problems with applications using single-instance bottlenecked services, out-of-sync versions of centralized services, or adding load balancing services to the front end or back end of the application. The app developers have no clue what mess they have caused with their code changes or a new version of a service that they are using. Somehow we need to make sure that the application developer is still connected to the application architecture but disconnected from the complexity of managing multiple environments.

Single Definition Multiple Environments

Working on my Local machine

One approach that I have been looking at is having the ability to define my application as a set of service templates. In this simple example I have a simple NodeJS application that uses Redis and MongoDB. If I use a yaml format, it might look something like this:

MyApp:
  Services:
    web: NodeJS
      ports: 80
      links: mqueue, database
    mqueue: Redis
      ports: 6789
    database: MongoDB
      ports: 25678, 31502


So with this definition I would like to deploy my application on my local box, using VirtualBox. I put this yaml file in the home directory of the application. This should be very familiar to those of you that have used docker-compose. Now I should be able to launch my application on my local machine using a command similar to docker-compose:
$ c3 up
After a couple of minutes my multi-service application is running on my local laptop.
I can change the application code and even make changes to the services that I need to work with.
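
To make this concrete, here is a hypothetical sketch of the NodeJS web service reaching the linked services by the names used in the definition above (mqueue on 6789, database on 25678). The application code itself is not part of the definition and could look quite different.

// index.js - hypothetical web service for MyApp
const http = require('http');
const redis = require('redis');
const { MongoClient } = require('mongodb');

// The service names from the application definition (mqueue, database)
// resolve to whatever infrastructure the target environment provisions.
const queue = redis.createClient({ url: 'redis://mqueue:6789' });
const mongo = new MongoClient('mongodb://database:25678/myapp');

async function main() {
  await queue.connect();
  await mongo.connect();

  http.createServer((req, res) => {
    // Real request handling would use the queue and the database here.
    res.end('MyApp is up\n');
  }).listen(80);
}

main();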

Working in a Development Cloud

Now that I have it running on my laptop I want to make sure that I can run it in a cloud. Most organizations work with development clouds. Typically development clouds are not as big as production and test clouds but give the developer a good place to try out new code and debug problems found in production and test environments. Ideally the developer should use the same application definition and just point to another environment to launch the application.
$ c3 up --env=Dev
This launches the same application in the development environment, which could be an OpenStack, VMware, or Kubernetes based SDI solution. The developer really does not care how the infrastructure gets provisioned, just that it is done quickly and reliably. On quick inspection we see a slight difference in the services that are running in the development cloud: there is another instance of the NodeJS service running. This comes from the service definition of the NodeJS service. The NodeJS service is defined to have multiple instances in the development cloud and only one instance in the local environment.

NodeJS.yml - Service Definition
NodeJS:
  Local:
    web:
      image: node-3.0.2
      port: 1337
  Dev:
    web:
      image: node-3.0.2
      port: 1337
    worker:
      image: node-3.0.2
      port: 1338
      cardinality: 3
  Test: …
  Prod: …
This definition is produced by the service and stack developer, not the application developer, so the service can be reused by several developers and can be defined for different environments (Local, Dev, Test, and Production). This ensures that services are defined for the different requirements of the environments. For example, the production NodeJS service might have an NGINX load balancer on the front end of it for serving up NodeJS web services for each user logged in. The key is that this is defined for the service that is reused. This increases reusability and quality at the same time.

Working in the Test Cloud

Now that I have tried my application in the development cloud, it is time to run it through a series of tests before it gets pushed to production. This is just as easy for the developer as working in the development cloud.
$ c3 up --env=Test
$ c3 run --env=Test --exec runTestSuites
We launch the environment and then run the test suites in that environment. When the environment launches you can see additional instances of the same services we saw in the development cloud. Additionally, there is a new service running in the environment: the Perf Monitor service, which monitors the performance of the services while the tests are running. Where did the definition of this service come from? It came from the application stack definition. This definition, just like the service definition, can specify a different service landscape for each environment. But the software developer still sees them as the same. That is to say, code should not change based on the environment that is running the application. This decouples the application from the environment and frees up the software developer to focus on code and not environments.

What about Production 

The ultimate goal, of course, is to get the application into production. Some organizations, the smart ones, don't let developers publish directly into production without some gates to pass through. So instead of just calling "c3 up --env=Prod" we have a publish mechanism that versions the application, its configuration, and supporting services.
$ c3 publish --version=1.0.2
In this case the application is published and tagged with version 1.0.2. Once the application is published, it will launch the environment if it is not currently running. If it is running, then it will "upgrade the service" to the new version. The upgrade process will be covered in another blog post. Needless to say, it allows for rolling updates with minimal or no downtime. As you can see, additional services have been added and some taken away compared to the test environment.

Happy "Coder" Happy Company

The software engineer in this story focuses on writing software, not on the environment. Services are being reused from application to application. Environment requirements are being met with service and application definitions. Stack and service developers are focusing on writing services for reuse instead of fixing application developers' code. Now your company can run fast and deploy quality products into production.

Check out more detailed architecture and use cases on github at https://github.com/CAADE/C3/wiki.

You can see the video of this blog here


DWP

Tuesday, August 29, 2017

Building Microservices with SailsJS and NodeJS

I have been developing applications with uServices for some time. Each time I wrote a new application I could not figure out where to put the uService definitions; they tended to be spread all over my source tree. Since I was writing my application using SailsJS, I wanted to follow the convention over configuration paradigm that they espouse in Sails.

Here are some of the things that I tried.


  • api/workers directory - Using the sails_hook_publisher & sails_hook_subscriber
  • api/jobs directory - similar to the workers pattern but using grunt to run processes.
  • deploy directory - Using the micro npm module.

Workers


This method uses the sails_hook_publisher & sails_hook_subscriber plugins to give each instance the ability to subscribe to jobs that are requested from another service. It assumes that you are using Redis as the message queue, and it does not handle the management of starting/stopping or replicating services. It is a good solution, but it had the overhead of a full Sails application with each worker. It also tied the logical model to the deployment model too tightly for me.

Jobs


Very similar to the publish/subscribe worker paradigm, but I wanted a lightweight mechanism for spinning up small services without all of the overhead of the Sails stack. So I basically just fired up small NodeJS scripts that I stored in the jobs directory. The problems with this were the lack of flexibility of the micro-service architecture and the coupling with the application code.

Deploy


Using the micro npm package, I created simple micro services that each handle an HTTP request and perform a specific task for the application. Creating the micro services was actually very simple thanks to the micro package. But deploying multiple micro services can be hard to manage, so I looked to docker and containers to help with this.
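
As a quick illustration, a micro-based service is just a module that exports a request handler. Here is a minimal sketch of one of these micro services; the payload and behavior are made up for the example.

// index.js - a tiny microservice built with the micro package
const { json, send } = require('micro');

// micro invokes this exported handler for every incoming HTTP request.
module.exports = async (req, res) => {
  const body = req.method === 'POST' ? await json(req) : {};
  // A real service would perform its one specific task here.
  send(res, 200, { ok: true, received: body });
};

With the package.json shown below, "npm start" runs this handler under micro's built-in HTTP server.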

I had to come up with a strategy for how to define/code my microservices and how they would be managed and deployed. I had to remember the key software engineering principles of cohesion, decoupling, and reuse in my architecture. So the first thing I worked on was decoupling the microservice deployment from the microservice source code itself.

This gave me the flexibility to change my deployment architecture independently of the source code itself. To do this I defined my deployment architecture using docker, with both the Dockerfile and docker-compose file formats. To define a microservice I had to do the following:


  • create a package.json file with all of the packages needed to run my microservice
  • create a Dockerfile to build the image of my microservice
  • add the microservice to a docker-compose file for the application.

package.json


The package.json file contains the npm packages that my microservice depends on, as well as any scripts that are needed to manage my microservice, including a build and deploy script. Note that when I build my microservice image I tag it for a local registry service using "localhost:5000/appName/userviceName", where appName is the name of the application and userviceName is the name of the microservice that I am creating. This is just an example of a naming convention that I like to use. If I were creating a microservice that I was going to use over and over again, I would use a different name. The deploy target pushes the image into the local registry so I can use the image in the docker swarm that I am running.

{
  "main": "index.js",
  "scripts": {
    "start": "micro",
    "build": "docker build . -t localhost:5000/appName/userviceName",
    "deploy": "docker push localhost:5000/appName/userviceName"
  },
  "dependencies": {
    "micro": "latest",
    "node-fetch": "latest"
  }
}

Dockerfile

The Dockerfile in this case is very simple. I am writing all of my micro-services in node, so I start with the base node image. Next I copy the package.json file into an application directory and call "npm install", which installs all of the packages required by my micro-service into the image. Then I copy the rest of the source code into the application directory. The last statement launches the microservice by calling "npm start".

FROM  node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD npm start

docker-compose.yaml

The docker-compose.yaml file contains the services and their deployment configurations for the application. For my application I have a simple web server that is the main microservice for my application; it is a SailsJS application. I try to always name my web interface micro-service "web" so it is easy for me to find later. Again, in the file below I use appName as the name of the application. You can also see that the micro-service definition runs 5 replicas and the image is the same one as defined in the Dockerfile above.

version: '3'
services:
  mongo:
    image: mongo
    expose:
      - 27017
    ports:
      - "27017:27017"
  appName:
    image: localhost:5000/appName/web
    expose:
      - 1337
    ports:
      - "1337:1337"
  userviceName:
    image: localhost:5000/appName/userviceName
    deploy:
      mode: replicated
      replicas: 5

Bouquet Generator implementation

I have created a sails generator to generate the directory hierarchy as well as simple micro-services that you can use as a starting point for your own micro-service application. Check out the documentation at https://github.com/madajaju/bouquet/tree/master/sails-generate-bouquet-uservice. Or you can install the node module directly:
$ npm install sails-generate-bouquet-uservice --save


I hope this helps you with your journey to building your own sails application using micro-services.
For more information on the Bouquet sails generators check out my previous blog post at https://darrenpulsipher.blogspot.com/2017/05/resurrecting-bouquet_3.html.

DWP

Monday, August 21, 2017

Argument for Hybrid Cloud with Common Cloud Core

Current Cloud environment

Over the last five years there has been a fundamental shift in the IT environment. The continual growth of the public cloud and the emergence of private cloud options have left many CIOs and IT departments playing catch-up. In today's competitive digital economy, many development teams need to move faster than their IT departments can deliver, which has led many enterprise developers to turn to public clouds like AWS, GCE, and Azure to spin up new infrastructure resources on demand - no more waiting around for several levels of technical and business approvals, physical space in the data center, and vendor supply problems. Now, in a matter of minutes, a development team can have all of the infrastructure they need for their new project. While the consumerization of IT through the public cloud has helped developers move fast rather than wait for their IT departments to give them the resources they need, this "bring your own server" trend has resulted in the emergence of shadow IT - infrastructure not supported by or known to the IT department. Shadow IT then raises its own concerns for the organization - namely security, cost management, data silos, and compliance.


Before the public cloud, IT managers could easily walk around the cubes and count the number of local servers running under employees' desks. With physical machines no longer visible to the IT department, identifying teams and their projects' infrastructure is challenging. Many public clouds have given organizations the ability to consolidate accounting from all of the accounts for specific domains, but visibility into what is running and who is working on the infrastructure is still somewhat of a "snipe hunt". Many times these rogue projects become visible only when they are productized and need to be put into a company's on-premises infrastructure. Security, privacy, and regulatory policies can make productization of these projects nearly impossible, especially if developers have tightly coupled their applications to cloud infrastructure.

Forward-thinking IT departments are doing their best to capture shadow IT by working with public clouds and ISVs to create company portals to the cloud. Putting a pass-thru portal in place is a good start to capturing projects using infrastructure, but many organizations find that just a portal leaves development teams wanting more. Over the last couple of years, I have been working with many of these organizations to identify use cases, architectures, and technologies to help develop these augmented portals, which we call the "Common Cloud Core" (C3). Typically, three major technologies are integrated together to build these C3s: Cloud Management Platforms (CMP), Automation Frameworks, and Platform as a Service (PaaS) frameworks.

Cloud Management Platform (CMP) 

A Cloud Management Platform's primary responsibility is managing multiple heterogeneous clouds, both public and private, giving end users the ability to manage multiple clouds and their infrastructure from one common pane of glass. CMPs are typically opinionated with Cloud Administrators in mind. Although the primary focus of Cloud Management Platform tools is managing multiple clouds, many have added features from the PaaS and Automation Framework worlds, or at minimum have a plugin architecture to support them.

Use Cases Covered

  • Managing Public Clouds
  • Managing Private Clouds
  • Managing Cloud identities
  • Managing Infrastructure across multiple clouds

Automation Frameworks 

An Automation Framework's primary responsibility is to automate the deployment, management, and upgrading of software stacks on infrastructure. Automation Frameworks came out of the DevOps community and are typically focused on repeatable processes. Many of these tools include scripting languages that allow DevOps engineers to repeatably manage and configure software and services. Many DevOps teams are well versed in these tools.

Use Cases Covered

  • Deploy Software on Infrastructure
  • Manage Software on Infrastructure
  • Upgrade Software and Services


Platform as a Service (PAAS)

Platform as a Service is primarily responsible for giving developers a single portal to reuse platforms and deploy them onto infrastructure. PaaS tools are typically highly opinionated with the developer in mind, which can lead to inflexible infrastructure configurations. Many of these tools have a web portal that gives developers the ability to select services and deploy them in the infrastructure.

Use Cases Covered

  • Deploy/Manage Services/Applications
  • Manage Service Catalog
  • Develop new Services/Applications


Convergence creates true Hybrid Clouds on C3

Because no one tool set covers all of the use cases needed to manage clouds, applications, infrastructure, and services, teams spend several "man years" installing, configuring, and integrating these three tool sets together. This has led to the emergence of technologies that integrate these tools, including new product offerings and new features in currently available products.
Many CMP products are including PaaS and Automation Frameworks in their solutions. PaaS tools are now managing multiple clouds. Automation Frameworks are beginning to offer web portals and connectivity to multiple clouds. Many of the tools are moving toward the Unified Hybrid Cloud vision. When looking at which tool(s) to use, it is important to remember the roots of the tool.


Deploying a solution

The Common Cloud Core ecosystem is still fairly new and still requires some heavy integration between the tools. There are some tools that are starting to deliver complete out-of-the-box solutions, but still with their particular vision of the world. Because the ecosystem is nascent, there are many players and choices. Time will tell who will win this space. For now it will be interesting to watch the tools converge and consolidate while the features mature.


DWP

Monday, July 17, 2017

Capitals, Delays and Nuclear Medicine

I had an opportunity to visit two nations' capitals this last week: Washington, DC and Ottawa, Canada. I have been to these cities before, but never in the same week. The bustle of the two cities is very similar, as everywhere you turn there is another government building, memorial, constant construction, and lots of people from all over the world.

Washington DC

My flight from Sacramento to Washington DC went through Houston and was without incident except for the delay from Houston to BWI (Baltimore) - about 30 minutes late. That would not have been a big deal, but I was already getting in at midnight. This caused a cascade of delays that put me at my hotel about 1:30 am. I had about 4 hours of sleep before I had to wake up and make my early morning meeting. The meeting went great and gave me the afternoon free before I had to catch another flight to Ottawa. I looked at getting an earlier flight, but to no avail. I had to wait until 9pm to catch my flight out of DCA (Reagan National).

So I had the afternoon to take a nap or walk around DC. Since I was tired and it was hot outside (95 degrees with 80% humidity), I was prepared to take a nap, but Paige convinced me to go see the new Spiderman movie. I walked over to the nearest movie theater and paid $15 for an afternoon matinee. At least it was in a nice air-conditioned theater. There were only 5 other people in the theater (it was the middle of the day on a Tuesday). The movie was great, but it was eerie with the theater being so quiet.

Ottawa

After my movie I hustled back to the hotel and drove my car to the airport. With plenty of time, I was able to get some work done at the airport. I also watched as my flight was delayed 10 minutes, then 30, then 60, then back to 20. I knew I might get stuck in Washington DC for the night. Luckily I was only delayed 30 minutes and arrived in Ottawa at 11pm.

Every time I go to Ottawa it is an adventure to get through immigration. They are in the process of changing their processes and procedures to make getting through immigration quicker. One of the new changes is putting in self-service kiosks that make it easy to process people. Since these are new, they are still working out the kinks in the system. I ended up being red-flagged for additional screening. Joy.

After waiting 30 minutes to talk to an immigration officer, I was asked a series of questions about why I was visiting, if I had ever been arrested, who I live with, and what I was doing in Toronto in 1994. Wow, they remembered that I worked in Toronto for a year and that Dallin was born there. I quickly finished all of their questions and headed to the hotel. Another late night with little sleep before the next day. Luckily I didn't have a meeting first thing in the morning and had a chance to sleep in.

I had several meetings over the next couple of days but had some time to see parts of Ottawa. One of the things that caught my eye was the Changing of the Guard, which I had the chance to see from one of the buildings along the marching path. I had no idea that Canada had a changing of the guard much like what I have seen in England. It reminded me that the US and Canada both have roots in England. Canada embraced more of their roots, while the American Revolutionaries did everything they could to cast off the oppression of the King. Funny how much we are the same and different at the same time.

Coming Home

After two days of meetings I was anxious to get home. I went to the airport with enough time to get through immigration before I got on my airplane. This is a nice perk of leaving from Ottawa: I go through US immigration before I get on the airplane. Luckily, no incidents. When I got through immigration I heard my flight from Ottawa to New Jersey had been delayed, which meant that I was going to miss my connection to Sacramento. I went to the flight counter and they had already booked me on another flight through Chicago, which actually got me home an hour earlier. I was very happy about that. I caught my flight after feasting on fries with poutine and bacon. I had to have fatty foods for my HIDA scan the next day. That is my excuse and I am sticking with it.

Nuclear Medicine

After arriving home, I looked at my original flight from Jersey and saw that it had been delayed until 3am. Boy, am I glad I got on a different flight. This actually gave me time to irritate my gall bladder a little bit more. I stopped at In-N-Out on my way home and picked up a Double-Double at 11pm. You have got to love that In-N-Out stays open until 1am.

The next morning Paige took me to the doctor to get my HIDA scan done to figure out what is going on with my gall bladder. The whole week I actually felt pretty good, but I was anxious to find out what was really going on with my "gut". A HIDA scan basically injects a radioactive isotope into your bloodstream, then they give you drugs that mimic eating fatty foods, filling up your gall bladder, and then more drugs to force your gall bladder to empty. They take pictures and movies watching the function of the gall bladder.

Good news is that there is nothing wrong with my gall bladder. Bad news is they don't know where the pain in my gut is coming from. That means more tests. We will have to wait and see. In the meantime I am actually feeling pretty good.

DWP