Yesterday I attended the AWS Summit in San Francisco. I wrote Part 1 of my AWS Summit review series yesterday, and you can read it by clicking here. That article focused on the feel of the conference and gave some details on the keynote. What I want to talk about in Part 2 is the technical sessions I attended, and what I felt about them.
Let's start with Session 1: Introduction to AWS.
I selected this session because I hardly know anything about AWS EC2. I created an account in November and deployed a few things, but I have never used it in anger.
An interesting point, I thought, was that they provide different instance types that are optimised for the workload:
They have multiple hardware revisions and keep updating them, for example compute 1 (c1), cluster compute 1 (cc1), compute 2 (c2), and so on.
Something I didn't know was that they call VMs "Instances"…
During the session the business reasons for why people are looking at cloud came up:
- Simplify your ops
- Scale as required
- Improve resiliency
- Run apps securely
- Run any app
- Reduce your costs
Now, this is nothing new, to be honest. If anyone has attended one of my or Chris Colotti's sessions, you'll know we have been talking about this for years. It was good to see the business requirements resonate throughout the industry though; nice confirmation that we have been telling the right story.
One very interesting thing I noticed is that they talk about resiliency a lot, but it's not resiliency provided by the service; it's how you design and deploy your app for resiliency. You use availability zones, and build in load balancing, application HA, and so on. Now this actually surprised me quite a bit! If you look at any of the cloud providers within the VMware vCloud program, people like Colt, Virtacore, or of course vCloud Hybrid Service, one of the fundamental service offerings is High Availability. This, as you can imagine, is backed by vSphere HA, the proven and tested feature that most enterprises use today, so for AWS not to offer something so simple was a very big surprise to me.
One other thing said in the session is that to enable on-premise-to-cloud VPN, you have to have a hardware VPN on-premise, and that customers don't use software VPNs on-premise! Hmmmm, I beg to differ; lots of customers use software VPNs, especially for cloud services.
So on to Session 2: Backup and Archiving in the Cloud
They stated that they have the fastest NAS gateway service in the world!
Most of the talk was around S3 and Glacier as endpoints for backup vendors' software. Of course they talked a lot about Glacier and its pricing model. Big focus on pricing, as with all the other cloud storage providers. Think of the Google Drive and Dropbox pricing wars! Price, price, price; as consumers we love the pricing wars 🙂
I do have to say, I was extremely disappointed with this session. It was more of a partner sales session than a tech deep dive. Lots of "Commvault does this, Symantec does that", etc. Nice to see Veeam get a mention on the slides.
My final session of the day was Evolving VPC Design.
This was by far the most technical session of the three. Interestingly, I found some of it overly complex compared to how the same thing is done through vCloud providers.
There was a massive focus on how you create networks in a VPC. When you create your networks, by default all of those networks route to each other. I don't like this, to be honest. I think all networks should be closed and non-routable until YOU make them routable. You have to create route tables to configure services to talk across networks, much in the same way you create your on-premise networks.
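To give a rough sketch of what that route table work looks like in practice with the AWS CLI (the resource IDs below are placeholders, and this assumes the CLI is already configured with credentials):

```shell
# Create a custom route table in the VPC (vpc-aaaa1111 is a placeholder ID)
aws ec2 create-route-table --vpc-id vpc-aaaa1111

# Add a route to it, e.g. send Internet-bound traffic to an Internet gateway
aws ec2 create-route \
    --route-table-id rtb-bbbb2222 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-cccc3333

# Associate the route table with a subnet; until you do this,
# the subnet keeps using the VPC's main route table
aws ec2 associate-route-table \
    --route-table-id rtb-bbbb2222 \
    --subnet-id subnet-dddd4444
```

So instead of a closed-by-default model, you are steering what is already routable by editing tables, which is the part I find backwards.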
Another key area that disappointed me is how you utilise NAT. Coming from vCloud Director and vCHS, where the vCNS Edge Gateway is utilised and provides NAT'ing out of the box, with AWS you have to deploy a NAT device, and the major surprise is that it's not HA-enabled by default! To set up NAT in an HA pair, you have to create an autoscale group and run scripts on the devices. This is cool and funky, but not very practical. Nice for a techie like me, but not great for your average consumer.
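To give a flavour of the scripted failover, here is a hedged sketch (not AWS's official script; the IP, route table ID, and instance ID are placeholders): the standby NAT instance pings its peer and, if the peer stops responding, points the route table's default route at itself.

```shell
# Hypothetical failover check, run periodically on the standby NAT instance.
# NAT_PEER_IP, RTB_ID and MY_INSTANCE_ID are placeholder values.
NAT_PEER_IP=10.0.1.10
RTB_ID=rtb-bbbb2222
MY_INSTANCE_ID=i-eeee5555

# If the primary NAT instance stops responding...
if ! ping -c 3 -W 2 "$NAT_PEER_IP" > /dev/null; then
    # ...take over the default route so traffic flows through this instance
    aws ec2 replace-route \
        --route-table-id "$RTB_ID" \
        --destination-cidr-block 0.0.0.0/0 \
        --instance-id "$MY_INSTANCE_ID"
fi
```

That's the kind of plumbing the Edge Gateway hides from you entirely.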
They then started talking about VPC Peering – This got a big round of applause from the audience! I had no clue what it actually was 🙂
VPC Peering is the ability to connect one VPC to another and pass traffic between them. You can peer multiple VPCs together in a star topology. In the VMware community we would simply call this cloud-to-cloud connectivity.
Some limits to this: the soft limit for VPC peering is 50, and the hard limit is 125. Also a big caveat: to peer your VPCs, they must be in the same region. Interestingly, the speaker warned us that you could get out of control with your VPC peering connections, and have it look something like this:
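It's easy to see why it gets out of control: VPC peering is non-transitive (traffic can't hop through an intermediate VPC), so any-to-any connectivity needs a full mesh of pairwise connections. A quick back-of-the-envelope sketch:

```python
def full_mesh_peerings(n: int) -> int:
    """Peering connections for any-to-any connectivity between n VPCs.

    Peering is non-transitive, so every pair of VPCs needs
    its own connection: n choose 2.
    """
    return n * (n - 1) // 2


def star_peerings(n: int) -> int:
    """Connections for a hub-and-spoke (star): one per spoke."""
    return n - 1


# 10 VPCs: a star needs 9 peerings, a full mesh needs 45
print(star_peerings(10))       # 9
print(full_mesh_peerings(10))  # 45
```

At that growth rate, a modest number of fully meshed VPCs burns through the soft limit quickly, which is exactly the spaghetti diagram the speaker was warning about.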
All in all it was a great day. I really enjoyed attending my first non-VMware conference, and found it extremely valuable to understand another cloud provider's perspective outside of the VMware world.
One thing I do want to say, without risking my job and taking food from the mouths of my children, is that, as we all know, AWS have been at this for 8 years. They do have some really cool features that no one else offers today; S3 and Glacier auto-tiering is one of them, and I was really impressed with some of the stuff they can do today.
That being said, I do work for VMware, and want to give my perspective specifically on hybrid. After spending the day with Amazon, it's clear they are talking about how to build hybrid clouds and extend enterprise on-premise data centers to AWS. They just don't seem to do it very well, in my opinion. Yes, you can create VPN tunnels and have workloads running in AWS and on-premise, but getting your workloads to AWS is a major headache. You can't easily do it, and worse still, how would you move them back on-premise? Convert the instance in AWS to a vSphere VM? It's not going to happen quickly.
Thanks for reading.