Is anyone innovating the Cloud like Amazon?

I am a little worried that this blog is becoming a constant affirmation of Amazon and their Cloud services 😉 However, no one else seems to be innovating like Amazon is. I am referring mainly to Infrastructure as a Service (IaaS) vendors rather than the platform guys. With the announcement a couple of days ago of the ability to boot from EBS, Amazon made another leap ahead.

This ability to boot from storage gives EC2 instances faster start-up and also something that anyone who has used Amazon EC2 has wanted – persistence. Now if you want to put a server on ice (elastically contract) you can, without losing configuration-related data and installed software. This is not much of an issue for the open source guys, as they have Chef, Capistrano and other ways to automate and maintain configurations. However, when it comes to Windows servers on EC2 this is a godsend.

I have used Amazon EC2 for hosting production applications, and one of the other requirements you get quite regularly is the need to vertically scale. Booting from EBS means you can upgrade a server instance from a Small to a Large (or other variations) without rebuilding from scratch. Not a big deal for people with a great toolset (Linux/Rails), but a real point of pain for people using Windows.
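
As a rough sketch of what that workflow looks like programmatically: stop the EBS-booted instance, change its type, start it again. This uses the modern boto3 SDK (which postdates this post) and a hypothetical instance ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Stop the instance; the EBS root volume (and everything on it) persists
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Vertically scale: change the instance type while it is stopped
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m1.large"})

# Start it again – same disk, same configuration, bigger box
ec2.start_instances(InstanceIds=[instance_id])
```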

The ability to boot from snapshots is also welcome. It aids configuration management by letting you take a snapshot of the entire machine, apply a patch, and then, if the patch is unsuccessful, reboot the machine from the snapshot.
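
A minimal sketch of that rollback safety net, again using boto3 and hypothetical IDs: snapshot the root volume before patching, and if things go wrong, cut a fresh volume from the snapshot:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot the root EBS volume before patching (hypothetical volume ID)
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="pre-patch checkpoint")

# If the patch fails, build a new volume from the snapshot and attach it
# in place of the broken root volume
restored = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                             AvailabilityZone="us-east-1a")
print("rollback volume:", restored["VolumeId"])
```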

Over at the RightScale blog there were a couple of additional takes on what boot from EBS provides – for example, the ability to mount an existing EBS volume on a configuration server and update the software on it. They also spoke about powering off dev/test environments to pocket some additional cash when they weren't required.

Either way it is a killer feature and much needed for customers running production systems on Amazon EC2.

Azure Virtual Machine Role

I think everyone knew about it or suspected it, but the 'Azure VM Role' was announced today at PDC. This functionality will allow you to run a full operating system instance and maintain administrative rights on the Azure platform. The scenario provided by Bob Muglia was choosing a base operating system instance, installing your software and other tools, snapshotting the VM to a VHD and supplying it back to Azure to run on the fabric. I suspect this will be Windows-only for quite a while, but it will allow more complex/legacy applications to be hosted within Azure. Lastly, as part of this full administrative access you will also be able to initiate a Terminal Services session (RDP) directly to the server.

Bob's slide broke it down into four easy steps.

1. Select Base Windows Server Image

2. Customize Virtual Machine Role

3. Snapshot Virtual Machine Image

4. Deploy Application and Target Your New VM Role

Bridging the gap to the Cloud

In an earlier post I discussed why Amazon's IPSEC support was important for Enterprises, giving them the ability to treat the cloud provider as an elastic resource while maintaining security of data transfer.

At PDC today Bob Muglia (President of Server and Tools) discussed 'Project Sydney' as a way to bridge the gap between the cloud and on-premise equipment. In a demo he showed a web application running in Azure accessing a database that resided within the corporate network. He did state 'IPSEC', and in a further session at PDC Yousef Khalidi described cloud components and on-premise servers as existing in a secured virtual LAN.

From what I can see at this time it doesn't sound like network-level IPSEC but rather server-to-server IPSEC, though I could be proven wrong any time now. Regardless of the method it is a pretty powerful theme – allowing organisations to keep sensitive, high-performance or high-security systems within their own Data Centres or hosting providers while still utilising Cloud providers.

This is also subtly different from the functionality Amazon released. The Amazon Virtual Private Cloud provides an IPSEC tunnel to a private set of cloud servers that have no public access. This means you can use it to extend your data centre and host some components with Amazon, BUT you can't utilise this infrastructure for front-end Web services.

What Microsoft is delivering with Project Sydney is much more focused on giving Web applications on Azure the ability to tunnel back to data or other services located within the corporate network.

I will keep digging.

Can the Citrix marketing team count?

While I am not normally one to care about vendor spam, I received what I thought was a really odd email from Citrix regarding their XenDesktop capabilities versus VMware's offering.

Whether they are right or wrong… the marketing attempt is quite silly. Check the Subject of the email!

The email:

[Image: the Citrix email – a Subject line promising '6 things', a body listing only 4]

Why call out 6 things, and then only identify 4?

I can't work out where the 6 things were. Turns out it isn't me who can't count.

Elastic Private Clouds are the new Black

I spent the night on Google Reader catching up on things I haven't read for a while and strayed onto a couple of posts related to Private Clouds and elasticity/scalability. One article even claimed that private clouds can aid in reducing OPEX. Check them out here and here.

“Private Clouds provide many of the benefits of the Public Cloud, namely elastic scalability, faster time-to-market and reduced OpEX, all within the Enterprises own perimeter that complies to its governance.” (Source)

HUH? Elastic, scalable and I am saving money? That’s the train I want to get on.

Private clouds are exactly that: funded by a company as part of their normal CAPEX. Eucalyptus is awesome and gives you the *potential* to be elastic and the *potential* to scale. Expand and contract to your heart's content, but when you're out of raw materials you're spending cash – plain and simple. So how are they reducing OPEX? Remember that Eucalyptus can't even run Windows, which still dominates usage within most Enterprises. Maybe it was related to licensing? Again, I can't see many Enterprises shifting VMware out and moving Eucalyptus in to reduce licensing – why wouldn't they just automate KVM builds? At least then they'd get Windows VMs.

This is why 'Private Clouds' is a dubious term and reeks of vendors quickly remodelling to sell the same old wares.

Unless you're using the 'cloud' to abstract the components sitting latent within your Enterprise, it exhibits nothing that is cloud. You can always check my previous post for a definition. What I mean is: if you were using Eucalyptus to redeploy commodity hardware, or you put terabytes of SATA drives into old server stock and linked it into Eucalyptus' S3-compatible storage service (Walrus), then it might be classed as private cloud – otherwise it's just virtualization.

Abstracting the raw materials within your data centre and achieving some elasticity is a big task, and Eucalyptus can help get you there. Even then, your 'governance' must be rigid indeed if you still wouldn't look to Amazon or Rackspace to fulfil your requirements.

Amazon’s Virtual Private Cloud

Amazon has released their Virtual Private Cloud, allowing people to extend their network and services to Amazon's cloud. Werner Vogels (Amazon's CTO) describes VPC on his blog. The Virtual Private Cloud is particularly important because it throws the doors wide open for Enterprises to use cloud services. It gives them the ability to embrace cloud services without requiring the sophistication of abstracting their applications or re-writing them. Let me explain.

It seems the majority of people adopting Amazon's services up until now have been web companies who saw Amazon's elastic, internet-facing infrastructure as a way to achieve scale without redundant internet connections, BGP, HA'd firewalls, load-balancers and so on. It was internet-facing, and this is exactly what they needed. However, the Enterprise was left at the door. I believe even the most avant-garde IT manager ruled Amazon out for a number of reasons:


  • Enterprises have applications that run on private networks. They are not internet facing.
  • Security is an issue and having a thin layer of firewall services is operating on the edge.
  • Anyone can attack your decision to host Enterprise services in the cloud using the classic FUD.
  • Integration of legacy systems or interaction with other Enterprise systems.


Amazon's VPC addresses these concerns. No longer are systems sitting on the internet; they sit on a private network that can only be routed to and from the Enterprise network. Network access can be governed via corporate firewalls, and visibility of networks can be governed via Enterprise routing policies. It reflects a paradigm most IT managers already use for communication between primary and secondary data centres when WAN links fail – Internet VPNs.

You can now carve off some IP address space from your internal network, host it at Amazon and redistribute the route into your Enterprise network so Users and other IT systems can access, replicate, integrate etc. Impressive.
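
To make the "carve off some IP address space" idea concrete, here is a rough sketch of wiring up a VPC and an IPSEC tunnel back to a corporate gateway. It uses the modern boto3 SDK (which postdates this post) and hypothetical addresses:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Carve off a slice of the internal address space for the Amazon side
vpc = ec2.create_vpc(CidrBlock="10.10.0.0/24")["Vpc"]

# Describe the corporate VPN endpoint (hypothetical public IP and ASN)
cgw = ec2.create_customer_gateway(Type="ipsec.1",
                                  PublicIp="203.0.113.12",
                                  BgpAsn=65000)["CustomerGateway"]

# Create the Amazon-side VPN gateway and attach it to the VPC
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId=vpc["VpcId"], VpnGatewayId=vgw["VpnGatewayId"])

# The IPSEC tunnel itself; the returned config is fed to the corporate router
vpn = ec2.create_vpn_connection(Type="ipsec.1",
                                CustomerGatewayId=cgw["CustomerGatewayId"],
                                VpnGatewayId=vgw["VpnGatewayId"])
print("tunnel:", vpn["VpnConnection"]["VpnConnectionId"])
```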

Amazon should have called VPC the 'floodgate', as it is now a real no-brainer for an Enterprise to start mass adoption of Amazon's services. I also predict the early movers will be people replacing their disaster recovery environments with DR environments hosted at Amazon. This would allow them to utilise their old DR environment as a new production environment (to achieve greater scale) or simply reduce their operating costs.

Think of something as generic as file servers based on an OpenSolaris ZFS file system. Snapshots can be created and replicated from the Enterprise to a system located on a private network at Amazon. They no longer have to be encrypted and sent to a server sitting on the 'public' internet. They can be copied from a file server deep within the Enterprise to a network that no one has to know is located in the Cloud 😉
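
As a rough illustration of that replication flow – the standard ZFS idiom is 'zfs send' piped into 'zfs receive' over ssh; the dataset and host names below are hypothetical:

```python
import subprocess

SNAPSHOT = "tank/files@nightly"     # hypothetical dataset/snapshot names
REMOTE = "filer.vpc.internal"       # hypothetical host on the VPC subnet

# Take the snapshot on the Enterprise file server
subprocess.run(["zfs", "snapshot", SNAPSHOT], check=True)

# Stream it across the IPSEC tunnel to the Amazon-side filer
send = subprocess.Popen(["zfs", "send", SNAPSHOT], stdout=subprocess.PIPE)
subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", "tank/files"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```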

What is Cloud?

You hear this quite a bit, certainly from Infrastructure people – what is Cloud? How do you define Cloud?

It's a really easy answer, and I believe this series of headings is a good basis for it.

Elastic

Probably the first rule of cloud is elasticity, in the sense of scaling to meet demand and contracting when demand falls away. Instead of procuring resources to meet the peaks and then operating at a lower mean, 'cloud' enables people to grow and contract. This is obviously found in Amazon's EC2, Azure and Google App Engine, but also in Amazon's Hadoop service.

If you aren’t elastic then you aren’t Cloud.

It’s Month-to-Month

A little weird as a second point, but the thing I love about Cloud is that it is month-to-month. There is no commitment beyond me paying for what I use. If you look at the Cloud as a deregulated resource market where commodity services are traded in short time frames and commitments are on a usage basis, then you get my idea.

Cloud means less commitment and I believe this drives efficient use of resources. Squirrels store nuts for winter – people don’t.

Standard Interfaces using an API

Further to the point about deregulation, the way this was actually enacted (and done perfectly by Amazon) was via an API and a tool-set. They abstracted their resources, presented them via an API and then provided some tools to use the service. The 'portal' came later – initially it was a tool-set used via the command line.

There is beauty in this. The abstraction of the interface simplifies access to the resources. You can fire things up and shut things off. You can experiment, simulate, practice and then turn off an entire environment. Then you're done.
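
That "fire it up, experiment, turn it all off" loop is literally a handful of API calls. A minimal sketch, again with the modern boto3 SDK and a hypothetical AMI ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Fire up an experiment environment (hypothetical AMI ID)
run = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                        InstanceType="m1.small",
                        MinCount=1, MaxCount=3)
ids = [i["InstanceId"] for i in run["Instances"]]

# ... experiment, simulate, practice ...

# Then you're done – tear the whole environment down
ec2.terminate_instances(InstanceIds=ids)
```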

Eventually Consistent

'Eventually Consistent' is one of my favourite themes, and I was put onto it when CouchDB was first released. Whether it is Big Data (datasets) or large amounts of file storage, the Cloud is replicated and eventually consistent. I suppose what I like about this is that replication is performed as part of the Cloud, and as a user I don't really need to know about it. I also like the scale that eventually consistent architectures give you. You have a big application? How about you spread your data out across the globe and service the user out of their closest geo? Why stop there? How about you replicate entire data sets (over time) and put the data and the files close to the user? CouchDB takes this a step further and allows topologies where you can maintain equipment within your organisation and still replicate to the Cloud.
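
CouchDB exposes that replication as a single HTTP call. A minimal sketch (hypothetical database names and hosts) that asks a local CouchDB to continuously push a database to a Cloud-hosted copy:

```python
import json
import urllib.request

# Ask the local CouchDB to continuously replicate to a Cloud-hosted copy
payload = json.dumps({
    "source": "appdata",                                 # hypothetical local db
    "target": "http://couch.example.com:5984/appdata",   # hypothetical remote db
    "continuous": True,
}).encode()

req = urllib.request.Request("http://localhost:5984/_replicate",
                             data=payload,
                             headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read().decode())
```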

Truly Massive

The Cloud exhibits truly massive behaviour, and this is required to handle the exponential increase in data – files, databases, personalisation data and so on. Hadoop and HBase are awesome examples of people resorting to online (HBase) and offline (Hadoop) methods of dealing with Truly Massive data. S3 and Azure Blob Storage are examples at the file level.

The Politics of The Sneaker Cloud Pimps

I think there is a real danger in people thinking that 'Enterprise 2.0' is the Cloud. You also see the muddying of the Cloud water with the term 'Private Clouds'. So you re-badged your CIFS storage as a Private Cloud – well done, you're a winner. And virtualization isn't 'Cloud' either (it supports Cloud, yes). Eucalyptus, on the other hand, is a truly awesome initiative that everyone should get behind. It took what Amazon had done and extended the nomenclature so that everyone could implement a Cloud.

I suppose what I am getting at is that Cloud exhibits a series of characteristics that solve problems when they come together. You can't take the file storage without the API or the replication (it's Cloud!). You could do elastic virtualisation, but then how do I process this dataset in minutes rather than hours? How do I store a couple of terabytes of data on this elastically virtualised platform? That's why I like Eucalyptus: it has the virtualization (Xen/KVM), the S3-compatible storage and also Elastic Block Storage.

Anyway, that was my opening gambit so that I can get @cloudpimps off my back and say that I have started blogging. My focus on this blog will be a lot more on the kinds of characteristics in this post: investigating the different areas of Cloud, adding to them, finding cool new ways to solve problems – and staying well and truly out of the Enterprise (it's a dinosaur, you know?) 😉


Coming soon to a Thin Client near you… Wyse V10L (WTOS) multi-broker options!

A lot of customers evaluating VDI are probably looking at the Wyse (WTOS) V10L series of Thin Client. Wyse have a kick-ass little unit in these devices, and I am sure their sales are ticking along, as the ease of management for these puppies is, well… easy!

Getting back on point, the V10L does have the odd limitation, and one is that it's not running a full VMware View client – it's their port/interpretation of the features they think customers will want the most. And thus, there was only an option for a single broker: VDIBroker= <set in your WNOS.INI>.

While this is fine for most customers, it's not for those planning mass VDI deployments with multiple brokers internally. Brokers might be segregated by region, business unit, etc… and previously this would have put the poor lil V10L out of a large sale. Customers needed to use a VMware View client (Win32 or Linux variants) to get a multi-broker capable client.

But… Wyse are listening to their customers and partners! Just a few weeks back, after I sent in a Feature Request for multi-broker supported firmware, they sent me a working beta of exactly what is needed. While it was initially limited to 5 brokers, this should get upped shortly, and hopefully permanently baked into new V10L firmware releases. This is another tick in the box for the V10L, where it previously had an X.
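
For context, the shipping firmware only honours a single broker entry in WNOS.INI; something along these lines is what the beta enables. Note the multi-broker syntax below is purely hypothetical, for illustration only – the actual parameter format in the beta may differ:

```ini
; Shipping firmware: one broker only (the documented WNOS.INI parameter)
VDIBroker=https://broker1.example.com

; Beta firmware: multiple brokers (HYPOTHETICAL syntax for illustration
; only – the real parameter format may differ)
VDIBroker=https://broker1.example.com;https://broker2.example.com
```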

There are a few more features coming to the V10L firmware that I'm happy about, but I'll talk about them another day. What really pleases me is that Wyse take their feature requests seriously, and their turnaround times from request to private beta have been outstanding – sometimes less than 48 hours!

Let's hope Wyse keep up the development for these little units!

VMware View “Broken pipe” error

This is a quick FYI for those running VMware View Connection Servers (brokers/managers). Regardless of whether you are using 'Direct' or 'Tunnelled' connections, you may see messages in your logs similar to the ones below, with sequential request numbers.

(Request112) Request failed: com.vmware.vdi.ob.tunnelservice.cx: Failed whilst returning body: java.io.IOException: Broken pipe

(Request111) Request failed: com.vmware.vdi.ob.tunnelservice.cx: Failed whilst returning body: java.io.IOException: Broken pipe

The error messages are generally caused by an ADC or Load Balancer that is polling the Connection Server's web server and forcing a close of the client connection. This is not something the View Connection Server expects to handle, so it dumps this message to the logs.
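
You can reproduce the behaviour yourself with a crude emulation of a load balancer monitor – open a connection to the broker, then slam it shut without reading the response. A sketch, assuming the broker answers HTTP on port 80 and using a hypothetical hostname:

```python
import socket

BROKER = "view-broker.example.com"  # hypothetical Connection Server

# Emulate a dumb monitor: connect, poke, and hang up without reading
# the reply. The broker's attempt to write back then fails with the
# 'Broken pipe' seen in the logs.
s = socket.create_connection((BROKER, 80), timeout=5)
s.sendall(b"GET / HTTP/1.0\r\nHost: " + BROKER.encode() + b"\r\n\r\n")
s.close()  # close immediately – the server's response hits a dead socket
```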

Essentially, it's a false positive in most cases, and I have heard on good authority that the VMware developers will change the log messages to something a little more friendly in future (~View 4.0).

If you're seeing loads of these errors and you're not using a Load Balancer, then you could have some clients out there pushing load generator scripts at your brokers, or potentially some idiot trying out his new script-kiddie util. Check out vAudit from Richard Garsthagen's site here — http://www.run-virtual.com/?dl_id=1 if you want more detail around successful and unsuccessful user logins. It's a great utility for those running View 3.x environments.


Cheers.

Intro’s and what not…

Welcome to the 1st post from the Cloud Pimps…

Initially I'd like to start by covering off what we're all about… and that is T&T. Tits & Technology… yeah, it may be a bit sexist, but that was our mantra over a decade ago… and while it still has some element of truth, we lean more towards the technology side these days.

Going back in time, the two of us created a very successful site (girl4ruste.com) about the former topic mentioned above, which has now gone by the wayside – if you're interested to see our efforts, check it out via the Wayback Machine; most of the links are probably broken now, but anyway — that was the past. This site will be about the latter topic, but you might hear about both intermixed in our posts depending on our mood.

We aim to cover many topics, but we are primarily focussed on Cloud computing, Virtualisation and all those sticky grey matter areas in between.

If you're keen to follow us, you can catch us both on Twitter: @cloudpimps & @cloudjunky!


Like a rhino…..