
Monday, October 19, 2015

Liberty, the 12th release of OpenStack, came out last week

 
Quoting from its website:
“OpenStack Liberty, the 12th release of the open source software for building public, private, and hybrid clouds, offers unparalleled new functionality and enhancements. With the broadest support for popular data center technologies, OpenStack has become the integration engine for service providers and enterprises deploying cloud services.
With 1,933 individual contributors and 164 organizations contributing to the release, Liberty offers finer-grained management controls, performance enhancements for large deployments and more powerful tools for managing new technologies such as containers in production environments”
 
Here you can see a short video explanation:
https://www.youtube.com/watch?v=e7r2-p8Mki4?autoplay=1



And the press release is quoted below:
 

Newest OpenStack® Release Expands Services for Software-Defined Networking, Container Management and Large Deployments

AUSTIN, Texas // October 15, 2015 — Cloud builders, operators and users unwrap a lengthy wish list of new features and refinements today with the Liberty release of OpenStack, the 12th version of the most widely deployed open source software for building clouds. With the broadest support for popular data center technologies, OpenStack has become the integration engine for service providers and enterprises deploying cloud services.
 
Available for download today, OpenStack Liberty answers the requests of a diverse community of the software’s users, including finer-grained management controls, performance enhancements for large deployments and more powerful tools for managing new technologies like containers in production environments.
 
Enhanced Manageability
Finer-grained access controls and simpler management features debut in Liberty. New capabilities like common library adoption and better configuration management have been added in direct response to the requests of OpenStack cloud operators. The new version also adds role-based access control (RBAC) for the Heat orchestration and Neutron networking projects. These controls allow operators to fine tune security settings at all levels of network and orchestration functions and APIs.
 
Simplified Scalability
As the size and scope of production OpenStack deployments continue to grow—both public and private—users have asked for improved support for large deployments. In Liberty, these users gain performance and stability improvements that include the initial version of Nova Cells v2, which provides an updated model to support very large and multi-location compute deployments. Additionally, Liberty users will see improvements in the scalability and performance of the Horizon dashboard, Neutron networking and Cinder block storage services, as well as during upgrades to Nova’s compute services.
 
Extensibility to Support New Technologies
OpenStack is a single, open source platform for management of the three major cloud compute technologies: virtual machines, containers and bare metal instances. The software also is a favorite platform for organizations implementing NFV (network functions virtualization) services in their networking topologies. Liberty advances the software’s capabilities in both areas with new features like an extensible Nova compute scheduler, a network Quality of Service (QoS) framework and enhanced LBaaS (load balancing as a service).
 
Liberty also brings the first full release of the Magnum containers management project. Out of the gate, Magnum supports popular container cluster management tools Kubernetes, Mesos and Docker Swarm. Magnum makes it easier to adopt container technology by tying into existing OpenStack services such as Nova, Ironic and Neutron. Further improvements are planned with the new project, Kuryr, which integrates directly with native container networking components such as libnetwork.
 
The Heat orchestration project adds dozens of new resources for management, automation and orchestration of the expanded capabilities in Liberty. Improvements in management and scale, including APIs to expose which resources and actions are available (all filtered by RBAC), are included in the new release.
 
1,933 individuals across more than 164 organizations contributed to OpenStack Liberty through upstream code, reviews, documentation and internationalization efforts. The top code committers to the Liberty release were HP, Red Hat, Mirantis, IBM, Rackspace, Huawei, Intel, Cisco, VMware, and NEC.
 
Focus on Core Services with Optional Capabilities
During the Liberty release cycle, the community shifted the way it organizes and recognizes upstream projects, which became known by community members as the “big tent.” Ultimately, the change allows the community to focus on a smaller set of stable core services, while encouraging more innovation and choice in the broader upstream ecosystem.
The core services, available in every OpenStack-Powered product or public cloud, center around compute (virtualization and bare metal), storage (block and object) and networking.
New projects added in the last six months provide optional capabilities for container management (supporting Kubernetes, Mesos and Docker Swarm) with Magnum, network orchestration with Astara, container networking with Kuryr, billing with CloudKitty and a Community App Catalog populated with many popular application templates. These new services join already recognized projects to support big data analysis, database cluster management, orchestration and more.
 
Supporting Quotes
“Liberty is a milestone release because it underscores the ability of a global, diverse community to agree on technical decisions, amend project governance in response to maturing software and the voice of the marketplace, then build and ship software that gives users and operators what they need. All of this happens in an open community where anyone can participate, giving rise to an extensible platform built to embrace technologies that work today and those on the horizon.”
— Jonathan Bryce, executive director, OpenStack Foundation
 
“We use OpenStack because it delivers the core services we need in a production cloud platform that can extend to new technologies like containers. The ability to embrace emerging technologies as an open community rather than going solo is a primary reason why we’re sold on OpenStack.”
— Lachlan Evenson, cloud platform engineering, Lithium Technologies
 
“OpenStack has emerged as an increasingly capable and widely deployed open cloud technology. The companies using it successfully are those that have done their research, engaged with the project’s community and deployed in manageable stages. We expect OpenStack-based service providers will outgrow the overall IaaS service provider market through 2019.”
— Al Sadowski, research director, 451 Research
 
“Notable Fortune 100 enterprises like BMW, Disney, and Wal-Mart have irrefutably proven that OpenStack is viable for production environments. These are regular companies, not firms that were born digital like Etsy, Facebook, and Netflix. OpenStack’s presence in the market is now accelerating, leveraging the success of these pioneers.”
— Lauren Nelson, senior analyst, Forrester Research, as written in “OpenStack Is Ready — Are You?,” a May 2015 report from Forrester Research.

Wednesday, December 3, 2014

Is the Operating System part of the IaaS (Infrastructure as a Service) in cloud computing?

I’ve recently debated whether the Operating System (in a real Cloud environment) is part of the IaaS or not, and therefore whether its control (management, monitoring and so on) is the customer’s responsibility or the provider’s.
 
On the one hand, according to the NIST definition of Cloud Computing (the most widely accepted one, “The NIST Definition of Cloud Computing“, “Special Publication 800-145“) and quoting from it: “IaaS: The capability provided to the consumer is to provision processing, storage, network and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure, but has control over operating systems and deployed applications …”. So, puristically speaking, the Operating System is not part of the IaaS, as shown in the next picture, which emphasizes the control scope of the consumer and provider in an IaaS service:
 
IaaS-control scope of the consumer and provider

  
On the other hand, in practice some Cloud Providers, in their IaaS provisioning dashboards, let you choose the operating system (“image”) to deploy in the Virtual Machine (VM) you provision. So they are responsible for guaranteeing that the Operating System “image” is good; in some way they have a partial responsibility at the Operating System level (crossing the border of the IaaS), but only for the first deployment of the operating system in the VM; after that the customer gets control of the Operating System, so he is fully responsible for it and for any software built or installed on it. This other picture shows this fuzzy border for the initial step in the VM provisioning responsibilities:
 
IaaS-fuzzy border for the initial step in the VM provisioning responsibilities
 
Note, of course, that other (most) IaaS cloud providers let you upload your own Operating System images, so they are responsible for providing you the VM on the hypervisor (or container) chosen by them, but nothing else, matching the purist definition of IaaS. Note: this is the case of Tissat; we offer a wide catalogue of operating system images, but our Cloud Platform (called Nefeles, and based on OpenStack) lets the customer upload their own images too.
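To make that boundary more concrete, here is a minimal sketch (in Python, against the OpenStack Image Service v2 REST API) of how a customer could register and upload their own operating system image; the endpoint URL, token and file name are hypothetical placeholders, and the exact calls may differ from one platform to another:

import requests

GLANCE_URL = "https://cloud.example.com:9292"    # hypothetical Image service endpoint
TOKEN = "gAAAA..."                               # a Keystone token obtained beforehand
HEADERS = {"X-Auth-Token": TOKEN}

# 1) Register the image record: the customer, not the provider, declares the OS image.
meta = {
    "name": "my-own-ubuntu-14.04",
    "disk_format": "qcow2",
    "container_format": "bare",
    "visibility": "private",
}
image = requests.post(f"{GLANCE_URL}/v2/images", json=meta, headers=HEADERS).json()

# 2) Upload the actual bits; from this point on, whatever is inside the image
#    (and inside the VMs booted from it) is the customer's responsibility.
with open("ubuntu-14.04-custom.qcow2", "rb") as disk:
    requests.put(
        f"{GLANCE_URL}/v2/images/{image['id']}/file",
        data=disk,
        headers=dict(HEADERS, **{"Content-Type": "application/octet-stream"}),
    )

In this scenario the provider only guarantees that the VM runs on its hypervisor; everything above that line belongs to the consumer, as argued above.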
 
In addition to the first picture, the next one shows the PaaS and SaaS control scope of consumer and provider according to the NIST definition:
 
PaaS & SaaS-control scope of the consumer and provider
 
 
Finally, the border between IaaS, PaaS, and SaaS can be summarized in the following picture:
 
IaaS, PaaS & SaaS-control scope of the consumer and provider-1
 
 
Or in a simplified way in this one:
 
IaaS, PaaS & SaaS-control scope of the consumer and provider-2

Monday, October 20, 2014

“Juno” release of OpenStack has just been delivered

This post is just a reminder that last Friday (October 17th) the new version of OpenStack, named Juno, was released.
 
As Stefano Maffulli says in his e-mail to the OpenStack community, IT IS THE RESULT OF THE EFFORT OF 1,419 PEOPLE, from 133 organizations, who contributed to its development. OpenStack Juno is the tenth release of the open source software for building public, private, and hybrid clouds and it has 342 new features to support software development, big data analysis and application infrastructure at scale.
 
Let me emphasize that in this new version Sahara is completely integrated (it was in incubation in the previous version). Sahara is the Data Processing module, based on Hadoop, for Big Data processing support; i.e. its capabilities automate the provisioning and management of big data clusters using Hadoop and Spark. Big data analytics are a priority for many organizations and a popular use case for OpenStack, and this service lets OpenStack users provision needed resources more quickly.
 
Another significant advance is that the foundation for Network Functions Virtualization (NFV) has been consolidated in Juno, providing improved agility and efficiency in telco and service provider data centers.
 
Let me copy and mix from the Juno website and the official press release to summarize the main features (module by module):
  • Compute (Nova). Operational updates to Compute include improvements for rescue mode that enable booting from alternate images with the attachment of all local disks. Also, per-network settings are now allowed by improved nova-network code; scheduling updates to support scheduling services and extensibility; and internationalization updates. Key drivers were added such as bare metal as a service (Ironic) and Docker support through StackForge. Additional improvements were made to support scheduling and live upgrades.
  • Object Storage (Swift). Object Storage hit a major milestone this release cycle with the rollout of storage policies. Storage policies give users more control over cost and performance in terms of how they want to replicate and access data across different backends and geographical regions. Other new features include updated support for the Identity project (Keystone) and account to account copy feature rollout. Additional work on erasure coding within object storage continues and is expected sometime during the Kilo release cycle.
  • Block Storage (Cinder). Block Storage added ten new storage backends this release and improved testing on third-party storage systems. Cinder v2 API integration into Nova was also completed this cycle. The block storage project continues to mature each cycle building out core functionality with a consistent contributor base.
  • Networking (Neutron). Networking features support for IPv6 and better third-party driver testing to ensure consistency and reliability across network implementations. The release enables plug-ins for the back-end implementation of the OpenStack Networking API and blazes an initial path for migration from nova-network to Neutron. Supporting Layer 3 High Availability, the networking layer now allows a distributed operational mode.
  • Dashboard (Horizon). Dashboard rolled out the ability to deploy Apache Hadoop clusters in seconds, giving users the ability to rapidly scale data sets based on a set of custom parameters. Additional improvements include extending the RBAC system to support OpenStack projects Compute, Networking, and Orchestration.
  • Identity Service (Keystone). Federated authentication improvements allow users to access private and public OpenStack clouds with the same credentials. Keystone can be configured to use multiple identity backends, and integration with LDAP is much easier.
  • Orchestration (Heat). In Juno, it is easier to roll back a failed deployment and ensure thorough cleanup. Also, administrators can delegate resource creation privileges to non-administrative users. Other improvements included implementation of new resource types and improved scalability.
  • Telemetry (Ceilometer). Telemetry reported increases in performance this cycle as well as efficiency improvements including metering of some types of networking services such as load balancers, firewalls and VPNs as a service.
  • Database Service (Trove). The database service went through its second release cycle in Juno delivering new options for MySQL replication, Mongo clustering, Postgres, and Couchbase. A new capability included in Juno allows users to manage relational database services in an OpenStack environment.
  • Image Service (Glance). The Image Service introduced artifacts as a broader definition for images during Juno. Other key new features included asynchronous processing, a Metadata Definitions Catalog and restricted policies for downloading images.
  • Data Processing (Sahara). The new data processing capability automates provisioning and management of big data clusters using Hadoop and Spark. Big data analytics are a priority for many organizations and a popular use case for OpenStack, and this service lets OpenStack users provision needed resources more quickly.
 
In Tissat we’ve been testing the last beta versions and they look great, and we are starting to plan the migration of our LIVE environment.

Tuesday, July 1, 2014

Tissat awarded as one of the best “EU Code of Conduct for DataCentres” practitioners by the European Commission


I’m proud to announce that last month (May 28th) Tissat received the European Commission’s annual award as one of the best practitioners of the “EU Code of Conduct for DataCentres” for its DataCentre “Walhalla”, in Castellón, Spain.
 
 
This award is the result of the Research & Development activities and projects executed by Tissat in the DC energy efficiency arena: from those partially funded by Spanish Government Agencies (“Green DataCenter”, “CPD verde” or “RealCloud”) to others partially funded by the European Commission (such as “CloudSpaces”).
 
Picture of the European Commission Award

 

Sunday, April 20, 2014

“Icehouse” release of OpenStack has just been delivered

This post is just a reminder that, as foreseen, a couple of days ago (Thursday, the 17th) the new version of OpenStack, named Icehouse, was released.
 
As Stefano Maffulli says in his e-mail to the OpenStack community, IT IS THE RESULT OF THE EFFORT OF 1,202 PEOPLE, from 120 organizations, who contributed to its development.
 
Approximately 350 new features have been added (rolling upgrades, federated identity, tighter platform integration, etc.), but in my opinion the most significant is that the “OpenStack Database Service” (Trove), which was incubated during the Havana release cycle, is now available.
 
Other programs still in incubation (already under development during Icehouse) are Sahara (OpenStack Data Processing, i.e. to provision a Hadoop cluster on OpenStack), Ironic (OpenStack Bare Metal as a Service) and Marconi (OpenStack Messaging), and we hope they go live in the next release of OpenStack, code-named Juno, foreseen in 6 months.
 
In Tissat we have been testing the last beta versions and they look great, and we are starting to plan the migration of our LIVE environment.
 
Quoted from the official press release, these are the main features (module by module):
  • OpenStack Database Service (Trove): A new capability included in the integrated release allows users to manage relational database services in an OpenStack environment.
  • OpenStack Compute (Nova): New support for rolling upgrades minimizes the impact to running workloads during the upgrade process. Testing requirements for third-party drivers have become more stringent, and scheduler performance is improved. Other enhancements include improved boot process reliability across platform services, new features exposed to end users via API updates (e.g., target machines by affinity) and more efficient access to the data layer to improve performance, especially at scale.
  • OpenStack Object Storage (Swift): A major new feature is discoverability, which dramatically improves workflows and saves time by allowing users to ask any Object Storage cloud what capabilities are available via API call (a minimal sketch of this call is shown after this list). A new replication process significantly improves performance, with the introduction of s-sync to more efficiently transport data.
  • OpenStack Block Storage (Cinder): Enhancements have been added for backend migration with tiered storage environments, allowing for performance management in heterogeneous environments. Mandatory testing for external drivers now ensures a consistent user experience across storage platforms, and fully distributed services improve scalability.
  • OpenStack Networking (Neutron): Tighter integration with OpenStack Compute improves performance of provisioning actions as well as consistency with bulk instance creation. Better functional testing for actions that require coordination between multiple services and third-party driver testing ensure consistency and reliability across network implementations.
  • OpenStack Identity Service (Keystone): First iteration of federated authentication is now supported allowing users to access private and public OpenStack clouds with the same credentials.
  • OpenStack Orchestration (Heat): Automated scaling of additional resources across the platform, including compute, storage and networking is now available. A new configuration API brings more lifecycle management for applications, and new capabilities are available to end-users that were previously limited to cloud administrators. Collaboration with OASIS resulted in the TOSCA Simple Profile in YAML v1.0, demonstrating how the feedback and expertise of hands-on OpenStack developers can dramatically improve the applicability of standards.
  • OpenStack Telemetry (Ceilometer): Improved access to metering data used for automated actions or billing / chargeback purposes.
  • OpenStack Dashboard (Horizon): Design is updated with new navigation and user experience improvements (e.g., in-line editing). The Dashboard is now available in 16 languages, including German, Serbian and Hindi added during this release cycle.
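As a footnote to the Object Storage point above, here is a minimal sketch of that discoverability call in Python; the proxy URL is a hypothetical placeholder and the exact capabilities returned depend on each deployment:

import requests

SWIFT_URL = "https://swift.example.com"    # hypothetical Object Storage proxy

# /info is served at the proxy root and, by default, needs no authentication.
caps = requests.get(f"{SWIFT_URL}/info").json()

print(sorted(caps))                          # enabled capabilities/middleware, e.g. ['swift', 'tempurl', ...]
print(caps["swift"].get("max_file_size"))    # cluster constraints exposed to the user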
 

Tuesday, April 1, 2014

Different NTT and Peer 1 surveys reveal changes in cloud buying patterns after the disclosure of NSA activities

Around November 2013, I wrote in this blog a 3-post series about one of the consequences of the NSA espionage, as the data disclosed by Snowden were made public and analyzed; the posts were titled Personal Data Privacy & (Europe’s) Cloud Regulation: (I) The dilemma, (II) The privacy approach; and (III) Resignation?? In those posts I drew my own conclusions:
  • Finding the balance between Personal Data Privacy and Business Regulation is key, and it’s not easy to solve this dilemma; it is even harder when the business is built around a technology like the Cloud, where free movement of data is intrinsically one of its advantages.
  • Data security must be improved (the use of strong encryption that can protect user data from all but the most intense decryption efforts).
  • Finally, another worrying reflection is that the NSA has shown that it is subject to the same risks of Data Loss (no matter how it happens) as any other business, and Snowden is certainly not the only one who had access to other people’s private data.
  • I understand better why (although very slowly) the European Commission wants to regulate some related subjects more strictly, despite the fact that those measures may have a negative impact on both business and innovation.

Now it’s been found that NSA activities changed cloud buying patterns, according to different NTT and Peer 1 surveys.
 
On one hand, a survey conducted by NTT Communications (titled “NSA Aftershocks: How Snowden has changed IT decision-makers’ approach to the cloud”) shows the consequences of the NSA activity mainly in the US, but also in Canada, the UK, France, Germany and Hong Kong. And, from my point of view, the main results found are:
  • Almost nine tenths of ICT decision-makers are changing their cloud buying behaviours in the wake of Edward Snowden’s cyber-surveillance allegations.
  • Only 5% of respondents believe location does not matter when it comes to storing company data
  • It found 25% of UK and Canadian IT decision makers said they had made plans to move company data outside of the US
Please let me quote (and extract) this 9-point summary of the report conducted by NTT Communications, according to the press release published by NTT itself:
1) 88% of ICT decision-makers are changing their cloud buying behaviour, with 38% amending their procurement conditions for cloud providers
2) Only 5% of respondents believe location does not matter when it comes to storing company data
3) 31% of ICT decision-makers are moving data to locations where the business knows it will be safe
4) 62% of those not currently using cloud feel the revelations have prevented them from moving their ICT into the Cloud
5) ICT decision-makers now prefer buying a cloud service which is located in their own region, especially EU respondents (97%) and US respondents (92%)
6) 52% are carrying out greater due diligence on cloud providers than ever before
7) 16% are delaying or cancelling contracts with cloud service providers
8) 84% feel they need more training on data protection laws
9) 82% of all ICT decision-makers globally agree with proposals by Angela Merkel for separating data networks
Note: The survey questioned 1,000 ICT decision makers on their approach to the Cloud, and took responses from decision-makers in France, Germany, Hong Kong, the UK and the US.
 
Besides, on the other hand, Peer 1 surveyed 300 companies about storing data in the US to analyze the effects of the NSA activity (after the Snowden revelations), and they found that (let me add it to the previous NTT list):
10) It found 25% of UK and Canadian IT decision makers said they had made plans to move company data outside of the US.
Note: See this DataCenter Dynamics news for more details.
 
Finally, the change of policy at Microsoft some months after the Snowden scandal is well known:
11) Microsoft allowed its foreign customers to move personal data stored on servers outside of the US in January following the scandal.
 
Coming back to the NTT report, its Vice President of Product Strategy in Europe, Len Padilla, said the results show the NSA allegations have changed ICT decision-makers’ attitudes towards cloud computing and where data is stored.
He said decision makers, however, need to keep in mind the benefits that cloud can bring to business services. He also added:
“Despite the scandal and global security threat, business executives need to remember that cloud platforms do help firms become more agile, and do help foster technology innovation, even in the most risk-averse organizations”
“ICT decision-makers are working hard to find ways to retain those benefits and protect the organization against being compromised in any way. There is optimism that the industry can solve these issues through restricting data movement and encryption of data”.

Sunday, January 26, 2014

Virtualization vs. Cloud Computing (III): business differences, plus other technical ones, and conclusions

Once again, let me start by recalling that this post refers to the scope and context defined in the first post of this series (titled “Virtualization vs Cloud Computing (I): what are we going to compare?”), although at the end of this post we’ll widen it.
 
Besides, as a summary, in the two previous posts we concluded:
  • Virtualization is an enabling technology for Cloud Computing, one of the building blocks “generally” used for building Cloud Computing solutions (“generally” but not always, because nowadays Cloud is starting to use other technologies beyond pure virtualized environments to offer “Bare Metal as a Service” …)
  • The services provided by a “Cloud Computing Management Environment” and by a “Virtualization Management Environment” ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED: both the self-service characteristic and the location independence feature (in the sense of no location knowledge) are the main differences, but in some cases (depending on the platform) also massive scale-out.
 
Coming to the subject, another technological point of comparison is that a Virtualization Management Environment (at present, almost none of them) does not offer the user real-time knowledge of how long the VM has been in use or other metrics of the service (“measured service” is an essential characteristic), or perhaps the user can get that information but not in a friendly way. The main reason for that is the different business models they were “initially” conceived for:
  • Virtualization was born to take advantage of unused resources in a physical machine, solving several problems that appeared in scaled-out servers: different physical characteristics for different cluster subsets (after one or more expansions), coarse granularity in resource assignment that led to unused resources, and security issues when applications from competing companies were run on the same physical machine. However, although virtualization makes it possible to take advantage of unused resources in a secure way, in practice it allowed traditional DataCenter service providers to move (or add) from a (physical) “Hosting” business model to a “Virtual Hosting” model with lower prices, while the billing model stayed the same: in general the customer is billed in the same way as for physical hosting, a fixed monthly rate where the cost is proportional to the power (performance) of the VM and the associated resources contracted, regardless of the real usage the customer makes of the virtual machine.
  • Cloud Computing was born to “allow” a real pay-per-use model. For this reason the self-service feature is as important as the capability to turn the VM on or off whenever the customer wants, because (s)he doesn’t pay for the standstill period (a simple cost comparison is sketched right after this list). Regarding this subject, please note that the technological Cloud Computing concept only defines that the services must be metered (and that information must be continuously available to the customer), which allows the provider to bill for the real usage, but it is not mandatory to do so.
  • Of course, both business models mentioned above were the two extremes of a broad market and represent the “pure” business models, but today there are several intermediate, hybrid business models: for example, cloud-computing-based models that offer discounts if you contract for a long fixed period or that offer a lower price per hour if you pay a monthly fee (one of the Amazon options), or purely technological Virtualization Management Environments that offer a pay-per-use business model, and so on. Amazon (the great Cloud innovator) is a good example of this: for instance, “Reserved Instances” give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly charge for that instance (there are three Reserved Instance types: Light, Medium, and Heavy Utilization Reserved Instances, which let you balance the amount you pay upfront with your effective hourly price); they also offer volume discounts, “Spot Instances”, and so on.
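To illustrate the difference between the two “pure” billing models with numbers, here is a toy sketch in Python; all prices are made-up illustrative figures, not quotes from any provider:

# Toy comparison of the two "pure" billing models discussed above.
HOURS_IN_MONTH = 730

def virtual_hosting_cost(monthly_rate):
    """Fixed monthly rate: the bill ignores how much the VM is actually used."""
    return monthly_rate

def pay_per_use_cost(price_per_hour, hours_running):
    """Metered model: the customer only pays for the hours the VM was on."""
    return price_per_hour * hours_running

# A VM that only runs during office hours (~8 h x 22 days = 176 h per month):
print(virtual_hosting_cost(50.0))              # 50.0 - same bill whether used or not
print(pay_per_use_cost(0.10, 176))             # 17.6 - cheaper for intermittent workloads
print(pay_per_use_cost(0.10, HOURS_IN_MONTH))  # 73.0 - pay-per-use can cost more if always on

The point is not the figures themselves but the shape of the bill: one model charges for contracted capacity, the other for measured usage.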
 
Finally, concerning the comparison points in the initial (reduced) scope defined, new customer needs are emerging to deploy applications on physical servers, as well as on virtual servers, while keeping all the cloud model advantages (and essential characteristics): that’s the case, for example, when your application requires physical servers, or your production environment is too performance-sensitive to run in VMs. Actually, you don’t need to have a virtualized environment to be considered a cloud environment: your “virtual” instance might be a “container”, which is not virtualized but runs on bare metal (just sharing it with other containers), or it might even run directly on, and fully use, the bare metal. “Containers”, as aforesaid, are considered by some authors as a sort of virtualization; so let me present an example of the latter case: OpenStack is currently developing the new “Ironic” module, which will provide “Bare Metal as a Service”, so it will be possible to use the same API and the same pipeline to build, test, and deploy applications on both virtual and physical machines. Therefore, cloud technology is starting to use other technologies beyond pure paravirtualized environments.
 
We initially limited the scope of this comparison to “compute as a resource”, but if we slightly widen that context to include (as usual) any computing-related resources, i.e. storage and communications resources, then new differences arise (depending on the solution used for building the Cloud Management Environment and the Virtualization Management Environment):
  • Most (but not all) Virtualization Management Environments offer only compute and block storage services; they usually do not offer Object Storage as a Service. Besides, they tend to offer “Storage Virtualization” (SV, i.e. capacity is separated from specific storage hardware resources) but not “Software Defined Storage” (SDS), which differs from the former (SV) in that with SDS not only capacity but also services are separated from the storage hardware.
  • Moreover, almost none of them (Virtualization Management Environments) offers communications management as a Service. I mean not only virtual networks, but also the main communications devices provided as a service: routers, firewalls, load balancers and so on. Moreover, “Software Defined Networking” (SDN) is a technology that, as far as I know, is currently being used only in Cloud Computing Environments, where this kind of service is starting to be offered. Of course, some Virtualization Environments offer this kind of communication services, but not in a self-service way where you can self-define your internal communications layout and structure, and so on, e.g. as shown in the next picture (taken from a topology designed by a customer using TISSAT’s Cloud Platform built on OpenStack); a minimal sketch of such self-defined networking follows the picture:
TISSAT’s IaaS Cloud Platform
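As a rough idea of what “self-defining your communications layout” means in practice, here is a minimal sketch in Python against the OpenStack Networking (Neutron) API; the endpoint, token and names are hypothetical placeholders and real deployments may differ:

import requests

NEUTRON_URL = "https://cloud.example.com:9696/v2.0"   # hypothetical Networking endpoint
HEADERS = {"X-Auth-Token": "gAAAA..."}

def post(path, body):
    # Small helper: create a Neutron resource and return the parsed response.
    return requests.post(f"{NEUTRON_URL}/{path}", json=body, headers=HEADERS).json()

net = post("networks", {"network": {"name": "backend-net"}})["network"]
subnet = post("subnets", {"subnet": {"network_id": net["id"],
                                     "cidr": "10.0.10.0/24",
                                     "ip_version": 4}})["subnet"]
router = post("routers", {"router": {"name": "edge-router"}})["router"]

# Attach the subnet to the router - the customer, not the provider, decides the layout.
requests.put(f"{NEUTRON_URL}/routers/{router['id']}/add_router_interface",
             json={"subnet_id": subnet["id"]}, headers=HEADERS)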
 
 
At the end of this 3-post series, as a summary, three conclusions:
  1. The technological concepts (virtualization and cloud computing) should not be confused with the pure business models they were initially intended for: virtual hosting (a fixed monthly rate, lower than physical hosting) and pay-per-use (which some people call the Cloud Computing business model), respectively. And don’t forget that at present there are a lot of mixed business models regardless of the underlying technology.
  2. Both virtualization and cloud computing allow you to do more with the hardware you have by maximizing the utilization of your computing resources (and therefore, if you are contracting the service, you can expect lower expenses). However, although currently there is an inevitable connection between them, since the former (virtualization) is “generally” used to implement the latter (cloud), this connection could be broken soon with the arrival of new technologies and innovations, and they are not exactly the same: BOTH ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED (self-service feature, no location knowledge, massive scale-out, even metered service in some cases) and there are some technical differences between them. Additionally, depending on the user’s needs, one of them could be better than the other: a lot of customers have enough with server virtualization, and it could even be the best solution for their needs; but in other cases cloud is the best solution for the customer’s needs, and not virtualization.
  3. Although still circumscribed to IaaS (i.e. leaving aside PaaS and SaaS), when we widen the comparison scope to include (as usual) any computing-related resources (not only compute but also storage and communications resources), then new differences arise since, for example, communications-related Services (routing, firewalls, load balancing, etc.) are seldom (or never) offered as a Service in Virtualization Management Environments (in a self-service way where you can self-define your internal communications layout and structure, taking advantage of Software Defined Networking technology). Besides, another main difference is how Storage as a Service is provided: in a Virtualization Environment it tends to be reduced to Block Storage, not including Object Storage (as Cloud Environments do), and it is provided as Storage Virtualization but not as Software Defined Storage.

Note: Please, let me add that Tissat (the company I’m working for) offers all these sorts of real IaaS Cloud Services as well as more traditional DataCenter services (such as housing, hosting, virtualized hosting, DRP, and so on) based on its Data Centers Federation (which includes Walhalla, a DC certified as Tier IV by The Uptime Institute) and using different products and solutions (currently VMware, OpenStack, and so on); most of the ideas in this post series are extracted from that experience.

Thursday, January 16, 2014

Virtualization vs. Cloud Computing (II): more technological differences

First of all, it is quite important to remember that this post refers to the scope and context defined in my last post (titled “Virtualization vs Cloud Computing (I): what are we going to compare?”), where I defined the Virtualization and Cloud Computing concepts used for this comparison; those definitions could be others, of course, but then the conclusions would be others too, so due to their importance let me summarize them in the following points (if you need a more detailed explanation, please read my previous post):
  • By “Virtualization” we are going to refer ONLY to “Hardware virtualization”, i.e. the creation of a virtual machine (VM) that acts like a real computer with an operating system (quoted from Wikipedia)
  • For Cloud Computing we’ll use the clearest and most widely accepted definition, the NIST one, which says: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” and consequently, regardless of the Service Model (IaaS, PaaS or SaaS) and the Deployment Model (Private, Community, Public or Hybrid), states 5 Essential Characteristics for a Cloud Computing service:
    • On-demand self-service,
    • Broad network access,
    • Resource pooling (multi-tenant),
    • Rapid elasticity,
    • Measured service.
  • Besides, to make the comparison possible we must focus ONLY on Infrastructure as a Service (IaaS), and in this discussion we leave aside other cloud service models such as PaaS and SaaS.
  • Finally, as with the virtualization concept, to ease the comparison we ONLY speak about the “compute resource” inside the IaaS Cloud Computing concept.
 
In this context, in the last post we saw how Virtualization is an enabling technology for Cloud, one of the building blocks used for Cloud Computing. Nevertheless, let me advance that at the end of this post we will review and nuance this point because of both recently arisen customer needs and technological innovations.
 
Referring to the other essential characteristics defined by NIST for Cloud Computing: for example, “pure” Virtualization itself does not provide the customer a self-service layer, and without that layer you cannot deliver Compute as a Service; i.e. a self-service model is not an essential component of virtualization, as it is in cloud computing. The next picture (quadrant) shows, on one hand, the evolution of IT technology over the last 3 decades (roughly), starting in the top-left corner (maverick niche IT), when any company department provisioned itself any IT infrastructure it wanted (according to its own criteria), to when provisioning was controlled and management was unified but each department had its own machines (bottom-left corner, or “managed IT infrastructures”), to the last decade when IT was provisioned and managed by the IT department and shared by all or several departments (using virtualization), to the current times of Cloud Computing; and on the other hand, it also shows how self-service is one of the differentiators between Virtualization and Cloud Computing (Note: I’ve used this quadrant other times since I saw it in some publication, but I cannot remember where, so I apologize that I cannot reference it; besides, I recreated it, so something could have changed).
 
IT Evolution Cycle
Break: The above picture does not show the older times (lasting more than a decade) of mainframes, a realm where IBM was king. In such environments provisioning and management were controlled, and the infrastructures were shared, so they would be placed in the same corner as virtualization, i.e. the bottom-right corner. The transition from mainframes to the free-riders’ “maverick niche IT” was due to several factors but, in my opinion, two were most significant: on one hand, the commercial labor of IBM competitors, such as Digital, which offered universities very low-cost computers (such as Digital’s VAX and PDP series), with the result that computer science graduates wanted similar computers in their new jobs; and on the other hand, a disruptive technology, the emergence of the PC (by IBM). Both of them, among others, fostered and fueled the “selfie” spirit (please let me use this modern buzzword with a different meaning: the wish to be self-sufficient, as is natural in human beings) that boosted the gradual transition to self-service dedicated infrastructures (i.e. from the bottom-right corner to the top-left one). And a last thought about this point: certainly the rise of the Internet, as well as communications and other enterprise needs, also contributed to this transition but, in my opinion, only once the movement had already started.
 
Coming back to the self-service essential characteristic, some Virtualization Management Environments include a self-service component (but it’s not mandatory) as well as features that allow the customer to know how much usage has been made (metered service) and to have resources elastically provisioned and released (rapid elasticity). Once again, all these features are mandatory in the Cloud but optional in a “Virtualization Management Environment”, since they are not intrinsic to virtualization technology. In fact, a “Virtualization Management Environment” becomes a Cloud Computing Environment if it meets all 5 NIST essential characteristics, an evolution that, for example, VMware has been following these years … Given that in the enterprise market VMware’s (ESX hypervisor and vSphere) virtualization management environment is king, let me analyze this last subject a little deeper as a good example of this point:
  • Although I’m a supporter of Open Source and therefore of OpenStack when speaking about Cloud, it must be recognized that VMware has a powerful suite of virtualization and cloud products. Concerning this point of the discussion, right now two products must be discerned: “vCenter” and “vCloud Director”:
  • On one hand, vCenter is what manages your vSphere virtual infrastructure: hosts, virtual machines, and virtual resources (CPU, memory, storage, and network), i.e. a pure virtualization management environment.
  • On the other hand, vCloud Director (vCD) sits at a higher level in the cloud infrastructure. It’s a software solution providing the interface, automation, and management feature set that allows enterprises and service providers to supply vSphere resources as a Web-based service; i.e. it takes advantage of vCenter to orchestrate the provisioning of Cloud resources by enabling self-service access to compute infrastructure through the abstraction of virtualized resources. In other words, it abstracts the virtualized resources to enable users to gain self-service access to them through a services catalogue, i.e. it provides the self-service portal that accepts user requests and translates them into tasks in the vSphere environment via vCenter.
  • In summary, vCenter is required to administer your virtual infrastructure but it doesn’t create a cloud. The first piece required to create your cloud is vCloud Director. vCloud Director will talk to your vCenter server/servers, but certain tasks will have to be done first in vCenter, such as creating an HA/DRS cluster, configuring the distributed virtual switch, adding hosts, etc.
  • Note: By the way, now that VMware has announced that it is splitting vCloud Director into vCenter and vCloud Automation Center (a product derived from VMware’s DynamicOps acquisition), and it also seems that capabilities like multi-tenancy management and self-provisioning will be pushed into vCloud Automation Center (vCAC), while constructs like the Virtual Data Center will fall into vCenter, everyone that really wants a Cloud environment with VMware will have to buy (or migrate to) vCAC, a heavyweight piece of software, much like an IT service management product, requiring deep integration with IT business processes and an ERP-like implementation scenario, since pure vCenter will keep lacking the cloud-like self-service feature.
 
However, THERE ARE STILL MORE DIFFERENCES, because according to NIST (and it’s intrinsic to the Cloud definition) the “Resource pooling (multi-tenant)” property implies “a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter)”. However, as you know, most “Virtualization Management Platforms” let you choose on which physical machine your VM is going to run, or let you move your VM from one physical machine to another exact physical machine (chosen by the customer). In fact, some customers want (even need) such features, and that is one of the points that lets you know whether the customer really wants or needs a Virtualization Environment or a real Cloud Computing Environment (for example, if you hear your customer say that he wants to move his VM by himself from one physical machine to another, he’s not specifying a Cloud environment, but a Virtualization Management Environment). This lack of location knowledge also applies to other features such as High Availability (HA), Fault Tolerance (FT) and so on: for example, in a Cloud Management Environment you can specify at most a different “infrastructure area” (for example a different DataCentre, or something similar) for locating the stand-by VM, whereas in a pure “Virtualization Management Environment” you are able to choose the specific host (physical machine).
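A minimal sketch of a compute provisioning request illustrates this point: the consumer may name an availability zone (a coarse “infrastructure area”), but there is no field for a regular user to pick the exact physical host; that decision is left to the cloud scheduler. The endpoint, token and IDs below are hypothetical placeholders:

import requests

NOVA_URL = "https://cloud.example.com:8774/v2.1"   # hypothetical Compute endpoint
HEADERS = {"X-Auth-Token": "gAAAA..."}

body = {
    "server": {
        "name": "standby-vm",
        "imageRef": "11111111-2222-3333-4444-555555555555",   # placeholder image id
        "flavorRef": "2",                                      # placeholder flavor id
        # A coarse location hint is allowed...
        "availability_zone": "zone-dc2",
        # ...but there is no user-facing field here to choose the exact physical host.
    }
}
resp = requests.post(f"{NOVA_URL}/servers", json=body, headers=HEADERS)
print(resp.status_code, resp.json()["server"]["id"])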
 
Moreover, and associated with the previous idea of no location knowledge, Cloud is intrinsically designed for massive scale-out, i.e. without limit, and for physically distributed resources (possibly in different places), whereas many virtualization management environments are intended to manage a reasonable (maybe big, but not enormous) number of physical machines (hosts), and in most cases in the same location.
 
Finally, as advanced at the beginning of this post, new customer needs are emerging to deploy applications on physical servers, instead of virtual servers, while keeping all the cloud model advantages (and essential characteristics): that’s the case, for example, when your application requires physical servers, or your production environment is too performance-sensitive to run in VMs. Actually, you don’t need to have a virtualized environment to be considered a cloud environment: your “virtual” instance might be a “container”, which is not virtualized but runs on bare metal (just sharing it with other containers), or it might even run directly on, and fully use, the bare metal. “Containers”, as aforesaid, are considered by some authors as a sort of virtualization, so let me expose an example of the second case: OpenStack is currently developing the new “Ironic” module, which will provide “Bare Metal as a Service”, so it will be possible to use the same API and the same pipeline to build, test, and deploy applications on both virtual and physical machines. Therefore, cloud technology is starting to use other technologies beyond pure paravirtualized environments.
 
So far, as a consequence of what we’ve seen in the previous post and in the current one, we can conclude that:
  • Virtualization is an enabling technology for Cloud Computing, one of the building blocks “generally” used for building Cloud Computing solutions (“generally” but not always, because nowadays Cloud is starting to use other technologies beyond pure virtualized environments to offer “Bare Metal as a Service” …)
  • The services provided by a “Cloud Computing Management Environment” and by a “Virtualization Management Environment” ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED:
    • the self-service characteristic is mandatory in Cloud, and optional in a virtualization environment.
    • the location independence feature (in the sense of no location knowledge) is intrinsically essential in Cloud, whereas most virtualization environments let you know or operate with the location of the VM.
    • massive scale-out is also inherent to Cloud Environments, but many Virtualization Management Environments are simply not prepared to manage “enormous” quantities of machines distributed across different locations.
 
In the next post I will finish this comparison, focusing on a couple of technological differences:
  • The first one, “measured service”, in some way arises from the different business models that both Virtualization and Cloud Computing were INITIALLY intended for, and that will let me compare those business models too.
  • For the second one, we will slightly widen the comparison scope to include (as usual) any computing-related resources (not only compute but also storage and communications resources), and then we’ll analyze new differences: for example, communications-related Services (routing, firewalls, load balancing, etc.) are seldom (or never) offered as a Service in Virtualization Management Environments (in a self-service way where you can self-define your internal communications layout and structure, and so on).
 
Note: Please, let me add that Tissat (the company I’m working for) offers real IaaS Cloud Services as well as more traditional DataCenter services (such as housing, hosting, virtualized hosting, DRP, and so on) based on its Data Centers Federation (which includes Walhalla, a DC certified as Tier IV by The Uptime Institute) and using different products and solutions (currently VMware, OpenStack, and so on); most of the ideas in this post series are extracted from that experience.

Sunday, January 12, 2014

Virtualization vs Cloud Computing (I): what are we going to compare?

In this post series (comprising two more posts) my final intention is to clarify the differences between a “Cloud Computing Management Environment” and a “Virtualization Management Environment”. To achieve that, first of all I should clarify the differences between Cloud Computing and Virtualization, two technological concepts that are frequently confused or mixed up, but between which there are significant differences. Finally, another goal is to differentiate Cloud Computing as a technological concept from Cloud Computing as a business model: some people think Cloud Computing is only a business concept (I hope to show they are wrong) and others confuse the initial business model Cloud Computing was intended for with the technological concept; currently (and Amazon is the best exponent of it) there are a lot of different or mixed business models to exploit Cloud Computing Services.
 
The first question is: is the comparison possible, or are we going to compare apples with oranges? I think the comparison is possible, but within an appropriate and well-defined scope.
 
So, first we need to spend some paragraphs clarifying both concepts, because both of them (for different reasons) tend to be interpreted in different ways by different people. First of all, let me say that I don’t want to claim that my definitions are the correct ones (besides, they are not mine; I chose the most widely accepted ones currently), but the comparison will be based on these definitions, and no others, in order to be able to focus the points.
 
On one hand, “Virtualization” is an abstraction process; as an IT technology concept it arose in the 60s, according to Wikipedia, as a method of logically dividing mainframe resources between different applications. However, in my opinion, its diffusion and the source of its actual meaning are due to Andrew S. Tanenbaum, author of MINIX, a free Unix-like operating system for teaching purposes, and also the author of several very famous and well-known books such as “Structured Computer Organization” (first edited in 1976), “Computer Networks” (first edited in 1981), “Operating Systems: Design and Implementation” (first edited in 1987) and “Distributed Operating Systems” (first edited in 1995); some of them, evolved and updated, are still used in universities around the world (for example, the latest editions of some of them appeared in 2010). He was also famous for his debate with Linus Torvalds regarding the kernel design of Linux (and Torvalds also recognized that the “Operating Systems: Design and Implementation” book and the MINIX O.S. were the inspiration for the Linux kernel; well, by the way, as you have probably noticed, I’m biased on this subject because I like his books a lot, and I used them a lot when I was a university teacher). Coming back to the point, the last US edition of “Structured Computer Organization” was in 2006, but the first one was in 1976, where he already introduced the concept of Operating System virtualization, a concept that he spread throughout all his books in the different contexts treated.
 
Currently, in the IT area, “virtualization” refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources. Among all of these concepts, in this post we are going to refer ONLY to “Hardware virtualization”, i.e. the creation of a virtual machine (VM) that acts like a real computer with an operating system. Software executed on these virtual machines (VMs) is separated from the underlying hardware resources. For example, a computer that is running Linux may host a virtual machine that looks like a computer with the Windows operating system; and then Windows-based software can be run on the virtual machine (excerpted from Wikipedia).
 
On the other hand, “Cloud Computing” is a concept that arose from several previous concepts. Probably (and I share the opinion of more experienced people) it is a mixture of two previous ideas: the “Utility Computing” paradigm (a packaging of computing resources, such as computation, storage and services, as a metered service provisioned on demand, as Utility companies do) and “Grid Computing” (a collection of distributed computer resources collaborating to reach a common goal; a well-known example was the SETI program). Currently, as everybody knows, Cloud is also a hyped concept that is misused by a lot of companies that claim to offer (fake) Cloud Services, but there are also plenty of real Cloud Service Providers. Besides, I think Cloud Computing is an open concept that could be redefined in coming years depending on the way customers (companies, organizations or persons) use its services and demand new ones, providers imagine and develop new services and, also, technical advances enable new ideas or services. But currently there are some good and clear definitions and, probably, the most used and accepted is the NIST one, which says: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” and consequently, regardless of the Service Model (IaaS, PaaS or SaaS) and the Deployment Model (Private, Community, Public or Hybrid), states 5 Essential Characteristics for any Cloud Computing service, which I copy below (excerpted from NIST’s Cloud definition) because they are worth remembering:
  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling (multi-tenant). The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
 
Besides, to make the comparison possible we must focus ONLY on Infrastructure as a Service (IaaS), which NIST defines as follows: “The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)”; in this discussion we leave aside the other service models, PaaS and SaaS. Moreover, as in the virtualization case, to ease the comparison we ONLY speak about the “compute resource” inside the IaaS Cloud Computing concept.

Inside this scope and context, virtualization is one of the technologies used to build a Cloud, mainly to implement the “resource pooling (multi-tenant)” characteristic (following the NIST definition). It is currently the most important technology enabling that goal, but not the only one, since others could be used, for example containers (although some people consider containers a sort of virtualization) or grid technologies (as in the SETI program). Besides, other developments or software are needed to provide the remaining features required to be a real Cloud (per the NIST definition). Some authors consider “Orchestration” as what allows computing to be consumed as a utility and what separates cloud computing from virtualization: orchestration is the combination of tools, processes and architecture that enable virtualization to be delivered as a service (quoted from this link). This architecture allows end-users to self-provision their own servers, applications and other resources. Virtualization itself allows companies to fully maximize the computing resources at their disposal, but it still requires a system administrator to provision the virtual machine (VM) for the end-user. In other words, virtualization is an enabling technology for Cloud, one of the building blocks used for Cloud Computing.
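As a contrast with the hypervisor-level sketch shown earlier, here is a similarly minimal and hypothetical sketch of what self-service provisioning looks like from the consumer’s side of an IaaS cloud, using the openstacksdk Python library; the cloud name and the image, flavor and network names are placeholders that would come from the provider’s catalogue, not from any of the sources quoted above:

```python
# Minimal sketch: a tenant self-provisioning a server through the cloud API.
# Assumes an OpenStack cloud whose credentials are stored in clouds.yaml;
# "mycloud" and the image/flavor/network names are hypothetical examples.
import openstack

conn = openstack.connect(cloud="mycloud")          # authenticate as an ordinary tenant

image = conn.compute.find_image("ubuntu-image")    # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")      # hypothetical flavor name
network = conn.network.find_network("tenant-net")  # hypothetical tenant network

server = conn.compute.create_server(
    name="self-service-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)      # block until the VM is ACTIVE
print(server.name, server.status)
```

The key difference is that no administrator intervenes: the consumer picks resources from a pooled, metered catalogue and the orchestration layer decides where in the provider’s infrastructure the VM actually lands, which is precisely the on-demand, self-service and location-independent behaviour described in the NIST characteristics.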
 
However, in the next two posts of this series we’ll revisit this last point, both because of customers’ newly arisen needs and because of innovations and technological advances; i.e. the previous paragraph will be reviewed, since cloud technology is currently starting to use other technologies beyond pure virtualized environments (such as containers or “bare metal as a Service”).
 
Besides, and more importantly, we will also see that the differences between Cloud and Virtualization go beyond the well-known statement above (“Virtualization is an enabling technology for Cloud, one of the building blocks used for Cloud Computing”). We will analyze more differences: the self-service feature, location independence (in the sense of no knowledge of location), massive scale-out, even metered service in some cases, and so on, and we will conclude that BOTH ARE QUITE DIFFERENT IN HOW THE SERVICE IS PROVIDED (to be shown next week).
 
And let me advance that we’ll also differentiate between the two pure business models they were “initially” intended for: virtual hosting (a fixed monthly rate, lower than the physical hosting rate) and pay-per-use (which some people call the Cloud Computing business model, even confusing the Cloud technology with the Cloud business model). Some people confuse the technological concepts with the business models; besides, it should be taken into account that at present there are a lot of mixed or hybrid business models regardless of the underlying technology, which increases the confusion too.
 
Moreover, coming back to the technological arena, when we widen the comparison scope slightly to include (as usual) any computing-related resources (not only compute, but also storage and communications resources), new differences will arise, as we’ll analyze in the third (and last) post of this series: for example, communications-related services (routing, firewalls, load balancing, etc.) are seldom (or never) offered as a Service in virtualized management environments (in a self-service way, where you can define your own internal communications layout and structure, and so on).

domingo, 15 de diciembre de 2013

OpenStack keeps gaining momentum in Cloud market, as evidenced by Oracle support, in spite of Gartner opinion

A few weeks ago, Gartner analyst Alessandro Perilli said the project has a long way to go before it’s truly an enterprise-grade platform. In fact, in a blog post he says that “despite marketing efforts by vendors and favorable press, enterprise adoption remains in the very earliest stages” … The main reasons for that, in his opinion, are:
  • Lack of clarity about what OpenStack does.
  • Lack of transparency about the business model.
  • Lack of differentiation.
  • Lack of pragmatism.

OpenStack backers rebuffed such claims, and I must admit that I’m biased because I work for a European company (Tissat, based in Spain, with several data centres, one of them -Walhalla- certified as Tier IV by the Uptime Institute) that offers IaaS services using OpenStack. But I also have to recognize that OpenStack is a solution that is continuously evolving and growing, and therefore I agree with some of Mr. Perilli’s statements, but I disagree with his main conclusion:

Maybe he’s right and the fact that big companies are contributing to its code, as well as supporting and using it to deliver services, is unusual, but let me mention some of the companies that are supporting and using it: Rackspace and NASA (maybe they aren’t the biggest, but they were the first ones), IBM (IBM’s open cloud architecture), HP (HP Cloud Services), Cisco (WebEx Service); they don’t seem like small players, do they? (I beg your pardon for the irony.) Besides, relatively smaller companies are contributing to, supporting and selling services on OpenStack, such as the traditional Linux distro providers: Red Hat, Novell (SUSE), Canonical (Ubuntu). Finally, other big players using OpenStack are PayPal-eBay, Yahoo, CERN (the European Organization for Nuclear Research), Comcast, MercadoLibre, Inc. (e-commerce services), the San Diego Supercomputer Center, and so on, which aren’t small players either …

Can you think of players in the IT provider market as big as the first ones mentioned? Sure: Microsoft, Google, Oracle … Well, surprise: last week Oracle announced that they embrace OpenStack. Yes, although Oracle acquired Nimbula in March (and maybe Nimbula’s shift from its own proprietary private cloud approach to becoming an OpenStack-compatible software supplier was the first sign of the change), they are going to integrate OpenStack with their technologies: “Oracle Sponsors OpenStack Foundation; Offers Customers Ability To Use OpenStack To Manage Oracle Cloud Products and Services”. Oracle’s announcement said that:
  • Oracle Linux will include integrated OpenStack deployment capabilities.
  • Solaris, too, will get OpenStack deployment integration.
  • Oracle Compute Cloud and Oracle Storage Cloud services will be integrated with OpenStack.
  • Likewise, Oracle ZS3 Series network attached storage, Axiom Storage Systems, and StorageTek Tape Systems will all get integrated.
  • Oracle Exalogic Elastic Cloud hardware for running applications will get its own OpenStack integration as well.
  • And so on.
i.e. Oracle speaks about significant new support for OpenStack in an extremely ambitious manner, pretty much saying that it will support OpenStack as a management framework across an expansive list of Oracle products. Evidently, Oracle’s move is great support for OpenStack (and for my thesis too, and probably another point against Mr. Perilli’s opinion) …

However, to be honest, let me doubt (for the moment) Oracle’s ultimate motivations and objectives: I have the impression that Oracle is simply yielding to market pressure, adjusting to the sign of the times, but is not committed to what OpenStack means: a collaborative and inclusive community. On one hand, as I stated in my “Cloud Movements (2nd part): Oracle’s fight against itself (and the OpenStack role)” post, Oracle is fighting against itself because its traditional and profitable business model is challenged by the Cloud model, and it has been delaying its adoption as much as possible (as IBM did when its mainframes ran mission-critical applications on legacy databases and a new -by then- generation of infrastructure vendors -DEC, HP, Sun, Microsoft and Oracle- challenged and disrupted the old IBM model): it was conflicted about selling the lower-priced, lower-margin servers needed to run them (even Oracle CEO Larry Ellison used to disdain Cloud Computing, e.g. he called it “nonsense” in 2009). On the other hand, the recent Oracle announcement doesn’t necessarily imply a change in this matter.

Besides, the Oracle move raises suspicion, even disbelief, not only in me but in other people. Let me quote some paragraphs of Randy Bias’ (co-founder and CEO of cloud software supplier Cloudscaling) post titled “Oracle Supports OpenStack: Lip Service Or Real Commitment?”. Randy’s position could be summarized in his words: “Oracle is the epitome of a traditional enterprise vendor and to have it announce this level of support for OpenStack is astonishing”. Randy also wonders: “Can Oracle engage positively with the open-source meritocracy that OpenStack represents? Admittedly, at first blush it’s hard to be positive, given Oracle’s walled-garden culture.” And to back his answer, Randy reviews some Oracle facts:
  • Oracle essentially ended OpenSolaris as an open-source project, leaving third-party derivatives of OpenSolaris (such as those promulgated by Joyent and Nexenta) out in the cold, having to fork OpenSolaris to Illumos.
  • Similarly, the open-source community’s lack of trust can be seen ultimately in the forking of MySQL into MariaDB over concerns about Oracle’s support and direction of the MySQL project. Google moved to MariaDB, and all of the major Linux distributions are switching to it as well.


However, Randy finally concludes: “It’s hard not to have a certain amount of pessimism about Oracle’s announcement. However, I’m hopeful that this signals an understanding of the market realities and that its intentions are in the right place. We will know fairly soon how serious it is based on code contributions to OpenStack, which can be tracked at Stackalytics. (So far, there are zero commits from Oracle and only two from Nimbula, Oracle’s recent cloud software acquisition.) Personally, I’m happy to see Oracle join the party. It further validates the level of interest in OpenStack from the enterprise and reinforces that we’re all building a platform for the future”.

And Randy’s last words bring me back to my initial point: I really think OpenStack is already a mature enough platform to do business with (in all the ways other IT products or solutions are), as the giants and other big companies of the IT area are showing (IBM, HP, Cisco, Oracle, Rackspace, Yahoo, PayPal, Comcast, Red Hat, Novell, Canonical, etc.).

Finally, let me end this post with some partial pictures extracted from an infographic produced by OpenStack (you can get the whole infographic here):

Current OpenStack deployments span 56 countries:
Current OpenStack Deployments

Covering any-size organizations and a wide range of industry sectors:
Current OpenStack Organizations Size        Current OpenStack Industry Sectors

Besides, deployments of every type are currently in use:
Current OpenStack Type of Deployments
And currently the 10 types of applications most deployed on OpenStack are:
Current OpenStack Type of Workloads

miércoles, 20 de noviembre de 2013

Analysis of several Cloud Adoption Studies: fostered by Customer Demand (and other drivers)

A 451 Research study reports that the market for cloud services is growing rapidly (as shown in an Interxion study), predicting a CAGR (Compound Annual Growth Rate) of 24% from 2011 to 2015. Besides, that report compares the cloud market with the traditional hosting market. (Note: the hosting market consists of dedicated hosting and managed hosting; to make a comparison between cloud and the hosting market possible, the study chose to leave SaaS out of the cloud market and reduce cloud computing to IaaS and PaaS.) When comparing cloud computing to the traditional hosting market, as shown in the next figure, we see that although the cloud computing share is growing rapidly (CAGR of 42%), with a total value of $4.8bn in 2012, it is still a relatively small share (18%) compared to the hosting market:
 
Hosting vs Cloud-by-Interxion
 
The above figure shows how the provision of infrastructure services is still dominated by hosting providers offering traditional hosting services, but the cloud growth figures suggest that cloud-based technologies will start to overtake the traditional market.
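By the way, for readers who want to sanity-check growth figures like the CAGRs quoted above, this is all the arithmetic involved; the start and end revenue values in the example are hypothetical placeholders, not data from the 451 Research or Interxion reports:

```python
# Minimal sketch: computing a Compound Annual Growth Rate (CAGR).
# The revenue figures below are hypothetical, not taken from any report.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Return the compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Example: a market growing from a hypothetical $2.0bn to $4.8bn over 3 years.
print(f"CAGR: {cagr(2.0, 4.8, 3):.1%}")  # -> about 33.9% per year
```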
 
Besides, IT spending is shifting from traditional IT services to Cloud Services:
 
Traditional IT Services spends vs Cloud Services spends-by-Aerohive Networks
 
 
Besides, according to a study developed by PC Connection, 69% of organizations are considering implementing Cloud or already have some application in the cloud:
 
Cloud Implementation-by-PC Connection
 
And 50% of organizations have assessed their environments to determine whether they are suitable for Cloud:
 
Assessment of your IT environment for Cloud-by-PC Connection
 
 
Therefore, let’s go deeper into the Cloud market and review its current drivers and barriers. Given that over the last year I’ve dealt, directly or indirectly, with different sorts of Cloud barriers in several posts (“Cloud Computing Countries Ranking, or the Cloud Confusion even among market analyses: BSA vs Gartner vs IDC”, “Interoperability: a key feature to ask your Cloud Service Provider for”, “An infographic about Security and other Cloud Barriers”, “Cloud Computing and the EU Digital Agenda: A step in the right way, but too short”, and so forth), let me focus here on the drivers. We’ll do so by extracting data (as done in the prologue of this post) from several cloud market reports that in some cases show different, even contradictory, figures, but agree on the main point: the relationship and relative importance of the main drivers.
 
According to the third annual “Data Center Industry Survey” (2013) by The Uptime Institute, there are many factors driving public cloud adoption, ranging from speed of deployment and scalability to potential cost savings. But the breakout driver for cloud computing adoption in 2013 is end-user or customer demand: in 2012, only 13% of respondents listed customer demand as a top driver, versus 43% in 2013, making it the leading driver over all other factors driving public cloud deployments:
 
Top cloud drivers-The Uptime Institute
 
 
Similar results are obtained by the study conducted by Orange, although the figures (and consequently the ranking) are different:
 
Top reasons for Implementing Cloud-by-Orange
 
 
Referring to this subject, a study conducted by Forrester Research (on behalf of IBM) shows that the top 2 types of applications that companies are interested in moving to the Cloud are clearly driven by the user: both the external customer and the internal employees:
 
Top 2 Types of applications to host on Cloud-by-Forrester
  
 
Consequently, all of them agree on the importance of cost reduction, as well as on the growing importance of focusing on customer demands, as key drivers.

 
Besides, curiously, there are gradual changes in the way enterprises procure technology. With or without the blessing of IT, departmental and line-of-business managers are increasingly going directly to providers for SaaS apps or IaaS offerings. In fact, Gartner forecasts that the CFO (Chief Financial Officer) will spend as much on technology as the CIO (Chief Information Officer) by 2017. A lot of that investment will go to customer-facing “systems of engagement”, mainly related to e-commerce, which needs cloud infrastructure to scale properly and meet the highly variable demands of public Web and mobile apps.
 
 
Finally, directly related to the last sentence, according to an Aerohive Networks infographic the 3 main advantages of using Cloud Services are:
  1. Instant Scalability
  2. Fast Deployment
  3. Automated backup & updates
top advantages of Cloud Services-by-Aerohive Networks
 
 
And clearly the first two of them (and the third too) are aligned with the increasing user demand, whose forecast is based on at least the following points:
  • 90% of organizations will support corporate applications on personal devices by the end of 2014 (Gartner, “Plan Now for the Hyperconverged Enterprise Network”, May 2012),
  • 1.04 billion smartphones and tablets will be shipped in 2014, overtaking for the first time the number of ordinary mobile phones (IDC, Worldwide Quarterly Mobile Phone Tracker),
  • and Morgan Stanley estimates that the mobile web will be bigger than the desktop internet by the end of 2015.
