Wednesday, December 3, 2014

Is the Operating System part of the IaaS (Infrastructure as a Service) in cloud computing?

I’ve recently been debating whether the Operating System (in a real Cloud environment) is part of the IaaS or not, and therefore whether its control (management, monitoring and so on) is the customer’s responsibility or the provider’s.
 
On the one hand, according to the NIST definition of Cloud Computing (the most widely accepted, “The NIST Definition of Cloud Computing”, “Special Publication 800-145”) and quoting from it: “IaaS: The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications …”. So, strictly speaking, the Operating System is not part of IaaS, as shown in the next picture emphasizing the control scope of the consumer and the provider in an IaaS service:
 
IaaS-control scope of the consumer and provider

  
On the other hand, in practice some Cloud Providers, in their IaaS provisioning dashboards, let you choose the operating system (“image”) to deploy in the Virtual Machine (VM) you provision. So they are responsible for guaranteeing that the Operating System “image” is good; in some way they take a partial responsibility at the Operating System level (crossing the IaaS border), but only for the first deployment of the operating system in the VM. After that, the customer gets control of the Operating System and is fully responsible for it and for any software built or installed on it. This other picture shows this fuzzy border for the initial step in the VM provisioning responsibilities:
 
IaaS-fuzzy border for the initial step in the VM provisioning responsibilities
 
Note, of course, that other (most) IaaS cloud providers let you upload your own Operating System images, so they are responsible for providing you the VM on the hypervisor (or container) chosen by them, but nothing else, matching the purist definition of IaaS. Note: this is the case of Tissat; we offer a wide catalogue of operating system images, but our Cloud Platform (called Nefeles, and based on OpenStack) lets the customer upload their own images too.
 
Besides the first picture, the next one shows the PaaS and SaaS control scope of consumer and provider according to the NIST definition:
 
PaaS & SaaS-control scope of the consumer and provider
 
 
Finally, the border between IaaS, PaaS and SaaS can be summarized in the following picture:
 
IaaS, PaaS & SaaS-control scope of the consumer and provider-1
 
 
Or in a simplified way in this one:
 
IaaS, PaaS & SaaS-control scope of the consumer and provider-2
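To restate the control-scope pictures in another notation, here is a minimal Python sketch (purely illustrative: the layer names and provider/consumer assignments simply mirror the figures above and, as discussed, real offerings blur the operating-system row):

    # Illustrative responsibility matrix for the NIST service models.
    # Layer names and assignments mirror the control-scope figures above.
    STACK = ["facilities", "network", "storage", "servers", "virtualization",
             "operating_system", "middleware_runtime", "applications", "data"]

    PROVIDER_LAYERS = {
        "IaaS": {"facilities", "network", "storage", "servers", "virtualization"},
        "PaaS": {"facilities", "network", "storage", "servers", "virtualization",
                 "operating_system", "middleware_runtime"},
        # NIST: the SaaS consumer manages almost nothing below the application.
        "SaaS": set(STACK),
    }

    def who_controls(model, layer):
        """Return which party controls a given layer under a service model."""
        return "provider" if layer in PROVIDER_LAYERS[model] else "consumer"

    print(who_controls("IaaS", "operating_system"))  # -> consumer
    print(who_controls("PaaS", "operating_system"))  # -> provider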

Tuesday, November 18, 2014

CloudSpaces project gets OCEAN’s Open Cloud Quality label

CloudSpaces logo
 
I’m proud to announce that the R&D project CloudSpaces, partially funded by the European Union’s FP7 Programme, has obtained the Quality Check label of the OCEAN project.
 
OCEAN Quality Check logo
   
As they say, only 20 out of 74 open cloud projects in the Open Cloud Directory (OCD) received an OCEAN Open Cloud label:
-  7 Open Cloud projects obtained the Quality Checked by OCEAN label 2014.
- 13 Open Cloud projects obtained the Reviewed by OCEAN label 2014.
 
The CloudSpaces project is being developed by a consortium made up of 3 universities: “Rovira i Virgili” (Spain), Eurecom (France) and Ecole Polytechnique Fédérale de Lausanne (Switzerland), and 3 companies: EyeOs, NEC and, of course, TISSAT.
 
Besides, it should be remembered that CloudSpaces is the project within which StackSync (a personal cloud software) has been developed, which recently won 3 “Software Libre 2014” free software awards (see my post of 7 November 2014).
 
Please let me partially quote their e-mail with more details about OCEAN’s Open Cloud Quality label.
 
- - - - - - - - - - - - - - -

Dear Open Cloud Supporter,

I’m glad to inform you that your Cloud Project has been evaluated by the OCEAN project team (www.ocean-project.eu) and received the Quality Checked by OCEAN label.
 
The OCEAN Open Cloud labels recognize innovative assets, new concepts, architecture documentation and/or re-usable open source cloud components described in the Open Cloud Directory (OCD).
 
The Open Cloud Directory 2014 brochure contains a short description of your project. Included links and QR codes give access to project details such as technologies, licenses, classification, and quality reports about submitted open source codes.
 
Your OCEAN Open Cloud label offers several dissemination opportunities:
1- Your project is listed in the Open Cloud Directory Brochure – to be distributed at OpenStack Summit Paris and upcoming cloud events.
2- You can use the attached OCEAN Open Cloud label and place it on your website and documentation with a direct link to the Open Cloud Directory: http://www.ocdirectory.org/
3- Do not hesitate to mention your OCEAN Open Cloud label on social networks, in your dissemination deliverables and Press Releases.
 
Only 20 over 74 open cloud projects in the OCD received an OCEAN Open Cloud label:
-  7 Open Cloud projects obtained the Quality Checked by OCEAN label 2014
- 13 Open Cloud projects obtained the Reviewed by OCEAN label 2014

On behalf of the entire OCEAN project team, congratulations to your consortium!

Friday, November 7, 2014

StackSync has won 3 “Software Libre 2014” awards


StackSync logo
CloudSpaces logo
I’m proud to announce that StackSync (http://stacksync.org/), an open-source scalable software for personal clouds built on OpenStack, developed jointly by the “Rovira i Virgili” University (Tarragona, Spain) and the company Tissat within the CloudSpaces project (partially funded by the European Commission under the FP7 R&D Programme), has won 3 “Software Libre” (free software) awards in the 2014 call (its 6th edition). The “Software Libre” awards are an initiative of “PortalProgramas”.
StackSync won in all 3 categories in which it competed:
- ESENCIAL PARA EMPRESAS (essential for companies)
- ESENCIAL PARA LA TECNOLOGÍA (essential for technology)
- MAYOR POTENCIAL DE CRECIMIENTO (greatest growth potential)
In those categories it competed against other famous software such as LibreOffice, Ubuntu, NetBeans, Gecos, QVD, and so on.
More details here, or by clicking on the word “GANADOR” in the red label of the following award logos:
Ganador como Esencial para empresas en los Premios PortalProgramas al mejor software libre 2014
Ganador como Esencial para la tecnología en los Premios PortalProgramas al mejor software libre 2014
Ganador como Mayor potencial de crecimiento en los Premios PortalProgramas al mejor software libre 2014



Tuesday, October 28, 2014

New “Cloud Security Services” by SVTCloud & Tissat

Tissat, a company specializing in the outsourcing of mission-critical services, has signed a strategic agreement with SVT Cloud Services, a provider of Internet solutions for the business world, to jointly offer Cloud Security services (a Global Cybersecurity Operations Centre) to companies.
 
Cloud Security is located in Walhalla, one of the most advanced data centres in Europe in terms of architecture, energy efficiency, security and quality, which allows it to offer customers the confidence of a Tier IV centre certified by the Uptime Institute. It is also backed by certifications in Security (ISO 27001), Service Management (ISO 20000), Quality Management (ISO 9001), Environment (ISO 14001), Energy Efficiency (ISO 50001) and Energy Sustainability (EA:0044), among others, as well as compliance with the PCI-DSS security requirements for hosting payment environments.
 
Today, companies face a large number of threats (viruses, Trojans, botnets, denial-of-service attacks, information leaks…) which, broadly speaking, can be classified as: human and organizational threats, threats to the organization’s assets, physical and environmental threats, threats linked to human resources, logical security threats and threats to business continuity.
 
In the words of Nuria Lago, Tissat’s Director of Security and Quality, “it is very important to be aware of the risks companies face and to be prepared to mitigate them as far as possible, and to fight them if they materialize, with Preventive Security being of vital importance”.
 
Cloud Security has staff specialized in security services and is able to provide managed perimeter security for companies, an incident response centre for private companies (CERT), on-demand security audits (Audit as a Service), vulnerability detection and early warning systems, online training and advisory services, among other services available in its Security Services Catalogue, all of them also available in cloud mode.
 
The key to excellence in mission-critical services is continuous improvement, having cutting-edge security technology and having a team of professionals specialized in security (CISSP, ITIL, CISA, CISM, ISO auditors, CEH…), something that is now possible thanks to the strategic alliance between Tissat and SVT Cloud Services.
 
Cloud Security has all the safeguards needed to provide its security services in an “as a Service” mode, including, as added value, patented technology, activity reports, dashboards, etc.

Monday, October 20, 2014

“Juno” release of OpenStack has just been delivered

This post is just a reminder that last Friday (October 17th) the new version of OpenStack, named Juno, was released.
 
As Stefano Maffulli says in his e-mail to the OpenStack community, IT IS THE RESULT OF THE EFFORT OF 1,419 PEOPLE from 133 organizations who contributed to its development. OpenStack Juno is the tenth release of the open source software for building public, private and hybrid clouds, and it has 342 new features to support software development, big data analysis and application infrastructure at scale.
 
Let me emphasize that in this new version Sahara is completely integrated (it was in incubation in the previous version). Sahara is the Data Processing module for Big Data support: its capabilities automate the provisioning and management of big data clusters using Hadoop and Spark. Big data analytics are a priority for many organizations and a popular use case for OpenStack, and this service lets OpenStack users provision needed resources more quickly.
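To give a feel for what this self-service provisioning looks like from the client side, here is a minimal Python sketch against Keystone v2 and the Sahara REST API (the URLs, credentials, tenant and template/image ids are placeholders, and payload fields may vary between deployments; treat it as a hedged sketch, not a reference implementation):

    # Minimal sketch: authenticate against Keystone v2, then ask Sahara to
    # spawn a Hadoop cluster from a pre-defined cluster template.
    # All endpoint URLs, credentials and UUIDs below are illustrative.
    import requests

    KEYSTONE = "http://controller:5000/v2.0"
    SAHARA = "http://controller:8386/v1.1"

    def get_token(user, password, tenant):
        """Return (token_id, tenant_id) from a Keystone v2 password auth."""
        body = {"auth": {"tenantName": tenant,
                         "passwordCredentials": {"username": user,
                                                 "password": password}}}
        r = requests.post(KEYSTONE + "/tokens", json=body)
        r.raise_for_status()
        access = r.json()["access"]
        return access["token"]["id"], access["token"]["tenant"]["id"]

    def launch_cluster(token, tenant_id, name, template_id, image_id):
        """Request a new Hadoop cluster from an existing cluster template."""
        body = {"name": name,
                "plugin_name": "vanilla",      # the reference Hadoop plugin
                "hadoop_version": "2.4.1",
                "cluster_template_id": template_id,
                "default_image_id": image_id}
        r = requests.post("%s/%s/clusters" % (SAHARA, tenant_id),
                          json=body, headers={"X-Auth-Token": token})
        r.raise_for_status()
        return r.json()["cluster"]["id"]   # response shape per Sahara v1.1

    if __name__ == "__main__":
        token, tenant_id = get_token("demo", "secret", "demo")
        cluster = launch_cluster(token, tenant_id, "juno-hadoop",
                                 "CLUSTER-TEMPLATE-UUID", "IMAGE-UUID")
        print("Cluster requested:", cluster)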
 
Another significant advance is that the foundation for Network Functions Virtualization (NFV) has been consolidated in Juno, providing improved agility and efficiency in telco and service provider data centers.
 
Let me copy and mix from the Juno website and the official press release to summarize the main features (module by module):
  • Compute (Nova). Operational updates to Compute include improvements for rescue mode that enable booting from alternate images with the attachment of all local disks. Also, per-network settings are now allowed by improved nova-network code; scheduling updates to support scheduling services and extensibility; and internationalization updates. Key drivers were added such as bare metal as a service (Ironic) and Docker support through StackForge. Additional improvements were made to support scheduling and live upgrades.
  • Object Storage (Swift). Object Storage hit a major milestone this release cycle with the rollout of storage policies. Storage policies give users more control over cost and performance in terms of how they want to replicate and access data across different backends and geographical regions (a short storage-policy sketch follows this list). Other new features include updated support for the Identity project (Keystone) and the rollout of the account-to-account copy feature. Additional work on erasure coding within object storage continues and is expected sometime during the Kilo release cycle.
  • Block Storage (Cinder). Block Storage added ten new storage backends this release and improved testing on third-party storage systems. Cinder v2 API integration into Nova was also completed this cycle. The block storage project continues to mature each cycle building out core functionality with a consistent contributor base.
  • Networking (Neutron). Networking features support for IPv6 and better third-party driver testing to ensure consistency and reliability across network implementations. The release enables plug-ins for the back-end implementation of the OpenStack Networking API and blazes an initial path for migration from nova-network to Neutron. Supporting Layer 3 High Availability, the networking layer now allows a distributed operational mode.
  • Dashboard (Horizon). Dashboard rolled out the ability to deploy Apache Hadoop clusters in seconds, giving users the ability to rapidly scale data sets based on a set of custom parameters. Additional improvements include extending the RBAC system to support OpenStack projects Compute, Networking, and Orchestration.
  • Identity Service (Keystone). Federated authentication improvements allow users to access private and public OpenStack clouds with the same credentials. Keystone can be configured to use multiple identity backends, and integration with LDAP is much easier.
  • Orchestration (Heat). In Juno, it is easier to roll back a failed deployment and ensure thorough cleanup. Also, administrators can delegate resource creation privileges to non-administrative users. Other improvements included implementation of new resource types and improved scalability.
  • Telemetry (Ceilometer). Telemetry reported increases in performance this cycle as well as efficiency improvements including metering of some types of networking services such as load balancers, firewalls and VPNs as a service.
  • Database Service (Trove). The database service went through its second release cycle in Juno delivering new options for MySQL replication, Mongo clustering, Postgres, and Couchbase. A new capability included in Juno allows users to manage relational database services in an OpenStack environment.
  • Image Service (Glance). The Image Service introduced artifacts as a broader definition for images during Juno. Other key new features included asynchronous processing, a Metadata Definitions Catalog and restricted policies for downloading images.
  • Data Processing (Sahara). The new data processing capability automates provisioning and management of big data clusters using Hadoop and Spark. Big data analytics are a priority for many organizations and a popular use case for OpenStack, and this service lets OpenStack users provision needed resources more quickly.
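As a small taste of the Swift storage policies mentioned in the Object Storage item above, a Python sketch: assuming an operator has already defined a policy named "gold" in swift.conf and built its object ring (the account URL and token below are placeholders), pinning a container to a policy is a single header at creation time:

    # Minimal sketch: create a Swift container bound to a storage policy.
    # The account URL, token and policy name ("gold") are illustrative;
    # X-Storage-Policy is how Swift selects a policy at container creation.
    import requests

    SWIFT = "http://proxy:8080/v1/AUTH_demo"   # account URL (placeholder)
    TOKEN = "AUTH_tk-placeholder"              # normally obtained from Keystone

    r = requests.put(SWIFT + "/gold-container",
                     headers={"X-Auth-Token": TOKEN,
                              "X-Storage-Policy": "gold"})
    r.raise_for_status()

    # Read the policy back from the container metadata.
    r = requests.head(SWIFT + "/gold-container",
                      headers={"X-Auth-Token": TOKEN})
    print(r.headers.get("X-Storage-Policy"))   # -> gold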
 
At Tissat we’ve been testing the latest beta versions and they look great, and we are starting to plan the migration of our LIVE platform.

Tuesday, August 26, 2014

Yesterday VMware announced its own OpenStack distro

Due to the hard work of the last weeks (even months) and the vacation period that followed, I’ve not been active on this blog; however, the following news has made me break the silence:
 
Yesterday VMware announced its own OpenStack distro.

Let me quote VMware’s original announcement (underlining, by my own criteria, some of the ideas):
 
Today (Aug 25, 2014) at VMworld 2014, VMware announced the following: VMware Integrated OpenStack
 
– VMware Integrated OpenStack is a new solution that will enable IT organizations to quickly and cost-effectively deliver developer-friendly OpenStack APIs and tools on top of their existing VMware infrastructure. The VMware Integrated OpenStack distribution will leverage VMware’s proven technologies for compute, network, storage and management to provide enterprise-class infrastructure that reduces CAPEX, operational expense (OPEX) and total cost of ownership for production-grade OpenStack deployments.
 
With the VMware Integrated OpenStack distribution, customers can quickly stand up a complete OpenStack cloud to provide API-driven infrastructure for internal developers, and to repatriate workloads from unmanageable and insecure public clouds. IT can manage and troubleshoot an OpenStack cloud with the same familiar VMware tools they already use every day, providing significant operational cost savings and faster time-to-value.
 
Read more about the forthcoming VMware Integrated OpenStack solution here.

Of course, VMware’s Integrated OpenStack doesn’t necessarily mean that VMware is pushing customers to use  OpenStack, as John Gilmartin, VP and GM of VMware’s software defined data center suite business unit, explained in an interview.

Tuesday, July 1, 2014

Tissat awarded as one of the best “EU Code of Conduct for Data Centres” practitioners by the European Commission


I’m proud to announce that last month (May 28th) Tissat received the European Commission’s annual award as one of the best practitioners of the “EU Code of Conduct for Data Centres”, for its data centre “Walhalla” in Castellón, Spain.
 
 
This award is the result of the Research & Development activities and projects carried out by Tissat in the DC energy-efficiency arena: from some partially funded by Spanish government agencies (“Green DataCenter”, “CPD Verde” or “RealCloud”) to others partially funded by the European Commission (such as “CloudSpaces”).
 
Picture of the European Commission Award

 

Thursday, May 8, 2014

Gathering R&D results: Inauguration of the NEW Emergency Management System (112) of the Region of Murcia, based on TISSAT’s ECHO platform


Yesterday the new Emergency Management System (the 112 phone service) of the Region of Murcia (112-RM) was officially inaugurated. It uses the ECHO platform developed by Tissat and is the result of the last years’ R&D activities on this subject. It is worth highlighting that ECHO costs a quarter of the price of other similar platforms.
 

Some links to this news can be found in different newspapers and media (in Spanish):


 

Gathering R&D results: official inauguration of the new Emergency Management System of the 112-RM, based on Tissat’s ECHO platform

Yesterday saw the official inauguration of the new Emergency Management System of the 112 of the Region of Murcia, based on Tissat’s ECHO platform (Emergencias Control Holístico Operativo) and the result of its R&D projects: ECHO costs four times less to acquire than other products developed by specialized companies, and its maintenance costs are roughly half.

The news has been covered by several media outlets:

For my part, let me highlight the following:
  • The Regional Minister for the Presidency and Employment, José Gabriel Ruiz, highlighted that “it is the first centre in Spain capable of integrating with other similar emergency-management operational platforms, such as that of the Military Emergencies Unit (UME)”.
  • The Region of Murcia, the UME and the Navy will hold a forest-fire emergency response drill tomorrow at the La Algameca Naval Station.
  • The ECHO technological platform has been developed by TISSAT and can grow according to the needs of the 1-1-2 service.
  • ECHO costs four times less to acquire than other products developed by specialized companies, and its maintenance costs are roughly half.
 
 

Sunday, April 20, 2014

“Icehouse” release of OpenStack has just been delivered

This post is just a reminder that, as planned, a couple of days ago (Thursday the 17th) the new version of OpenStack, named Icehouse, was released.
 
As Stefano Maffulli says in his e-mail to the OpenStack community, IT IS THE RESULT OF THE EFFORT OF 1,202 PEOPLE from 120 organizations who contributed to its development.
 
Approximately 350 new features have been added (rolling upgrades, federated identity, tighter platform integration, etc.), but in my opinion the most significant is that the “OpenStack Database Service” (Trove), which was incubated during the Havana release cycle, is now available.
 
Other programs still in incubation (already under development during Icehouse) are Sahara (OpenStack Data Processing, i.e. provisioning a Hadoop cluster on OpenStack), Ironic (OpenStack Bare Metal as a Service) and Marconi (OpenStack Messaging), and we hope they go live in the next release of OpenStack, code-named Juno, expected in 6 months.
 
At Tissat we have been testing the latest beta versions and they look great, and we are starting to plan the migration of our LIVE platform.
 
Quoted from the official press release, these are the main features (module by module):
  • OpenStack Database Service (Trove): A new capability included in the integrated release allows users to manage relational database services in an OpenStack environment.
  • OpenStack Compute (Nova): New support for rolling upgrades minimizes the impact to running workloads during the upgrade process. Testing requirements for third-party drivers have become more stringent, and scheduler performance is improved. Other enhancements include improved boot process reliability across platform services, new features exposed to end users via API updates (e.g., target machines by affinity) and more efficient access to the data layer to improve performance, especially at scale.
  • OpenStack Object Storage (Swift): A major new feature is discoverability, which dramatically improves workflows and saves time by allowing users to ask any Object Storage cloud what capabilities are available via an API call (a short discoverability sketch follows this list). A new replication process significantly improves performance, with the introduction of ssync to transport data more efficiently.
  • OpenStack Block Storage (Cinder): Enhancements have been added for backend migration with tiered storage environments, allowing for performance management in heterogeneous environments. Mandatory testing for external drivers now ensures a consistent user experience across storage platforms, and fully distributed services improve scalability.
  • OpenStack Networking (Neutron): Tighter integration with OpenStack Compute improves performance of provisioning actions as well as consistency with bulk instance creation. Better functional testing for actions that require coordination between multiple services and third-party driver testing ensure consistency and reliability across network implementations.
  • OpenStack Identity Service (Keystone): First iteration of federated authentication is now supported allowing users to access private and public OpenStack clouds with the same credentials.
  • OpenStack Orchestration (Heat): Automated scaling of additional resources across the platform, including compute, storage and networking is now available. A new configuration API brings more lifecycle management for applications, and new capabilities are available to end-users that were previously limited to cloud administrators. Collaboration with OASIS resulted in the TOSCA Simple Profile in YAML v1.0, demonstrating how the feedback and expertise of hands-on OpenStack developers can dramatically improve the applicability of standards.
  • OpenStack Telemetry (Ceilometer): Improved access to metering data used for automated actions or billing / chargeback purposes.
  • OpenStack Dashboard (Horizon): Design is updated with new navigation and user experience improvements (e.g., in-line editing). The Dashboard is now available in 16 languages, including German, Serbian and Hindi added during this release cycle.
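As a small taste of the Swift discoverability feature mentioned in the Object Storage item above, a Python sketch (the proxy URL is a placeholder): any client can ask the cluster what it supports with one unauthenticated GET to /info:

    # Minimal sketch: query a Swift cluster's capabilities via the /info
    # endpoint introduced by the discoverability feature. URL is illustrative.
    import requests

    r = requests.get("http://proxy:8080/info")
    r.raise_for_status()
    capabilities = r.json()

    # The reply maps middleware/feature names to their settings, e.g. the
    # cluster's constraints such as the maximum object size it accepts.
    print(sorted(capabilities.keys()))
    print(capabilities["swift"]["max_file_size"])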
 
And these are other interesting links:

Tuesday, April 1, 2014

Separate NTT and Peer 1 surveys reveal changes in cloud buying patterns after the discovery of NSA activities

Around November 2013 I wrote on this blog a 3-post series about one of the consequences of the NSA espionage, as the data disclosed by Snowden was being made public and analyzed; the posts were titled Personal Data Privacy & (Europe’s) Cloud Regulation: (I) The dilemma, (II) The privacy approach, and (III) Resignation?. In those posts I drew my own conclusions:
  • Finding the balance between Personal Data Privacy and Business Regulation is key, and it’s not easy to solve this dilemma; it is even harder when the business is built around a technology like the Cloud, where free movement of data is intrinsically one of its advantages.
  • Data security must be improved (the use of strong encryption that can protect user data from all but the most intense decryption efforts).
  • Finally, another worrying reflection to make is that the NSA has shown that it is subject to the same risks of Data Loss (no matter how it happens) as any other business, and Snowden is certainly not the only one who had access to other people’s private data.
  • I understand better why (although very slowly) the European Commission wants to regulate some related subjects more strictly, even though those measures may have a negative impact on both business and innovation.

Now it has been found that NSA activities have changed cloud buying patterns, according to separate NTT and Peer 1 surveys.
 
On one hand, a survey conducted by NTT Communications (titled “NSA Aftershocks: How Snowden has changed IT decision-makers’ approach to the cloud”) shows the consequences of the NSA activity mainly in the US, but also in Canada, the UK, France, Germany and Hong Kong. From my point of view, the main findings are:
  • Almost nine tenths of ICT decision-makers are changing their cloud buying behaviours in the wake of Edward Snowden’s cyber-surveillance allegations.
  • Only 5% of respondents believe location does not matter when it comes to storing company data
  • It found 25% of UK and Canadian IT decision makers said they had made plans to move company data outside of the US
Please let me quote (and extract) this 9-point summary of the report conducted by NTT Communications, according to the press release published by NTT itself:
1) 88% of ICT decision-makers are changing their cloud buying behaviour, with 38% amending their procurement conditions for cloud providers
2) Only 5% of respondents believe location does not matter when it comes to storing company data
3) 31% of ICT decision-makers are moving data to locations where the business knows it will be safe
4) 62% of those not currently using cloud feel the revelations have prevented them from moving their ICT into the Cloud
5) ICT decision-makers now prefer buying a cloud service which is located in their own region, especially EU respondents 97% and US respondents 92%
6) 52% are carrying out greater due diligence on cloud providers than ever before
7) 16% is delaying or cancelling contracts with cloud service providers
8) 84% feel they need more training on data protection laws
9) 82% of all ICT decision-makers globally agree with proposals by Angela Merkel for separating data networks
Note: The survey questioned 1,000 ICT decision makers on their approach to the Cloud, and took responses from decision-makers in France, Germany, Hong Kong, the UK and the US.
 
On the other hand, Peer 1 surveyed 300 companies about storing data in the US to analyze the effects of the NSA activity (after the Snowden revelations), and they found (let me add to the previous NTT list):
10) It found 25% of UK and Canadian IT decision makers said they had made plans to move company data outside of the US.
Note: See this DataCenter Dynamics news for more details.
 
Finally, Microsoft’s change of policy a few months after the Snowden scandal is well known:
11) Microsoft allowed its foreign customers to move personal data stored on servers outside of the US in January following the scandal.
 
Coming back to the NTT report, its Vice President of Product Strategy in Europe, Len Padilla, said the results show the NSA allegations have changed ICT decision-makers’ attitudes towards cloud computing and where data is stored.
He said decision makers, however, need to keep in mind the benefits that cloud can bring to business services. And he also adds:
“Despite the scandal and global security threat, business executives need to remember that cloud platforms do help firms become more agile, and do help foster technology innovation, even in the most risk-averse organizations”
“ICT decision-makers are working hard to find ways to retain those benefits and protect the organization against being compromised in any way. There is optimism that the industry can solve these issues through restricting data movement and encryption of data”.

Wednesday, March 12, 2014

TISSAT SUCCESSFULLY RENEWS THE CERTIFICATION OF ITS INTEGRATED MANAGEMENT SYSTEM

The company remains a pioneer in standards certification since 2009, the year it became the first Spanish company to be certified in Service Management under the ISO 20000 standard.
 
Tissat, 10 March 2014. Tissat renews for another year its Integrated Management System, composed of the standards for Quality Management (UNE-EN ISO 9001), Service Management (UNE-ISO/IEC 20000), Security Management (UNE-ISO/IEC 27001), Environmental Management (UNE-EN ISO 14001), Energy Management (UNE-EN ISO 50001), R&D&I Management (UNE 166002) and Energy Sustainability (SECPD EA:0044).
 
These standards guarantee both the environmental and the energy management of its data centres, as well as security management and the operation of housing/hosting, connectivity, storage and backup, contact centre, e-mail and disaster recovery (DRP) services, both in its Paterna data centre and in its Castellón data centre, the latter certified as TIER IV in design by the Uptime Institute within the TIA-942 framework.
 
“The final success of certification in such disparate standards for a mission-critical outsourcing organization like Tissat, together with the other certifications it already holds (CMMI L2, EU Code of Conduct on Data Centres), places it in a very advantageous and competitive position within the ICT field, Tissat being one of the companies with the most certifications to date,” says Carmen García, Director of Tissat Madrid.
 
Tissat has been committed to innovation and quality from the outset. In fact, it remains a pioneer in standards certification: in 2009 it became the first Spanish company to be certified in Service Management under the ISO 20000 standard, and in 2012 it again became a reference company by being the first to certify both of its data centres under the AENOR Energy Sustainability standard (SECPD EA:0044).

Sunday, January 26, 2014

Virtualization vs. Cloud Computing (III): business differences, plus other technical ones, and conclusions

Once again let me start by recalling that this post refers to the scope and context defined in the first post of this series (titled “Virtualization vs Cloud Computing (I): what are we going to compare?”), although at the end of this post we’ll widen it.
 
Besides, as a summary, in the two previous posts we concluded that:
  • Virtualization is an enabling technology for Cloud Computing, one of the building blocks “generally” used for building a Cloud Computing solution (“generally” but not always, because nowadays the Cloud is starting to use other technologies, away from pure virtualized environments, to offer “Bare Metal as a Service” …)
  • The services provided by a “Cloud Computing Management Environment” and by a “Virtualization Management Environment” ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED: the self-service characteristic and the location-independence feature (in the sense of no location knowledge) are the main differences, and in some cases (depending on the platform) so is massive scale-out.
 
Getting to the subject: another technological point of comparison is that almost no Virtualization Management Environment currently offers the user real-time knowledge of how long the VM has been in use, or other service metrics (“measured service” is an essential Cloud characteristic); or maybe the user can get that knowledge, but not in a friendly way. The main reason for this is the different business models they were “initially” conceived for:
  • Virtualization was born to take advantage of unused resources in a physical machine, solving several problems that appeared in scaled-out servers: different physical characteristics for different cluster subsets (after one or more expansions), coarse granularity in resource assignment that left resources unused, and security issues when applications from competing companies ran on the same physical machine. However, although virtualization allows unused resources to be exploited in a secure way, in practice it let traditional DataCenter service providers move from (or add to) their (physical) “Hosting” business model a “Virtual Hosting” model with lower prices, while the billing model stayed the same: in general the customer is billed as for physical hosting, with a fixed monthly rate whose cost is proportional to the power (performance) of the VM and the associated resources contracted, regardless of the real usage the customer makes of the virtual machine.
  • Cloud Computing was born to “allow” a real pay-per-use model. For this reason the self-service feature is as important as the capability to turn the VM on or off whenever the customer wants, because (s)he doesn’t pay for the standstill periods. On this subject, please note that the technological Cloud Computing concept only states that the services must be metered (and that this information must be continuously available to the customer), which allows the provider to bill for real usage but does not make such billing mandatory.
  • Of course, the two business models mentioned above are the two extremes of a broad market and represent the “pure” business models, but today there are several intermediate hybrid business models: for example, cloud-computing-based models that offer discounts if you contract for a long fixed period, or that offer a lower price per hour if you pay a monthly fee (one of the Amazon options), or purely technological Virtualization Management Environments that offer a pay-per-use business model, and so on. AMAZON (the great Cloud innovator) is a good example: “Reserved Instances” give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly charge for that instance (there are three Reserved Instance types, Light, Medium and Heavy Utilization, that let you balance the amount you pay upfront with your effective hourly price); they also offer volume discounts, “Spot Instances”, and so on. A toy numeric comparison of these billing models follows this list.
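As promised, here is the toy comparison in Python (all prices are invented, purely for illustration; the point is only how each bill responds to actual usage):

    # Toy comparison of "virtual hosting" flat-rate billing, cloud
    # pay-per-use and an Amazon-style "reserved" hybrid. All prices invented.
    HOURS_IN_MONTH = 720

    def flat_rate(hours_used, monthly_fee=50.0):
        """Virtual hosting: the bill ignores actual usage."""
        return monthly_fee

    def pay_per_use(hours_used, hourly=0.12):
        """Pure cloud model: you stop paying when the VM is off."""
        return hours_used * hourly

    def reserved(hours_used, upfront_per_month=20.0, hourly=0.05):
        """Hybrid: an upfront fee buys a lower hourly price."""
        return upfront_per_month + hours_used * hourly

    for hours in (100, 400, HOURS_IN_MONTH):
        print(hours, flat_rate(hours), pay_per_use(hours), reserved(hours))
    # Low usage favours pay-per-use; near 24x7 usage favours the flat or
    # reserved models, which is exactly why the hybrid models exist.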
 
Finally, concerning the comparison points in the initial (reduced) scope we defined, new customer needs are emerging to deploy applications on physical servers as well as on virtual servers, while keeping all the cloud model advantages (and essential characteristics): that’s the case, for example, when your application requires physical servers, or when your production environment is too performance-sensitive to run in VMs. Actually, you don’t need a virtualized environment to be considered a cloud environment: your “virtual” instance might be a “container”, which is not virtualized but runs on bare metal (just sharing it with other containers), or it might even run directly on, and fully use, the bare metal. “Containers”, as aforesaid, are considered by some authors a sort of virtualization, so let me present an example of the latter case: OpenStack is currently developing the new “Ironic” module that will provide “Bare Metal as a Service”, so it’ll be possible to use the same API and the same pipeline to build, test, and deploy applications on both virtual and physical machines. Therefore, cloud technology is starting to use other technologies away from pure virtualized environments.
 
We initially limited the scope of this comparison to “compute as a resource”, but if we slightly widen that context to include (as usual) any computing-related resources, i.e. storage and communications resources, then new differences arise (depending on the solution used for building the Cloud Management Environment and the Virtualization Management Environment):
  • Most (but not all) Virtualization Management Environments offer only compute and block storage services, and usually do not offer Object Storage as a Service; besides, they tend to offer “Storage Virtualization” (SV, i.e. capacity is separated from specific storage hardware resources) but not “Software Defined Storage” (SDS), which differs from the former (SV) in that with SDS not only capacity but also services are separated from the storage hardware.
  • Moreover, almost none of them (Virtualization Management Environments) offers communications management as a Service. I mean not only virtual networks, but also the main communications devices provided as a service: routers, firewalls, load balancers and so on. Furthermore, “Software Defined Networking” (SDN) is, as far as I know, a technology currently used only in Cloud Computing Environments, where this kind of service is starting to be offered. Of course, some Virtualization Environments do offer this kind of communications services, but not in a self-service way where you can define your own internal communications layout and structure, e.g. as shown in the next picture (taken from a topology designed by a customer using TISSAT’s Cloud Platform built on OpenStack):
TISSAT’s IaaS Cloud Platform
 
 
At the end of this 3-post series, as a summary, three conclusions:
  1. The technological concepts (virtualization and cloud computing) should not be confused with the pure business models they were initially intended for: virtual hosting (a fixed monthly rate, lower than physical hosting) and pay-per-use (which some people call the Cloud Computing business model), respectively. And don’t forget that at present there are a lot of mixed business models, regardless of the underlying technology.
  2. Both virtualization and cloud computing allow you to do more with the hardware you have by maximizing the utilization of your computing resources (and therefore, if you are contracting the service, you can expect lower expenses). However, although there is currently an inevitable connection between them, since the former (virtualization) is “generally” used to implement the latter (cloud), this connection could soon be broken by new technologies and innovations, and they are not the same thing: BOTH ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED (the self-service feature, no location knowledge, massive scale-out, even metered service in some cases) and there are some technical differences between them. Additionally, depending on the user’s needs, either one may be the better choice: a lot of customers have enough with server virtualization, and it may even be the best solution for their needs; in other cases the cloud, and not virtualization, is the best solution for the customer’s needs.
  3. Although still circumscribed to IaaS (i.e. leaving aside PaaS and SaaS), when we widen the comparison scope to include (as usual) any computing-related resources (not only compute but also storage and communications resources), new differences arise since, for example, communications-related services (routing, firewalls, load balancing, etc.) are seldom (or never) offered as a Service in Virtualization Management Environments (in a self-service way where you can define your own internal communications layout and structure, taking advantage of Software Defined Networking technology). Besides, another main difference is how Storage as a Service is provided: in a Virtualization Environment it is usually reduced to Block Storage, not including Object Storage (as Cloud Environments do), and provided as Storage Virtualization rather than Software Defined Storage.

Note: Please let me add that Tissat (the company I work for) offers all these sorts of real IaaS Cloud Services, as well as more traditional DataCenter services (such as housing, hosting, virtualized hosting, DRP, and so on), based on its Data Centers Federation (which includes Walhalla, a DC certified as Tier IV by the Uptime Institute) and using different products and solutions (currently VMware, OpenStack, and so on); most of the ideas in this post series are drawn from that experience.

Thursday, January 16, 2014

Virtualization vs. Cloud Computing (II): more technological differences

First of all, it is quite important to remember that this post refers to the scope and context defined in my previous post (titled “Virtualization vs Cloud Computing (I): what are we going to compare?”), where I defined the Virtualization and Cloud Computing concepts used for this comparison. Those definitions could of course be different, but then the conclusions would differ too, so, given their importance, let me summarize them in the following points (if you need a more detailed explanation, please read the previous post):
  • By “Virtualization” we refer ONLY to “Hardware virtualization”, i.e. the creation of a virtual machine (VM) that acts like a real computer with an operating system (quoted from Wikipedia).
  • For Cloud Computing we’ll use the clearest and most widely accepted definition, the NIST one, which says: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” and which consequently, regardless of the Service Model (IaaS, PaaS or SaaS) and the Deployment Model (Private, Community, Public or Hybrid), states 5 Essential Characteristics for a Cloud Computing service:
    • On-demand self-service,
    • Broad network access,
    • Resource pooling (multi-tenant),
    • Rapid elasticity,
    • Measured service.
  • Besides, to make the comparison possible we must focus ONLY on Infrastructure as a Service (IaaS); in this discussion we leave aside the other cloud service models, PaaS and SaaS.
  • Finally, as with the virtualization concept, to ease the comparison we ONLY speak about the “compute resource” within the IaaS Cloud Computing concept.
 
In this context, in the last post we saw how Virtualization is an enabling technology for the Cloud, one of the building blocks used for Cloud Computing. Nevertheless, let me note in advance that at the end of this post we will review and nuance this point, because of both recently arisen customer needs and technological innovations.
 
Referring to the other essential characteristics defined by NIST for Cloud Computing: for example, “pure” Virtualization by itself does not provide the customer with a self-service layer, and without that layer you cannot deliver Compute as a Service; i.e. a self-service model is not an essential component of virtualization, as it is in cloud computing. The next picture (quadrant) shows, on one hand, the evolution of IT in the last 3 decades (roughly): starting in the top-left corner (“maverick niche IT”), when any company department provisioned whatever IT infrastructure it wanted (according to its own criteria); then moving to when provision was controlled and management was unified but every department had its own machines (bottom-left corner, or “managed IT infrastructures”); then to the last decade, when IT was provisioned and managed by the IT department and shared by all or several departments (using virtualization); and finally to the current times of Cloud Computing. On the other hand, it also shows how self-service is one of the differentiators between Virtualization and Cloud Computing. (Note: I’ve used this quadrant several times since I saw it in some publication, but I cannot remember which one, so I apologize for not referencing it; besides, I recreated it, so some details may have changed.)
 
IT Evolution Cycle
Break: the above picture does not show the older era (lasting more than a decade) of mainframes, a realm where IBM was king. In such environments provision and management were controlled and the infrastructures were shared, so it would be placed in the same corner as virtualization, i.e. the bottom-right corner. The transition from mainframes to the free-riders’ “maverick niche IT” was due to several factors but, in my opinion, two were most significant: on one hand, the commercial work of IBM’s competitors, such as Digital, which offered universities very low-cost computers (such as Digital’s VAX and PDP series), with the result that computer science graduates wanted similar computers in their new jobs; and on the other hand, a disruptive technology, the emergence of the PC (by IBM). Both of these, among other factors, fostered and fueled the “selfie” spirit (please let me use this modern buzzword with a different meaning: the wish to be self-sufficient, which is natural in human beings) that boosted the gradual transition to self-service dedicated infrastructures (i.e. from the bottom-right corner to the top-left one). A last thought on this point: certainly the rise of the Internet, as well as communications and other enterprise needs, also contributed to this transition but, in my opinion, only once the movement had already started.
 
Coming back to the self-service essential characteristic: some Virtualization Management Environments include a self-service component (but it’s not mandatory), as well as features that let the customer know how much usage has been made (measured service), and resources that are elastically provisioned and released (rapid elasticity). Once again, all these features are mandatory in the Cloud but optional in a “Virtualization Management Environment”, since they are not intrinsic to virtualization technology. In fact, a “Virtualization Management Environment” becomes a Cloud Computing Environment when it meets all 5 NIST essential characteristics, an evolution that, for example, VMware has been following these years … Given that in the enterprise market VMware’s virtualization management environment (the ESX hypervisor and vSphere) is king, let me analyze this last subject a little deeper, as a good example of this point (a toy checklist sketch follows this list):
  • Although I’m a supporter of Open Source, and therefore of OpenStack when speaking about the Cloud, it must be recognized that VMware has a powerful suite of virtualization and cloud products. Concerning this point of the discussion, two products must be distinguished: “vCenter” and “vCloud Director”.
  • On one hand, vCenter is what manages your vSphere virtual infrastructure: hosts, virtual machines, and virtual resources (CPU, memory, storage, and network), i.e. a pure virtualization management environment.
  • On the other hand, vCloud Director (vCD) sits at a higher level in the cloud infrastructure. It’s a software solution providing the interface, automation, and management feature set that allows enterprises and service providers to supply vSphere resources as a Web-based service; i.e. it takes advantage of vCenter to orchestrate the provisioning of Cloud resources by enabling self-service access to compute infrastructure through the abstraction of virtualized resources. In other words, it abstracts the virtualized resources so that users gain self-service access to them through a service catalogue: it provides the self-service portal that accepts user requests and translates them into tasks in the vSphere environment via vCenter.
  • In summary, vCenter is required to administer your virtual infrastructure, but it doesn’t create a cloud. The first piece required to create your cloud is vCloud Director. vCloud Director will talk to your vCenter server(s), but certain tasks will have to be done first in vCenter, such as creating an HA/DRS cluster, configuring the distributed virtual switch, adding hosts, etc.
  • Note: by the way, now that VMware has announced that it is splitting vCloud Director into vCenter and vCloud Automation Center (a product derived from VMware’s DynamicOps acquisition), and it also seems that capabilities like multi-tenancy management and self-provisioning will be pushed into vCloud Automation Center (vCAC) while constructs like the Virtual Data Center will fall into vCenter, everyone who really wants a Cloud environment with VMware will have to buy (or migrate to) vCAC, a heavyweight piece of software, much like an IT service management product, requiring deep integration with IT business processes and an ERP-like implementation scenario, since pure vCenter will keep lacking the cloud-like self-service feature.
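As noted above, here is the toy checklist in Python (purely illustrative; the feature names simply restate the five NIST essential characteristics) capturing when an environment counts as “cloud” under the definition used in this series:

    # Toy checklist: under the NIST definition, an environment "is cloud"
    # only if all five essential characteristics are present.
    NIST_ESSENTIALS = ("on_demand_self_service", "broad_network_access",
                       "resource_pooling", "rapid_elasticity",
                       "measured_service")

    def is_cloud(features):
        """features: the set of capability names a platform provides."""
        return all(f in features for f in NIST_ESSENTIALS)

    # A typical pure virtualization management environment (illustrative):
    virtualization_mgmt = {"resource_pooling", "rapid_elasticity"}
    print(is_cloud(virtualization_mgmt))      # -> False
    print(is_cloud(set(NIST_ESSENTIALS)))     # -> True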
 
However, THERE ARE STILL MORE DIFFERENCES, because according to NIST (and it’s intrinsic to the Cloud definition) the “Resource pooling (multi-tenant)” property implies “a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter)”. However, as you know, most Virtualization Management Platforms let you choose on which physical machine your VM is going to run, or let you move your VM from one physical machine to another specific physical machine (chosen by the customer). In fact, some customers want (even need) such features, and that is one of the signs that tells you whether the customer really wants or needs a Virtualization Environment or a real Cloud Computing Environment (for example, if your customer tells you he wants to move his VM by himself from one physical machine to another, he’s not specifying a Cloud environment, but a Virtualization Management Environment). This absence of location knowledge also applies to other features such as High Availability (HA), Fault Tolerance (FT) and so on: for example, in a Cloud Management Environment you can at most specify a different “infrastructure area” (for example a different DataCentre, or similar) for locating the stand-by VM, whereas in a pure Virtualization Management Environment you’re able to choose the specific host (physical machine). A minimal, purely hypothetical sketch of the two interface styles follows.
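Here is that sketch: two purely hypothetical Python interfaces (neither models a real product API; only the parameters matter), showing that a cloud API hides host placement while a virtualization-management API exposes it:

    # Hypothetical interfaces, for illustration only: a cloud API hides
    # placement, while a virtualization-management API exposes it.

    class CloudComputeAPI:
        def boot(self, image, flavor, availability_zone=None):
            """The consumer may hint at a coarse area (zone, datacenter)
            at most; the exact physical host is the scheduler's choice."""

    class VirtualizationManagerAPI:
        def create_vm(self, image, cpus, ram_gb, host):
            """The operator targets an exact physical machine ('host')..."""

        def migrate_vm(self, vm_id, destination_host):
            """...and may later move the VM to another specific host."""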
 
Moreover, and associated with the previous idea of no location knowledge, the Cloud is intrinsically conceived for massive scale-out, i.e. without limit, and for physically distributed resources (possibly in different places), whereas many virtualization management environments are intended to manage a reasonable number (maybe big, but not enormous) of physical machines (hosts), in most cases at the same site.
 
Finally, as anticipated at the beginning of this post, new customer needs are emerging to deploy applications on physical servers, instead of virtual servers, while keeping all the cloud model advantages (and essential characteristics): that’s the case, for example, when your application requires physical servers, or when your production environment is too performance-sensitive to run in VMs. Actually, you don’t need a virtualized environment to be considered a cloud environment: your “virtual” instance might be a “container”, which is not virtualized but runs on bare metal (just sharing it with other containers), or it might even run directly on, and fully use, the bare metal. “Containers”, as aforesaid, are considered by some authors a sort of virtualization, so let me expose an example of the second case: OpenStack is currently developing the new “Ironic” module that will provide “Bare Metal as a Service”, so it’ll be possible to use the same API and the same pipeline to build, test, and deploy applications on both virtual and physical machines. Therefore, cloud technology is starting to use other technologies away from pure virtualized environments.
 
So far, as a consequence of what we’ve seen in the previous post and in the current one, we can conclude that:
  • Virtualization is an enabling technology for Cloud Computing, one of the building blocks “generally” used for building a Cloud Computing solution (“generally” but not always, because nowadays the Cloud is starting to use other technologies, away from pure virtualized environments, to offer “Bare Metal as a Service” …)
  • The services provided by a “Cloud Computing Management Environment” and by a “Virtualization Management Environment” ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED:
    • the self-service characteristic is mandatory in the Cloud, and optional in a virtualization environment.
    • the location-independence feature (in the sense of no location knowledge) is intrinsically essential in the Cloud, whereas most virtualization environments let you know, or operate on, the location of the VM.
    • massive scale-out is also inherent to Cloud Environments, whereas many Virtualization Management Environments are simply not prepared to manage “enormous” quantities of machines distributed across different sites.
 
In the next post I’ll finish this comparison, focusing on a couple of technological differences:
  • The first one, “measured service”, in some way arises from the different business models that Virtualization and Cloud Computing were INITIALLY intended for, and it will let me compare those business models too.
  • For the second one, we will slightly widen the comparison scope to include (as usual) any computing-related resources (not only compute but also storage and communications resources), and then we’ll analyze new differences: for example, communications-related services (routing, firewalls, load balancing, etc.) are seldom (or never) offered as a Service in Virtualization Management Environments (in a self-service way where you can define your own internal communications layout and structure, and so on).
 
Note: Please let me add that Tissat (the company I work for) offers real IaaS Cloud Services as well as more traditional DataCenter services (housing, hosting, virtualized hosting, DRP, and so on), based on its Data Centers Federation (which includes Walhalla, a DC certified as Tier IV by the Uptime Institute) and using different products and solutions (currently VMware, OpenStack, and so on); most of the ideas in this post series are drawn from that experience.

Sunday, January 12, 2014

Virtualization vs Cloud Computing (I): what are we going to compare?

In this post series (comprising two more posts) my final intention is to clarify the differences between a “Cloud Computing Management Environment” and a “Virtualization Management Environment”. To achieve that, first of all I should clarify the differences between Cloud Computing and Virtualization, two technological concepts that are frequently confused or mixed up, although there are significant differences between them. Finally, another goal is to differentiate Cloud Computing as a technological concept from Cloud Computing as a business model: some people think Cloud Computing is only a business concept (I hope to show they are wrong), and others confuse the initial business model Cloud Computing was intended for with the technological concept: currently (and Amazon is the best exponent of this) there are a lot of different and mixed business models for exploiting Cloud Computing services.
 
The first question is: is the comparison possible, or are we going to compare apples with oranges? I think the comparison is possible, but within an appropriate and well-defined scope.
 
So, first we need to spend some paragraphs clarifying both concepts, because both of them (for different reasons) tend to be interpreted in different ways by different people. First of all, let me say that I don’t claim my definitions are the correct ones (besides, they are not mine; I chose, at least, the most widely accepted ones at present), but the comparison will be based on these definitions, and no others, in order to be able to focus the discussion.
 
On one hand, “Virtualization” is an abstraction process, an IT concept that arose in the 60s, according to Wikipedia, as a method of logically dividing mainframe resources between different applications. However, in my opinion, its diffusion and the source of its current meaning are due to Andrew S. Tanenbaum, author of MINIX (a free Unix-like operating system for teaching purposes) and also author of several very famous and well-known books such as “Structured Computer Organization” (first edition 1976), “Computer Networks” (first edition 1981), “Operating Systems: Design and Implementation” (first edition 1987) and “Distributed Operating Systems” (first edition 1995); some of them, evolved and updated, are still used in universities around the world (the latest editions of some of them date from 2010). He was also famous for his debate with Linus Torvalds regarding the kernel design of Linux (and Torvalds recognized that the “Operating Systems: Design and Implementation” book and the MINIX OS were the inspiration for the Linux kernel; by the way, as you have probably noticed, I’m biased on this subject because I like his books a lot, and I used them a lot when I was a university teacher). Coming back to the point: the latest US edition of “Structured Computer Organization” was in 2006, but already in the first one, in 1976, he introduced the concept of operating system virtualization, a concept he spread throughout his books in the different contexts they treated.
 
Currently, in the IT area, “virtualization” refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources. Among all of these, in this post we are going to refer ONLY to “hardware virtualization”, i.e. the creation of a virtual machine (VM) that acts like a real computer with an operating system. Software executed on these virtual machines (VMs) is separated from the underlying hardware resources. For example, a computer running Linux may host a virtual machine that looks like a computer with the Windows operating system; Windows-based software can then be run on that virtual machine (excerpted from Wikipedia).
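To make the idea tangible, let me add a minimal sketch (my own illustration, not part of the Wikipedia excerpt) of how an administrator could define and boot such a VM on a Linux/KVM host using the Python libvirt bindings; the domain name, sizes and disk path are hypothetical placeholders:

    import libvirt

    # Hypothetical domain definition: a small KVM guest whose (placeholder)
    # qcow2 disk image could contain, say, a Windows installation.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-guest</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/demo-guest.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
    domain = conn.defineXML(DOMAIN_XML)     # register the VM persistently
    domain.create()                         # boot the guest
    print(f"VM '{domain.name()}' active: {bool(domain.isActive())}")
    conn.close()

Note that a privileged administrator drives every step here; keep that in mind, because it is precisely what the cloud’s self-service characteristic removes.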
 
On the other hand, “Cloud Computing” is a concept that arose from several previous ones. I share the opinion of more experienced people that it is probably a mixture of two earlier ideas: the “Utility Computing” paradigm (packaging computing resources, such as computation, storage and services, as a metered service provisioned on demand, as utility companies do) and “Grid Computing” (a collection of distributed computing resources collaborating to reach a common goal; a well-known example was the SETI program). Currently, as everybody knows, Cloud is also a hyped concept that is misused by a lot of companies claiming to offer (fake) Cloud services, but there are also plenty of real Cloud service providers. Besides, I think Cloud Computing is an open concept that could be redefined in the coming years depending on how customers (companies, organizations or individuals) use its services and demand new ones, how providers imagine and develop new services and, also, how technical advances enable new ideas or services. But currently there are some good and clear definitions and, probably, the most used and accepted is the NIST one, which says: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”. Consequently, disregarding the Service Model (IaaS, PaaS or SaaS) and the Deployment Model (Private, Community, Public or Hybrid), it states 5 Essential Characteristics for any Cloud Computing service, which I copy below (excerpted from NIST’s Cloud Definition) because they are worth remembering:
  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider. (See the short sketch just after this list.)
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling (multi-tenant). The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
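To see the first of those characteristics in action, let me add a minimal sketch (my own illustration, not part of the NIST text) of a consumer self-provisioning a server on an OpenStack-based IaaS with the Python openstacksdk library; the cloud entry, image, flavor and network names are hypothetical placeholders:

    import openstack

    # Credentials come from a pre-configured clouds.yaml entry;
    # 'my-iaas' is a placeholder cloud name.
    conn = openstack.connect(cloud='my-iaas')

    # The consumer picks a catalogue image and a size, then provisions a
    # server: no human interaction with the provider is required.
    image = conn.compute.find_image('ubuntu-14.04')       # hypothetical name
    flavor = conn.compute.find_flavor('m1.small')         # hypothetical name
    network = conn.network.find_network('private-net')    # hypothetical name

    server = conn.compute.create_server(
        name='self-service-vm',
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{'uuid': network.id}],
    )
    server = conn.compute.wait_for_server(server)         # block until ACTIVE
    print(f"Provisioned {server.name} (status {server.status})")

The same request could equally be made through a dashboard or a plain REST call; the point is that provisioning happens automatically, exactly as the definition demands.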
 
Besides, to make the comparison possible we must focus ONLY on Infrastructure as a Service (IaaS), which is defined by NIST as “The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)”; in this discussion we leave aside the other service models (PaaS and SaaS). Besides, as in the virtualization case, to ease the comparison we speak ONLY about the “compute” resource within the IaaS Cloud Computing concept.

Inside this scope and context, virtualization is one of the technologies used to build a Cloud, mainly to supply (implement) the “resource pooling (multi-tenant)” characteristic (following the NIST definition). Moreover, it is currently the most important technology enabling that goal, but not the only possible one, since others could be used, for example containers (although some people consider containers a sort of virtualization) or grid technologies (as in the SETI program). Besides, other developments or software are needed to provide the remaining features required to be a real Cloud (per the NIST definition). Some authors consider “Orchestration” to be what allows computing to be consumed as a utility and what separates cloud computing from virtualization: orchestration is the combination of tools, processes and architecture that enables virtualization to be delivered as a service (quoted from this link). This architecture allows end-users to self-provision their own servers, applications and other resources. Virtualization by itself allows companies to fully maximize the computing resources at their disposal, but it still requires a system administrator to provision the virtual machine (VM) for the end-user. In other words, virtualization is an enabling technology for Cloud, one of the building blocks used for Cloud Computing.
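To illustrate what that orchestration layer adds on top of a bare hypervisor, here is a deliberately toy sketch of my own (all names, quotas and numbers are made up); a real orchestrator such as OpenStack does the same jobs at scale:

    import itertools

    # Toy resource pool: hypervisor hosts and their free vCPUs (made up).
    POOL = {'host-a': 8, 'host-b': 16}
    QUOTA = {'alice': 4}            # per-tenant vCPU quota (made up)
    _ids = itertools.count(1)

    def self_provision(tenant, vcpus):
        """Quota check, automatic placement and accounting: the pieces an
        orchestrator adds so that no administrator sits in the loop."""
        if vcpus > QUOTA.get(tenant, 0):
            raise PermissionError(f'quota exceeded for {tenant}')
        # Placement: pick any pooled host with enough free capacity; the
        # consumer neither knows nor chooses the physical host.
        host = next(h for h, free in POOL.items() if free >= vcpus)
        POOL[host] -= vcpus
        QUOTA[tenant] -= vcpus      # a metering hook would also record usage
        # A real orchestrator would now call the virtualization driver
        # (e.g. the libvirt API sketched earlier) on the selected host.
        vm_id = f'vm-{next(_ids)}'
        print(f'{tenant} got {vm_id} ({vcpus} vCPU) on {host}')
        return vm_id

    self_provision('alice', 2)      # an end-user request, served automatically

Everything below the self_provision() call would be virtualization; everything inside it is what turns virtualization into a cloud.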
 
However, in the next two posts of this series we’ll revisit this last point, because of both the new needs recently raised by customers and the latest innovations and technological advances; i.e. the previous paragraph will be revisited, since cloud technology is currently starting to adopt technologies beyond pure virtualized environments (such as containers or “bare metal as a Service”).
 
Besides, and more importantly, we will also see that the differences between Cloud and Virtualization go beyond the well-known statement above (“virtualization is an enabling technology for Cloud, one of the building blocks used for Cloud Computing”). We will analyze further differences: the self-service feature, location independence (in the sense of no knowledge of location), massive scale-out, even metered service in some cases, and so on, and we will conclude that BOTH ARE QUITE DIFFERENT IN HOW THE SERVICE IS PROVIDED (to be shown next week).
 
And let me anticipate that we’ll also differentiate between the two pure business models they were “initially” intended for: virtual hosting (a fixed monthly rate, but lower than the physical hosting rate) and pay-per-use (which some people call the Cloud Computing business model, even confusing the Cloud technology with the Cloud business model). Some people confuse the technological concepts with the business models; besides, it should be taken into account that at present there are a lot of mixed or hybrid business models regardless of the underlying technology, which increases the confusion too.
 
Moreover, coming back to the technological arena, when we widen the comparison scope slightly to include (as usual) any computing-related resources (not only compute, but also storage and communications resources), new differences will arise, as we’ll analyze in the third (and last) post of this series: for example, communications-related services (routing, firewalls, load balancing, etc.) are seldom (or never) offered as a Service in Virtualization Management Environments (in a self-service way, where you can define your own internal communications layout and structure, and so on).