Wednesday, January 30, 2013

Interoperability and an Open IaaS Platform Comparison: OpenStack and others vs. Amazon

In my last post I stated that the use of an open and "interoperable" Cloud platform is a key feature (even a must) to ask your Cloud Service Provider for, because it prevents (or at least hinders) becoming captive to a vendor (vendor lock-in): you'll be able to easily move your applications from one IaaS provider to another interoperable one without having to alter your own Cloud-designed programs. You can even download a copy of the open platform to run inside your own data center, which (in principle) makes it feasible to move computing jobs from a private data center to commercial clouds and back again at will. Moreover, with that approach, hybrid cloud construction becomes easier.

These are still early days in the Cloud, so we are likely to see a lot of different approaches with varying degrees of traction. In Adrian Otto's words, over time those should converge toward the natural solution, just as no internet backbone company today debates which dynamic routing protocol to use for exchanging IP traffic: they all simply use BGP to exchange routes and pass traffic to each other. Cloud can be expected to follow a similar pattern (of course the Cloud is more complex than TCP/IP network routing, but the same general principle should eventually prevail).
Regarding standardization (the best, though most difficult, path to interoperability), a lot of initiatives are arising, each focused on different Cloud subjects, for example:
  • The ODCA (Open Data Center Alliance, comprising more than 300 companies spanning multiple continents and industries), which is working actively to shape the future of cloud computing: according to its mission, a future based on open and interoperable standards;
  • or the CSA (Cloud Security Alliance), which promotes the use of best practices for providing security assurance within Cloud Computing, working closely with standardization organizations;
  • or OASIS, which has just formed the TOSCA (Topology and Orchestration Specification for Cloud Applications) Technical Committee (with CA Technologies, Capgemini, Cisco, Citrix, EMC, IBM, Red Hat, SAP, Software AG and others) to collaborate on an open standard for deploying interoperable cloud applications (i.e., for managing workloads and dev-ops);
  • or the Cloud Standards Customer Council (which has been involved in standards creation from the user perspective);
  • or other interesting initiatives;
  • or, last because it is the one I want to underline, OCCI (the Open Cloud Computing Interface), which comprises a set of open, community-led specifications delivered through the Open Grid Forum. OCCI is a protocol and API for all kinds of management tasks, currently supported by OpenStack, OpenNebula, Eucalyptus, jclouds and others. OCCI was originally initiated to create a remote management API for IaaS-model services, allowing the development of interoperable tools for common tasks including deployment, autonomic scaling and monitoring. It has since evolved into a flexible API with a strong focus on integration, portability, interoperability and innovation, while still offering a high degree of extensibility. The current release of OCCI is suitable to serve many other models in addition to IaaS, including e.g. PaaS and SaaS.
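To give a feel for what an OCCI call looks like on the wire, here is a minimal sketch (standard-library Python, nothing sent over the network) that builds the headers of a compute-creation request following OCCI's HTTP text rendering. The scheme URI and attribute names come from the OCCI infrastructure specification, but treat the exact rendering, and the helper itself, as an illustrative approximation rather than a normative example:

```python
# Sketch: building an OCCI "create compute" request. Only the request is
# constructed here; sending it to a real OCCI endpoint is out of scope.

def build_occi_create_compute(cores: int, memory_gb: float) -> dict:
    """Return the method, path and headers of an OCCI compute-creation call."""
    return {
        "method": "POST",
        "path": "/compute/",
        "headers": {
            "Content-Type": "text/occi",
            # The kind of resource being created, from the OCCI infrastructure scheme
            "Category": 'compute; '
                        'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                        'class="kind"',
            # Desired attributes of the new compute resource
            "X-OCCI-Attribute": (f"occi.compute.cores={cores}, "
                                 f"occi.compute.memory={memory_gb}"),
        },
    }

request = build_occi_create_compute(cores=2, memory_gb=4.0)
print(request["headers"]["Category"])
```

Because the rendering is plain HTTP headers, any generic HTTP client can speak OCCI, which is precisely what makes it attractive for interoperable tooling.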
However, the results of these initiatives are still to come, and at the moment one vendor, Amazon, dominates the CSP (Cloud Service Provider) market. According to some statistics, Amazon Web Services (AWS) holds 70 percent of the IaaS market; by the way, we should be honest and thank Amazon for popularizing IaaS, making it affordable, accessible and broadly relevant to the current IT market, and for continuing to innovate. Its service APIs are thereby becoming, in some ways, a "de facto" standard, since other vendors declare compatibility with them, as I'll show further in this post. Therefore, the question is:
Q: Could Amazon's Cloud APIs become interoperable?
A: It might happen (in fact Amazon seems to have taken some steps to be ready to go that way, if necessary or forced by business needs), but I don't think so, and we must remember that the API specification is not open but proprietary, and Amazon can change it by itself whenever it wants. So, currently (and almost surely in the future) you are captive to Amazon if your application is developed on its APIs.

(Note: I am not forgetting the virtualization market leader VMware, which is trying to become a big Cloud player too, but VMware's current Cloud offering is restricted to the VMware ecosystem, so the problem is the same.)

Getting to the point, and referring to IaaS, there are several open source cloud platforms that enable us to build this sort of service: mainly OpenStack (my personal bet), Eucalyptus, OpenNebula and CloudStack. Let's compare them from different points of view.

1) Let's start by analyzing their origins, community support and main service coverage:
 
OpenNebula emerged from a European research project, i.e. it was funded initially by European infrastructure grants, and is now doing rather well in deployments both inside Europe and overseas. Although some large companies such as Research In Motion, Telefónica and China Mobile also contribute to OpenNebula, they don't determine the future of the service. However, in March 2010 the main authors of OpenNebula founded C12G Labs to provide the value-added professional services that many enterprise IT shops require for internal adoption, and to free the OpenNebula project from exclusive dependence on public financing, contributing to its long-term sustainability. OpenNebula.org is now a project managed by C12G Labs.
Eucalyptus also emerged from academic research, this time at UC Santa Barbara. Sony, Puma, Trend Micro and other companies have chosen it to deploy their private clouds. Eucalyptus has a free version and a commercial edition (information on the differences in functionality is available here); obviously, the commercial edition comes with much more extended functionality. Like any other open-source product, Eucalyptus has a powerful community that contributes to platform development and helps find and fix bugs.

CloudStack is a platform for building public or private cloud deployments that Citrix acquired in 2011 when it purchased Cloud.com; in April 2012 Citrix announced that it was giving CloudStack an Apache license, creating a competing model for open source cloud deployments. Until then, Citrix was one of the more than 150 companies contributing to the OpenStack project (explained below). Citrix officials said the move was made because it needed a model that fully embraces Amazon Web Services compatibility (which it says OpenStack does not do) and because it wanted to bring cloud development offerings to market as soon as possible. Under the Apache license, CloudStack is now an open source project backed by a community of developers, and a number of well-known information-driven companies, such as Zynga, Nokia Research Center and Cloud Central, have deployed clouds using it. There is an online community ready to provide timely technical support for free, as well as an IRC channel where everyone is welcome to ask questions.

OpenStack is an open-source platform for deploying IaaS clouds. It started in the summer of 2010, when Rackspace and NASA joined their initial projects: Nebula for bare compute (virtual machines and block storage) and Swift for massive object storage. Currently, OpenStack is led by a Foundation (comprising 850 companies and 4,500 individual members) and has a broad range of support from major tech industry players, including HP, Dell, IBM, Rackspace, NASA, Cisco, NEC, AT&T, Bull, EMC, Brocade and dozens of other companies. All of the code for OpenStack is freely available under the Apache 2.0 license, and it has the largest and most active community of the four platforms analyzed in this post, currently around 7,000 people across 87 countries. It's a ubiquitous platform used both by enterprises to deploy private clouds and by service providers to launch public ones, so all kinds of companies are doing business with OpenStack: from IaaS providers such as HP, Rackspace or TISSAT, which have built their public services on it, to big users such as Cisco, which uses it for its WebEx service, by way of NASA, AT&T, Canonical (Ubuntu), the San Diego Supercomputer Center, Purdue University, the University of Melbourne, Telvent, Internap, etc.
 
 
2) Now let's address, as announced, their relationship with Amazon, i.e. their support of Amazon's APIs:

OpenNebula's current version (OpenNebula 3.8, "Twin Jet") brings valuable contributions from many industry members of its large user community, including new innovative features, and enhances its AWS and OCCI API implementations and its integration with VMware and KVM, which are the most widely used hypervisors in OpenNebula clouds.

Eucalyptus: designed to emulate Amazon's computing capabilities using computers inside any data center, Eucalyptus has for years presented itself as a logical adjunct to Amazon usage; as a result, all the scripts and software products based on the Amazon API can easily be employed in your private cloud. Indeed, Eucalyptus signed an agreement with Amazon in 2012 that also made it easier to bill its product as the natural partner to Amazon's offerings. However, that could be a double-edged sword: if OpenStack or another platform becomes a credible threat to Amazon, Amazon could simply buy (or replicate) Eucalyptus, not in order to support innumerable private clouds forever, but to smooth the path and drag reluctant corporate server-huggers ever closer to Amazon's all-consuming data centers.

CloudStack: apart from having its own full-featured RESTful API, the platform supports CloudBridge Amazon EC2, which translates Amazon API calls into CloudStack API calls (a list of the supported commands can be found here). CloudStack also provides an API compatible with AWS S3 for organizations that wish to deploy hybrid clouds.
On OpenStack, developers can automate access or build tools to manage their resources using the native OpenStack RESTful APIs. OpenStack also offers, at least at the moment, an AWS EC2-compatible API, and supports the AWS S3 API as well. However, the belief is growing in the market that OpenStack is not going to do much more in the future to support AWS integration; in fact, that was the reason Citrix gave for migrating away from the OpenStack project and releasing its CloudStack platform to the Apache Software Foundation.
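To illustrate what "EC2 compatibility" buys in practice, here is a hedged sketch: the endpoint table and `connection_settings` helper are hypothetical (the OpenStack host below is a placeholder, and port 8773 is only the historical default of Nova's EC2 layer), but the point stands that an EC2-style client keeps the same application code and swaps only its connection settings:

```python
# Sketch: an EC2-compatible API means the same client targets different
# clouds by changing configuration, not code. Hostnames here are examples.

EC2_ENDPOINTS = {
    "aws": {"host": "ec2.us-east-1.amazonaws.com", "port": 443, "path": "/"},
    # A private OpenStack cloud exposing its EC2-compatibility layer
    # (hypothetical host; 8773 was Nova's historical default EC2 port):
    "openstack": {"host": "cloud.example.org", "port": 8773,
                  "path": "/services/Cloud"},
}

def connection_settings(provider: str, access_key: str, secret_key: str) -> dict:
    """Application code stays identical; only the endpoint changes."""
    endpoint = EC2_ENDPOINTS[provider]
    return {**endpoint, "access_key": access_key, "secret_key": secret_key}

# The same hypothetical 'run_instances' client call would then work
# against either target, built from either settings dict:
public = connection_settings("aws", "my-access-key", "my-secret")
private = connection_settings("openstack", "admin", "my-secret")
print(public["host"], "vs", private["host"])
```

This is exactly why compatibility layers matter for lock-in: the switching cost collapses to a configuration change.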


In short:
  • Eucalyptus, OpenNebula and CloudStack have embraced Amazon Web Services (AWS) APIs for compatibility with AWS's application programming interfaces. OpenStack, while so far supporting AWS in its open source code, has taken a much different approach and is attempting to position itself as an open-source alternative to AWS.
  • All of them also support the OCCI API, or are on their way to doing so, enabling (or easing) future cross-interoperability.
  • Besides, while Eucalyptus, CloudStack and OpenNebula are open products offered and supported by a company (in addition to a community), OpenStack is fully open source code that vendors or end users can adopt themselves to manage a cloud.
  • Finally (among the platforms analyzed in this post), all of them offer or enable building services similar to Amazon EC2, but OpenStack is the only one that goes beyond the bare compute service and offers scalable object storage for petabytes of accessible data, like Amazon's S3 service, in addition to traditional block storage.
I'll finish this comparison of open IaaS platforms in a future post.

Thursday, January 24, 2013

Interoperability: a key feature (even a must) to ask your Cloud Service Provider for. (& OpenStack)


I'm a fan of OpenStack and, as promised in some previous posts (for example in "Cloud SLAs: a technical point of view"), I'm going to explain why we are using it.
 
First of all, let me give a very short and light explanation of OpenStack, without covering its components or its architecture, but underlining two main facts (on which my reasons are founded):
  • Established by NASA and Rackspace in 2010, the OpenStack open-source cloud project has done a remarkable job of attracting attention to itself over two short years. The project now lists over 150 participating companies, including major players like Intel, Dell, HP, IBM and Yahoo.
  • The OpenStack project as a whole is designed to "deliver a massively scalable cloud operating system". To achieve this, each of the constituent services is designed to work with the others to provide a complete Infrastructure as a Service (IaaS). This integration is facilitated through the public application programming interfaces (APIs) that each service offers (and in turn can consume). While these APIs allow each service to use another, they also allow an implementer to switch out any service as long as the API is maintained. These are (mostly) the same APIs that are available to end users of the cloud.
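The "switch out any service as long as you maintain the API" idea can be sketched in a few lines; `ObjectStore` and both backends below are hypothetical stand-ins, not real OpenStack classes:

```python
# Sketch: services interact only through an agreed API, so any
# implementation can replace another without touching consumer code.

class ObjectStore:
    """The agreed API: anything implementing these two calls is a valid service."""
    def put(self, name: str, data: bytes) -> None: raise NotImplementedError
    def get(self, name: str) -> bytes: raise NotImplementedError

class SwiftLikeStore(ObjectStore):
    """Stand-in for the default object-storage service."""
    def __init__(self): self._objects = {}
    def put(self, name, data): self._objects[name] = data
    def get(self, name): return self._objects[name]

class ThirdPartyStore(ObjectStore):
    """A replacement with different internals but the same API."""
    def __init__(self): self._blobs = {}
    def put(self, name, data): self._blobs["blob:" + name] = data
    def get(self, name): return self._blobs["blob:" + name]

def snapshot(store: ObjectStore) -> bytes:
    """Consumer code (e.g. an imaging service) sees only the API."""
    store.put("image-1", b"disk-image-bytes")
    return store.get("image-1")

# Both implementations satisfy the same consumer, unchanged:
assert snapshot(SwiftLikeStore()) == snapshot(ThirdPartyStore())
```

The same contract-over-implementation principle is what lets OpenStack deployers mix and match services, and what a stable API like OCCI offers across whole clouds.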
 
We're using OpenStack not only for R&D projects (partially funded by the European Commission's programmes, such as "RealCloud" and "CloudSpaces") but also for doing business. There are a lot of good reasons for it, all of them important, but if I had to choose only one right now, I'd say "INTEROPERABILITY" (since the market is still too young to speak of standardization).
 
Without doubt, security is the most important Cloud risk, but not the only one (see my Spanish post ¿"Nubarrones" en la Nube?, whose title means something like "Are dark clouds over the Cloud?"), and it's starting to fade and lose importance(1) relative to others. In fact, as time goes by, new risks are becoming more important for the CSOs of big companies (75% of whom "are confident in the security of their data currently stored in the cloud", according to a recent VMware report) that are already using Cloud services (in no particular order):
  • SLAs,
  • portability,
  • vendor lock-in,
  • standardization,
  • learning curve,
  • integration,
  • change management,
  • and so on.
 
And, as shown above, at least 3 or 4 of them are about interoperability: how can we move our application (the one we use to offer Cloud SaaS services to internal or external customers) from one IaaS provider to another; how can we combine services from different IaaS providers; and how can we avoid becoming captive to any vendor? In fact, the lack of interoperability between Cloud services generates what is known as vendor lock-in: the best decision made now may later leave a customer trapped with an obsolete provider, simply because the cost of switching from one provider to another is prohibitively expensive. In summary, pricing, reliability, geographic location and compliance can vary between Cloud providers. Moreover, business requirements will evolve over time, necessitating the ability to move between clouds, whether public to private, private to public, or between public cloud providers.
 
In fact, according to the EU Commissioner for the Digital Agenda, one of the most relevant policy actions to be included in the European Cloud Computing Strategy, in order to create a "cloud friendly and proactive environment" in the EU, is "Promoting Standardization and Interoperability". This is stated both in the "Quantitative Estimates of the Demand for Cloud Computing in Europe and the Likely Barriers to Take-up" document (the result of a study carried out by IDC EMEA between October 2011 and June 2012 on behalf of DG Connect of the European Commission) and in the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, "Unleashing the Potential of Cloud Computing in Europe".
 
Furthermore, I also need to remind you that there are different ways of migrating an existing application to the cloud, or of creating a new one, as the initial step toward offering your customers (either internal, i.e. inside your company, or external, i.e. as a public service) a new SaaS service. In a previous post (written in Spanish: "Migrando aplicaciones a la nube") I analyzed how some of these ways build fake SaaS services, while others build real SaaS services (i.e. services that comply with NIST's Cloud definition); among the latter there are two main approaches:
  • In the first approach, the application (that will support the SaaS) is built directly on an IaaS platform, taking advantage of the IaaS APIs to let the application control the cloud infrastructure by itself, managing the underlying resources, asking for more resources when needed and releasing those that are not used (for example, when it detects that the number of concurrent users is increasing, it asks for and gets more computing resources: virtual servers, storage, and so on);
  • In the second, the application is built on a PaaS platform, where it will also run.
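A minimal sketch of the first approach, with `provision`/`release` reduced to print statements standing in for real IaaS API calls, and an assumed capacity of 100 users per virtual server (both names and the capacity figure are my own illustration, not from any specific platform):

```python
# Sketch: the application itself watches load and asks the IaaS API
# to grow or shrink its pool of virtual servers.

USERS_PER_SERVER = 100  # assumed capacity of one virtual server

def servers_needed(concurrent_users: int) -> int:
    """Ceiling division: enough servers for the current load, minimum one."""
    return max(1, -(-concurrent_users // USERS_PER_SERVER))

def rescale(current_servers: int, concurrent_users: int) -> int:
    """Decide how many servers to request (or release) via the IaaS API."""
    target = servers_needed(concurrent_users)
    if target > current_servers:
        print(f"provision {target - current_servers} more server(s)")
    elif target < current_servers:
        print(f"release {current_servers - target} server(s)")
    return target

assert rescale(2, 350) == 4   # load grew: ask the IaaS for 2 more VMs
assert rescale(4, 120) == 2   # load dropped: release unused VMs
```

The key point of the first approach is that this loop lives inside (or beside) the application and speaks the provider's IaaS API directly, which is exactly where an interoperable API pays off.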
 
Finally, getting to the point, no one doubts that Amazon is the main reference, competitor and innovator in Cloud Computing: Amazon's cloud services (Amazon EC2, Amazon S3, and so on) keep growing, both as bare IaaS services and through client applications that call Amazon's APIs; but in the latter case the new (or redesigned) application becomes captive to Amazon. Other proprietary solutions have the same problem: for example, virtualization specialist VMware is increasingly helping its customers transform their corporate data centers into mini clouds, powered (of course) by VMware's software, and if you use its APIs to develop a real Cloud SaaS, you will probably become captive to them.
 
Someone could argue that this is why PaaS appeared. In fact, PaaS providers are growing their business as their platforms reach more clients; however, applications become captive of those platforms too (Salesforce.com's Force.com, Google's App Engine, Microsoft's Windows Azure). Of course, there are open PaaS initiatives, such as Cloud Foundry and its Cloud Foundry Core, a program designed to preserve cloud application portability (CFC is a baseline of common capabilities for the components of a PaaS offering that applications depend on; these capabilities include runtimes and services built with open development frameworks and technologies such as Java, Ruby, MongoDB, MySQL, PostgreSQL, etc., that developers can use to build portable applications), but at the moment they are still immature and lack real industry support, an important point for guaranteeing interoperability (besides, Cloud Foundry, as stated above, is a PaaS platform, while OpenStack is an IaaS platform).
 
So, how can companies migrate existing applications, or develop new ones in the Cloud, without becoming captive to one platform? Of course, the answer is interoperability: customers will then be able to easily move their applications from one OpenStack-based IaaS provider to another without having to alter their own Cloud-designed programs. They can even download a copy of OpenStack to run inside their own data center, which (in principle) makes it feasible to move computing jobs from a private data center to commercial clouds and back again at will. Moreover, with OpenStack, hybrid cloud construction becomes easy.
 
For some customers, this portability might be critical to the way they run their IT. For others, it's simply an insurance policy: a comforting demonstration that they can move, even though they think they won't need to (can they be sure, in this fast-changing IT world?). TISSAT, as an IaaS Cloud provider, wants to offer that portability-based freedom to its customers, and that's one of the main reasons we use OpenStack.
 
However, OpenStack is far from alone in providing open cloud infrastructure: those in need of open source cloud infrastructure could also turn to Eucalyptus, OpenNebula, CloudStack and others; but among them, OpenStack is my favorite open Cloud platform. Why? I promise to give my reasons in one of my next posts, comparing it against the other open platforms.
 
Note (1): Of course, Cloud security concerns still remain high, but they are shifting from infrastructure-technology-based security risks to law- and standards-compliance-related subjects, which shows there is higher confidence in Cloud Service Providers and the solutions they use to offer their services. More details in the post "An infographic about Security and other Cloud Barriers".

Wednesday, January 16, 2013

Smart City services in SaaS mode: 2nd part of the enerTIC interview

Today I'm posting the second part of the interview that enerTIC conducted a few days ago with our Commercial Director, Carmen García, whose first part I published in my previous post. This part of the interview deals with the Smart City services that Tissat is offering in SaaS mode:

5. Tissat has recently launched a new Smart City platform that offers SaaS services to city councils. What kind of local governments is it aimed at? What services does it offer?
The service is aimed at any town that takes the initiative to offer valuable services to its inhabitants. However, its characteristics make it especially interesting for municipalities of 15,000 inhabitants or more: the larger the town, the greater the opportunity to deliver savings and valuable utilities to its citizens.
The new SaaS offering provides, on top of Tissat's Green Data Center (Walhalla, Tier IV certified), citizen-portal and citizen-participation solutions, a collaborative information system, geopositioning services, 'open data' and 'big data mining', all with a high level of customization.
 
6. What advantages can city councils obtain compared to an in-house solution?
Our solutions move the cost of owning the service out of the municipal budget, so the investment is amortized at the same pace at which it delivers value to citizens. They also make it possible to share operating costs with other municipalities and companies, and to secure the environment with the highest certification available on the market. They likewise help municipalities benefit from other towns' experience in applying technology to local processes, projects and relationships, while providing an open, scalable environment that eases the evolution and improvement of the system without depending on previous amortizations or large investments.
As a differentiating value, our offering is based on an approach that turns the city into a platform for the social and economic development of its inhabitants, both citizens and organizations; it delivers the Smart City service as SaaS, even for customized solutions; and it also allows municipalities to find new sources of funding other than raising taxes.
 
7. What does the service contribute, within the general role of ICT, to achieving higher levels of energy efficiency?
Energy savings are built in from the start, since the platform is hosted in Tissat's data center, the only commercially available Tier IV facility in Southern Europe, and an eco-efficient one: it produces its own energy through 'trigeneration', using gas engines in this first phase and hydrogen fuel cells in a later one.
 
Walhalla key facts
  • Tier IV certification. The first Tier IV certified by the Uptime Institute in Southern Europe.
  • Sustainable building. Power Usage Effectiveness (PUE) < 1.15.
  • 'Green' energy. Energy production with 'trigeneration' and 'free cooling', plus a quintuple-redundant system.
  • Overhead distribution of power, data and cooling.
  • Holistic, intelligent control of the DC: DCIM, i.e. integration between IT management systems and the power infrastructure. It runs the first Spanish software designed to manage data centers according to ISO 20000, ISO 27000, ISO 50001 and PAS-55.
  • Awards. Winner at the Datacenter Leaders Awards 2010 as the leading medium-sized datacenter in Europe.
  • Eco-efficient center for experimentation in advanced communication technologies. Project co-funded by the Ministry of Science and Innovation under the 2008-2011 National Plan for Scientific Research, Development and Technological Innovation and the European Regional Development Fund (ERDF), file number PCT-430000-2009-31.
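For reference, PUE (Power Usage Effectiveness) is simply the ratio of total facility energy to IT equipment energy, so the claimed PUE < 1.15 means less than 15% overhead on top of the IT load. The kWh figures below are made-up illustration values, not Walhalla measurements:

```python
# PUE = total facility energy / IT equipment energy (dimensionless, >= 1).

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(1150.0, 1000.0))  # 15% overhead: at Walhalla's claimed ceiling
print(pue(2000.0, 1000.0))  # 100% overhead: a typical legacy data center
```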

Thursday, January 10, 2013

"Energy Efficiency is an essential competitive factor in the outsourcing market": enerTIC's interview with Carmen García, Director of Tissat Madrid.

Carmen García



Today I'm excerpting part of the interview that enerTIC has just conducted with our Commercial Director, Carmen García, whose overall conclusion could be summed up as follows:

"Energy Efficiency is an essential competitive factor in the outsourcing market"

In line with our motto, "The Mission Critical Outsourcing Company", Energy Efficiency is a priority objective for Tissat. This commitment is evident in its Walhalla data center, an eco-efficient, very-high-performance services center for the ICT market. The company is also helping city councils advance toward the 'Smart City' concept by providing Smart City services in SaaS (Software as a Service) mode.
 
 
1. As a company devoted to mission-critical outsourcing, with a high-capacity data center, how important is Energy Efficiency to Tissat?
It is essential. Competitiveness in the outsourcing field is closely tied to energy efficiency in the data center. Moreover, awareness of these issues keeps growing, and it has become common for customers to request studies of how outsourcing to our data center improves their company's environmental impact.
 
2. How does the company's Environmental and Energy Quality Plan address Energy Efficiency? What has been achieved?
The company has bet on 'trigeneration' as the essential means of obtaining energy in a secure, economically profitable and environmentally effective way, since it reduces CO2 emissions into the atmosphere compared with a conventionally cooled data center. In the same vein, we have also adopted the 'free cooling' refrigeration method. Thanks to this approach we have eliminated the electricity bill of the air-conditioning system. Tissat has also implemented lighting controls and reactive-power control through capacitor banks. The next objectives are generation from fuel cells, to achieve even higher levels of electrical and thermal efficiency.
At Tissat, on the other hand, we also pay special attention to staff training in this field in order to meet the targets set: throughout the year, energy training courses are constantly given to all newly hired staff, with continuous training for existing staff.
 
3. What does Walhalla represent, comparatively, in Energy Efficiency at the national level? What certifications does it hold?
Walhalla is the first data center with its own energy plant. This earned it the prestigious 'Innovation in the Medium Data Centre' award from the Data Centre Leaders Awards. It has also obtained the following certifications: ISO 50001 Energy Efficiency, ISO 14001 Environment, ISO 20000 ICT Service Management, ISO 27001 Security, ISO 9001 Quality, UNE 166002 R&D&i, and CMMI-L2.
 
4. What systems and technologies are you using to achieve greater Energy Efficiency? Do you maintain alliances with other organizations in this field?
For electricity production we use 'cogeneration' technology based on a turbocharged 'Otto' cycle with high mean effective pressure. For cooling we use the 'absorption' cycle with a dual design of our own, which harnesses both the engines' exhaust gases and their cooling water, raising the COP from the conventional 0.7 to 1.05 at Walhalla. We also have strategic alliances in the field of Energy Efficiency development with the multinational firm MTU. In addition, Walhalla uses low-consumption ICT equipment and applies cloud and virtualization techniques that reduce both the hosting space needed and overall consumption.
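As a quick check on the figures quoted above: a chiller's COP (coefficient of performance) is cooling output divided by energy input. The kW values below are illustrative, not measured Walhalla data:

```python
# COP = useful cooling output / energy input driving the chiller.

def cop(cooling_output_kw: float, heat_input_kw: float) -> float:
    return cooling_output_kw / heat_input_kw

conventional = cop(70.0, 100.0)    # 0.70: single-effect absorption chiller
walhalla_dual = cop(105.0, 100.0)  # 1.05: dual design recovering more heat
print(conventional, "->", walhalla_dual)
```

Going from 0.7 to 1.05 means 50% more cooling delivered per unit of recovered heat, which is what makes the dual exhaust-plus-cooling-water design worthwhile.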
 
The rest of the interview can be read at this link (or you can wait for my next post).
 

Thursday, January 3, 2013

An infographic about Security and other Cloud Barriers

I know that Cloud security, as well as other important Cloud barriers, is a subject already treated in this blog several times (e.g. here, here and here), but it is in fact a recurring theme in the current Cloud Computing arena. So I decided to reproduce this infographic on the subject.

But I want to emphasize 2 points (since they back my opinion):
    1. On the one hand, some concerns about public cloud security are starting to fade, as shown in this infographic when comparing the results of the 2012 survey with the 2011 one. Of course, security concerns still remain high, but they are shifting from infrastructure-technology-based security risks to law- and standards-compliance-related subjects, which shows a higher confidence in Cloud Service Providers and the solutions they use to offer their services.
    2. On the other hand, other concerns are growing, as also shown in the picture (compliance, loss of control, complexity, and so on); however, in my opinion, two of the most important are missing from the survey this infographic summarizes: lack of standardization (and its several consequences, such as vendor lock-in and lack of interoperability) and weak SLAs.

Note: about SLAs, I should especially mention the one related to service unavailability since, according to the International Working Group on Cloud Computing Resiliency, a cloud service is typically down for an average of 7.5 hours a year. And yes, I know that availability and other SLA measures are security properties (see my post titled "Cloud SLAs: a technical point of view").
 
(Note: the source of this infographic can be found here: http://www.andrewhay.ca/archives/2224).
 
Cloud_security_Survey_2012-infographic