Tuesday, July 1, 2014

Tissat recognized as one of the best “EU Code of Conduct for Data Centres” practitioners by the European Commission


I’m proud to announce that last month (on May 28) Tissat received the European Commission’s annual award as one of the best practitioners of the “EU Code of Conduct for Data Centres”, for its data centre “Walhalla” in Castellón, Spain.
 
 
This award is the result of the research and development activities and projects carried out by Tissat in the data-centre energy-efficiency arena: from those partially funded by Spanish government agencies (“Green DataCenter”, “CPD Verde” and “RealCloud”) to others partially funded by the European Commission (such as “CloudSpaces”).
 
Picture of the European Commission Award

 

Thursday, May 8, 2014

Gathering R&D results: inauguration of the new Emergency Management System (112) of the Region of Murcia, based on TISSAT’s ECHO platform


Yesterday the new Emergency Management System (112 line) of the Region of Murcia (112-RM) was officially inaugurated. It runs on the ECHO platform (Emergencias Control Holístico Operativo) developed by Tissat, the result of our R&D work of recent years in this field. It is worth stressing that ECHO costs a quarter of the price of other comparable platforms, and its maintenance costs are roughly half.
 

Links to this news can be found in several newspapers and media outlets (in Spanish):


 

For my part, let me highlight the following:
  • The Regional Minister of Presidency and Employment, José Gabriel Ruiz, stressed that “it is the first centre in Spain capable of integrating with other, similar emergency-management operational platforms, such as that of the Military Emergencies Unit (UME)”.
  • The Region of Murcia, the UME and the Navy will carry out a forest-fire emergency response drill tomorrow at the La Algameca Naval Station.
  • The ECHO technological platform was developed by TISSAT and can grow according to the needs of the 1-1-2 service.
  • ECHO costs a quarter of the price of other products developed by specialized companies, and its maintenance costs are roughly half.
 
 

Sunday, April 20, 2014

The “Icehouse” release of OpenStack has just been delivered

This post is just a reminder that, as foreseen, a couple of days ago (Thursday, April 17) the new version of OpenStack, named Icehouse, was released.
 
As Stefano Maffulli notes in his e-mail to the OpenStack community, IT IS THE RESULT OF THE EFFORT OF 1,202 PEOPLE, from 120 organizations, who contributed to its development.
 
Approximately 350 new features have been added (rolling upgrades, federated identity, tighter platform integration, etc.), but in my opinion the most significant change is that the “OpenStack Database Service” (Trove), which was incubated during the Havana release cycle, is now available.
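To make the Trove point more tangible, here is a minimal sketch of provisioning a database through its REST API. It uses Python’s `requests`; the endpoint, tenant id, token, flavor and volume size are all placeholders, and the exact payload accepted may vary between deployments, so treat it as an illustration rather than a recipe:

```python
import json
import requests

# Placeholders: substitute your own Trove endpoint, tenant id and a valid
# Keystone token. The flavor and the 2 GB volume are illustrative values.
TROVE = "http://cloud.example.com:8779/v1.0/<tenant-id>"
TOKEN = "<keystone-token>"

body = {"instance": {
    "name": "my-database",
    "flavorRef": "<flavor-id>",        # compute flavor backing the instance
    "volume": {"size": 2},             # block-storage volume, in GB
    "databases": [{"name": "sampledb"}],
    "users": [{"name": "demo", "password": "secret",
               "databases": [{"name": "sampledb"}]}],
}}

# One call asks the Database Service to build the instance, attach the
# volume, set up the database engine and create the database and user.
resp = requests.post(TROVE + "/instances", data=json.dumps(body),
                     headers={"X-Auth-Token": TOKEN,
                              "Content-Type": "application/json"})
print(resp.status_code, resp.json())
```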
 
Other programs still in incubation (already under development during Icehouse) are Sahara (OpenStack Data Processing, i.e. provisioning a Hadoop cluster on OpenStack), Ironic (OpenStack Bare Metal as a Service) and Marconi (OpenStack Messaging), and we hope they go live in the next release of OpenStack, code-named Juno, foreseen in six months.
 
At Tissat we have been testing the latest beta versions and they look great, and we are starting to plan the migration of our LIVE environment.
 
Quoting from the official press release, these are the main features (module by module):
  • OpenStack Database Service (Trove): A new capability included in the integrated release allows users to manage relational database services in an OpenStack environment.
  • OpenStack Compute (Nova): New support for rolling upgrades minimizes the impact to running workloads during the upgrade process. Testing requirements for third-party drivers have become more stringent, and scheduler performance is improved. Other enhancements include improved boot process reliability across platform services, new features exposed to end users via API updates (e.g., target machines by affinity) and more efficient access to the data layer to improve performance, especially at scale.
  • OpenStack Object Storage (Swift): A major new feature is discoverability, which dramatically improves workflows and saves time by allowing users to ask any Object Storage cloud what capabilities are available via an API call (see the short sketch after this list). A new replication process significantly improves performance, with the introduction of ssync to transport data more efficiently.
  • OpenStack Block Storage (Cinder): Enhancements have been added for backend migration with tiered storage environments, allowing for performance management in heterogeneous environments. Mandatory testing for external drivers now ensures a consistent user experience across storage platforms, and fully distributed services improve scalability.
  • OpenStack Networking (Neutron): Tighter integration with OpenStack Compute improves performance of provisioning actions as well as consistency with bulk instance creation. Better functional testing for actions that require coordination between multiple services and third-party driver testing ensure consistency and reliability across network implementations.
  • OpenStack Identity Service (Keystone): First iteration of federated authentication is now supported allowing users to access private and public OpenStack clouds with the same credentials.
  • OpenStack Orchestration (Heat): Automated scaling of additional resources across the platform, including compute, storage and networking is now available. A new configuration API brings more lifecycle management for applications, and new capabilities are available to end-users that were previously limited to cloud administrators. Collaboration with OASIS resulted in the TOSCA Simple Profile in YAML v1.0, demonstrating how the feedback and expertise of hands-on OpenStack developers can dramatically improve the applicability of standards.
  • OpenStack Telemetry (Ceilometer): Improved access to metering data used for automated actions or billing / chargeback purposes.
  • OpenStack Dashboard (Horizon): Design is updated with new navigation and user experience improvements (e.g., in-line editing). The Dashboard is now available in 16 languages, including German, Serbian and Hindi added during this release cycle.
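To make the Swift “discoverability” feature concrete (as promised in the list above), here is a tiny sketch of asking a cluster what it supports; the endpoint is a placeholder, and the `/info` call is served by the proxy without authentication in a default configuration:

```python
import json
import requests

# Placeholder: the proxy-server endpoint of any Swift cluster.
SWIFT = "http://swift.example.com:8080"

# GET /info returns a JSON map of the capabilities the cluster advertises:
# core limits (e.g. max object size) plus the optional middleware enabled.
caps = requests.get(SWIFT + "/info").json()
print(json.dumps(caps, indent=2))
print("max object size:", caps["swift"]["max_file_size"])
```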
 
And these are other interesting links:

Tuesday, April 1, 2014

Separate NTT and Peer 1 surveys reveal changes in cloud buying patterns after the disclosure of NSA activities

Around November 2013 I wrote in this blog a three-post series about one of the consequences of the NSA espionage, as the data disclosed by Snowden were being made public and analyzed; the posts were titled Personal Data Privacy & (Europe’s) Cloud Regulation: (I) The dilemma, (II) The privacy approach and (III) Resignation??. In those posts I drew my own conclusions:
  • Finding the balance between personal data privacy and business regulation is key, and it’s not easy to solve this dilemma; it’s even harder when the business is built around a technology like the cloud, where free movement of data is intrinsic and one of its advantages.
  • Data security must be improved (the use of strong encryption that can protect user data from all but the most intense decryption efforts; see the toy sketch after this list).
  • Finally, another worrying reflection is that the NSA has shown it is subject to the same risks of data loss (no matter the means) as any other business, and Snowden is certainly not the only one who had access to other people’s private data.
  • I understand better why (although very slowly) the European Commission wants to regulate some related subjects more strictly, despite the fact that such measures may have a negative impact on both business and innovation.
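As a toy illustration of the encryption point in the list above, here is a minimal Python sketch (using the third-party `cryptography` package; the scheme and names are mine, purely for illustration) in which data is encrypted on the client before it is handed to any cloud provider, so the provider only ever stores ciphertext:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key never leaves the customer; the provider sees only ciphertext.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: strictly private")
# ... upload `ciphertext` to the cloud store of your choice ...

# Only the key holder can recover the plaintext after downloading it back.
assert f.decrypt(ciphertext) == b"customer record: strictly private"
```

Of course, strong client-side encryption restricts what the provider can do with the data (indexing, deduplication, server-side processing), which is part of the dilemma discussed in the original series.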

Now it has been found that the NSA activities did change cloud buying patterns, according to separate surveys from NTT and Peer 1.
 
On one hand, a survey conducted by NTT Communications (titled “NSA Aftershocks: How Snowden has changed IT decision-makers’ approach to the cloud”) shows the consequences of the NSA activity mainly in the US, but also in the UK, France, Germany and Hong Kong. From my point of view, the main findings are:
  • Almost nine tenths of ICT decision-makers are changing their cloud buying behaviours in the wake of Edward Snowden’s cyber-surveillance allegations.
  • Only 5% of respondents believe location does not matter when it comes to storing company data.
Please let me quote (and extract) this 9-point summary of the report conducted by NTT Communications, according to the press release published by NTT itself:
1) 88% of ICT decision-makers are changing their cloud buying behaviour, with 38% amending their procurement conditions for cloud providers
2) Only 5% of respondents believe location does not matter when it comes to storing company data
3) 31% of ICT decision-makers are moving data to locations where the business knows it will be safe
4) 62% of those not currently using cloud feel the revelations have prevented them from moving their ICT into the Cloud
5) ICT decision-makers now prefer buying a cloud service which is located in their own region, especially EU respondents (97%) and US respondents (92%)
6) 52% are carrying out greater due diligence on cloud providers than ever before
7) 16% are delaying or cancelling contracts with cloud service providers
8) 84% feel they need more training on data protection laws
9) 82% of all ICT decision-makers globally agree with proposals by Angela Merkel for separating data networks
Note: The survey questioned 1,000 ICT decision makers on their approach to the Cloud, and took responses from decision-makers in France, Germany, Hong Kong, the UK and the US.
 
On the other hand, Peer 1 surveyed 300 companies about storing data in the US to analyze the effects of the NSA activity (after the Snowden revelations), and found the following (let me append to the previous NTT list):
10) 25% of UK and Canadian IT decision-makers said they had made plans to move company data outside of the US.
Note: See this DataCenter Dynamics news item for more details.
 
Finally, the change of policy at Microsoft some months after the Snowden scandal is well known:
11) Following the scandal, in January Microsoft allowed its foreign customers to have their personal data stored on servers outside of the US.
 
Coming back to the NTT report, its Vice President of Product Strategy in Europe, Len Padilla, said the results show the NSA allegations have changed ICT decision-makers’ attitudes towards cloud computing and where data is stored.
He said, however, that decision-makers need to keep in mind the benefits the cloud can bring to business services, adding:
“Despite the scandal and global security threat, business executives need to remember that cloud platforms do help firms become more agile, and do help foster technology innovation, even in the most risk-averse organizations”
“ICT decision-makers are working hard to find ways to retain those benefits and protect the organization against being compromised in any way. There is optimism that the industry can solve these issues through restricting data movement and encryption of data”.

Wednesday, March 12, 2014

TISSAT SUCCESSFULLY RENEWS THE CERTIFICATION OF ITS INTEGRATED MANAGEMENT SYSTEM

The company remains a pioneer in standards certification since 2009, the year it became the first Spanish company to be certified in Service Management under the ISO 20000 standard.
 
Tissat, March 10, 2014. - Tissat has once again renewed its Integrated Management System, made up of the following standards: Quality Management UNE-EN ISO 9001, Service Management UNE-ISO/IEC 20000, Security Management UNE-ISO/IEC 27001, Environmental Management UNE-EN ISO 14001, Energy Management UNE-EN ISO 50001, R&D&i Management UNE 166002, and Energy Sustainability (SECPD EA:0044).
 
These standards cover both the environmental and the energy management of its data centres, as well as security management and the operation of housing/hosting, connectivity, storage and backup, contact-centre, e-mail and disaster-recovery (DRP) services, both in its Paterna data centre and in its Castellón data centre, the latter certified as TIER IV in design by the Uptime Institute within the TIA-942 framework.
 
“The ultimate success of certification in such disparate standards for a mission-critical outsourcing organization like Tissat, together with the other certifications it already holds (CMMI L2, EU Code of Conduct on Data Centres), places it in a very advantageous and competitive position within the ICT field, Tissat being one of the companies with the most certifications to date,” says Carmen García, Director of Tissat Madrid.
 
Tissat has been committed to innovation and quality since its beginnings. In fact, it remains a pioneer in standards certification: in 2009 it became the first Spanish company to be certified in Service Management under the ISO 20000 standard, and in 2012 it again became a benchmark company by being the first to certify both of its data centres under the AENOR Energy Sustainability standard (SECPD EA:0044).

Sunday, January 26, 2014

Virtualization vs. Cloud Computing (III): business differences, plus other technical ones, and conclusions

Once again let me start by recalling that this post refers to the scope and context defined in the first post of this series (titled “Virtualization vs Cloud Computing (I): what are we going to compare?”), although at the end of this post we’ll widen it.
 
Besides, as a summary, in the two previous posts we concluded:
  • Virtualization is an enabling technology for Cloud Computing, one of the building blocks “generally” used for building Cloud Computing solutions (“generally” but not always, because nowadays the cloud is starting to use other technologies away from pure virtualized environments in order to offer “Bare Metal as a Service”…).
  • The services provided by a “Cloud Computing Management Environment” and by a “Virtualization Management Environment” ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED: both the self-service characteristic and the location-independence feature (in the sense of no location knowledge) are the main differences, but in some cases (depending on the platform) so is massive scale-out.
 
Coming to the subject, another technological point of comparison is that a Virtualization Management Environment (at present, almost none of them) does not offer the user real-time knowledge of how long the VM has been in use, or other metrics of the service (“measured service” is an essential Cloud characteristic); or maybe the user can get that information, but not in a friendly way. The main reason is the different business models they were “initially” conceived for:
  • Virtualization was born to take advantage of unused resources in a physical machine, solving several problems that appeared in scaled-out servers: different physical characteristics for different cluster subsets (after one or more expansions), coarse granularity in resource assignment that led to unused resources, and security issues when applications from competing companies were run on the same physical machine. However, although virtualization makes it possible to exploit unused resources in a secure way, in practice it led traditional data-centre service providers to move (or add to) their (physical) “hosting” business model a “virtual hosting” model with lower prices, while the billing model stayed the same: in general the customer is billed just as for physical hosting, with a fixed monthly rate whose cost is proportional to the power (performance) of the VM and the associated resources contracted, regardless of the real usage the customer makes of the virtual machine.
  • Cloud Computing was born to “allow” a real pay-per-use model. For this reason the self-service feature is as important as the ability to turn the VM on or off whenever the customer wants, because (s)he doesn’t pay for the standstill period (a toy numeric comparison follows this list). On this subject, please note that the technological Cloud Computing concept only requires that services be metered (and that this information be continuously available to the customer), which allows the provider to bill for real usage, but doesn’t make such billing mandatory.
  • Of course, the two business models mentioned above are the two extremes of a broad market and represent the “pure” business models; today there are several intermediate, hybrid business models: for example, cloud-computing-based models that offer discounts if you contract for a long fixed period, or that offer a lower price per hour if you pay a monthly fee (one of the Amazon options), or purely technological Virtualization Management Environments that offer a pay-per-use business model, and so on. Amazon (the great Cloud innovator) is a good example: “Reserved Instances” give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly charge for that instance (there are three Reserved Instance types: Light, Medium and Heavy Utilization, which let you balance the amount you pay upfront against your effective hourly price); they also offer volume discounts, “Spot Instances”, and so on.
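As promised above, here is a back-of-the-envelope comparison of the two “pure” billing models; every figure is invented, and the point is only that pay-per-use rewards workloads with idle periods:

```python
# Toy comparison of flat-rate virtual hosting vs. metered pay-per-use.
# All prices are invented for illustration.

flat_monthly_fee = 120.00   # EUR/month, billed whether the VM is used or not
hourly_rate = 0.25          # EUR/hour, billed only while the VM is running

hours_per_day = 10          # VM switched off outside business hours
working_days = 22           # per month

metered_cost = hourly_rate * hours_per_day * working_days
print("flat rate:   %6.2f EUR/month" % flat_monthly_fee)  # 120.00
print("pay-per-use: %6.2f EUR/month" % metered_cost)      #  55.00
# Pay-per-use wins here because the standstill period is free; for a
# 24x7 workload (0.25 * 24 * 30 = 180 EUR) the flat rate would win instead.
```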
 
Finally, concerning the comparison points in the initial (reduced) scope defined, new customer needs are emerging: deploying applications on physical servers as well as on virtual servers, while keeping all the cloud-model advantages (and essential characteristics). That’s the case, for example, when your application requires physical servers, or when your production environment is too performance-sensitive to run in VMs. Actually, an environment doesn’t need to be virtualized to be considered a cloud environment: your “virtual” instance might be a “container”, which is not virtualized but runs on bare metal (just sharing it with other containers), or it might even run directly on, and fully use, the bare metal. “Containers”, as mentioned, are considered by some authors a sort of virtualization, so let me present an example of the latter case: OpenStack is currently developing the new “Ironic” module, which will provide “Bare Metal as a Service”, making it possible to use the same API and the same pipeline to build, test and deploy applications on both virtual and physical machines (see the sketch below). Therefore, cloud technology is starting to move beyond purely paravirtualized environments.
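As a sketch of that “same API” idea, here is the standard Nova boot call (endpoint, ids and token are placeholders); with Ironic in place, pointing `flavorRef` at a bare-metal flavor would provision a physical node through this very same request:

```python
import json
import requests

# Placeholders: Nova endpoint, tenant id, Keystone token, image/flavor ids.
NOVA = "http://cloud.example.com:8774/v2/<tenant-id>"
TOKEN = "<keystone-token>"

server = {"server": {
    "name": "app-node-1",
    "imageRef": "<image-uuid>",    # deploy image registered in Glance
    "flavorRef": "<flavor-id>",    # a bare-metal flavor here would map,
}}                                 # via Ironic, to a physical machine

resp = requests.post(NOVA + "/servers", data=json.dumps(server),
                     headers={"X-Auth-Token": TOKEN,
                              "Content-Type": "application/json"})
print(resp.status_code, resp.json())
```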
 
We initially limited the scope of this comparison to “compute as a resource”, but if we slightly widen that context to include (as usual) any computing-related resources, i.e. storage and communications resources, then new differences arise (depending on the solution used for building the Cloud Management Environment and the Virtualization Management Environment):
  • Most (but not all) Virtualization Management Environments offer only compute and block-storage services; they usually do not offer Object Storage as a Service. Besides, they tend to offer “Storage Virtualization” (SV, i.e. capacity is separated from specific storage hardware resources) but not “Software Defined Storage” (SDS), which differs from SV in that not only capacity but also services are separated from the storage hardware.
  • Moreover, almost none of them (Virtualization Management Environments) offers communications management as a service. I mean not only virtual networks, but also the main communications devices provided as a service: routers, firewalls, load balancers and so on. Furthermore, “Software Defined Networking” (SDN) is a technology that, as far as I know, is currently used only in Cloud Computing Environments, where this kind of service is starting to be offered. Of course, some Virtualization Environments offer this kind of communication service, but not in a self-service way where you can define your own internal communications layout and structure, and so on, e.g. as shown in the next picture (taken from the topology designed by a customer using TISSAT’s Cloud Platform built on OpenStack); a small API sketch follows the picture:
TISSAT’s IaaS Cloud Platform
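As a small illustration of that self-service networking (the sketch promised above; endpoint and token are placeholders), this is roughly how a tenant defines its own network, subnet and router through the Neutron REST API, with no operator involved:

```python
import json
import requests

# Placeholders: Neutron endpoint and a valid Keystone token.
NEUTRON = "http://cloud.example.com:9696/v2.0"
HDRS = {"X-Auth-Token": "<keystone-token>",
        "Content-Type": "application/json"}

def post(path, body):
    """POST a JSON body to Neutron and return the decoded response."""
    return requests.post(NEUTRON + path, data=json.dumps(body),
                         headers=HDRS).json()

# A private network and subnet, defined entirely by the tenant.
net = post("/networks", {"network": {"name": "front-net"}})
subnet = post("/subnets", {"subnet": {
    "network_id": net["network"]["id"],
    "ip_version": 4,
    "cidr": "10.0.0.0/24"}})

# A router, also tenant-defined, attached to the new subnet.
router = post("/routers", {"router": {"name": "front-router"}})
requests.put(NEUTRON + "/routers/%s/add_router_interface"
             % router["router"]["id"],
             data=json.dumps({"subnet_id": subnet["subnet"]["id"]}),
             headers=HDRS)
```

Firewalls and load balancers follow the same pattern through the FWaaS and LBaaS extensions, where the deployment enables them.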
 
 
At the end of this three-post series, as a summary, three conclusions:
  1. The technological concepts (virtualization and cloud computing) should not be confused with the pure business models they were initially intended for: virtual hosting (a fixed monthly rate, lower than physical hosting) and pay-per-use (which some people call the Cloud Computing business model), respectively. And don’t forget that at present there are a lot of mixed business models, regardless of the underlying technology.
  2. Both virtualization and cloud computing allow you to do more with the hardware you have by maximizing the utilization of your computing resources (and therefore, if you are contracting the service, you can expect lower expenses). However, although there is currently an inevitable connection between them, since the former (virtualization) is “generally” used to implement the latter (cloud), this connection could be broken soon with the rise of new technologies and innovations, and they are not the same: BOTH ARE QUITE DIFFERENT IN HOW THE SERVICES ARE PROVIDED (the self-service feature, no location knowledge, massive scale-out, even metered service in some cases) and there are some technical differences between them. Additionally, depending on the user’s needs, either one may be the better choice: for a lot of customers server virtualization is enough, and it may even be the best solution for their needs; in other cases the cloud, not virtualization, is the best solution.
  3. Although still circumscribed to IaaS (i.e. leaving aside PaaS and SaaS), when we widen the comparison scope to include (as usual) any computing-related resources (not only compute but also storage and communications), new differences arise since, for example, communications-related services (routing, firewalls, load balancing, etc.) are seldom (or never) offered as a service in Virtualization Management Environments (in a self-service way where you can define your own internal communications layout and structure, taking advantage of Software Defined Networking technology). Besides, another main difference is how Storage as a Service is provided: in a Virtualization Environment it tends to be reduced to Block Storage, not including Object Storage (as Cloud Environments do), and it is provided as Storage Virtualization rather than as Software Defined Storage.

Note: Please let me add that Tissat (the company I work for) offers all these sorts of real IaaS cloud services, as well as more traditional data-centre services (such as housing, hosting, virtualized hosting, DRP and so on), based on its Data Centre Federation (which includes Walhalla, a DC certified as Tier IV by the Uptime Institute) and using different products and solutions (currently VMware, OpenStack and so on); most of the ideas in this post series are drawn from that experience.