Is VAMP the next generation of cloud?

I have rewritten the title of this blog post at least ten times now, with everything from “replacing AWS piece by piece” to “VAMP, powering the open source wave of cloud computing”.

I don’t want this to turn into clickbait, but at the same time I believe we are on the cusp of some very interesting changes in the cloud space. So why don’t I start by explaining what VAMP is, and then we will get to why I think it might be part of the next generation of cloud computing.

Let’s start out with the intro blurb on their homepage.

Vamp, or the Very Awesome Microservices Platform, takes the pain out of running complex and critical service based architectures. Vamp’s core features are a platform-agnostic microservices DSL, powerful A-B testing/canary releasing, autoscaling and an integrated metrics & event engine.

Microservice DSL…huh?

Alright, that’s a mouthful. Let’s try to break it down. It features a platform-agnostic microservices DSL. Platform agnostic means it can run anywhere: you aren’t locked into a specific vendor like AWS or Azure, and you can move freely between platforms as needed. But what the heck is a microservices DSL? you might ask. DSL stands for domain-specific language, with HTML being one of the best-known examples. You can think of it as a mini-language for a specific thing, as opposed to a general-purpose language like Java. OK, great: it is a markup-like mini-language specific to microservices. What the heck does that mean?

If you are familiar with AWS, think about what CloudFormation provides you. CloudFormation is a templating language for provisioning AWS cloud resources, such as compute nodes, and for setting up virtual networks. It is a mini-language (DSL, hint hint) that helps you provision and orchestrate things in the virtual world of cloud. It is very cool, and very powerful: you can version control entire environments using this stuff.

However, there is one caveat. CloudFormation is vendor specific and dependent: it is inherently tied to AWS, and cannot run anywhere else. Most cloud vendors have their own version of this, for example OpenStack with Heat.

So VAMP promises us a platform-agnostic way to do CloudFormation-ish things for microservices; they call their templates blueprints.
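
To make this concrete, here is a rough sketch of what a blueprint could look like. Treat this as an illustrative mock-up: the field names below are my assumptions about the general shape of Vamp’s YAML blueprints, not verified syntax, so check the official documentation before using anything like this.

```yaml
# Hypothetical Vamp-style blueprint sketch (field names are assumptions):
# one cluster running a Dockerized frontend service at two instances.
name: sava:1.0
clusters:
  frontend:
    services:
      - breed:
          name: sava-frontend:1.0
          deployable: myorg/sava-frontend:1.0   # Docker image to run
        scale:
          instances: 2
```

The point is not the exact keys, but that the whole deployment is a piece of versionable text, just like a CloudFormation template.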

Canary releases

Let’s be honest. Just writing another DSL for orchestrating/templating cloud resources isn’t that exciting. It’s exactly what OpenStack has done. So what makes VAMP different?

Vamp ties in with your CI, and can handle A-B testing/canary releases

So VAMP actually ties in with your deployment as well, not only provisioning. It integrates with things like Mesos, ZooKeeper and HAProxy to make its magic work. The result is that you can, for example, do weighted releases of new versions on the fly. This is highly customizable, and can tie into HAProxy rule sets, so you can do things like release the new version of your app only to customers in a specific country, or to requests carrying a specific HTTP header. Say whaaat?
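
Under the hood this boils down to the kind of rules Vamp drives into HAProxy. As a hand-written sketch (this is plain HAProxy configuration I wrote for illustration, not Vamp’s actual generated output), a header-based rule plus a 90/10 weighted canary could look roughly like this:

```haproxy
frontend www
    bind *:80
    # Users presenting a specific header go straight to the new version...
    acl beta_user hdr(X-Beta) -i true
    use_backend app_v2 if beta_user
    # ...everyone else gets a 90/10 split between old and new.
    default_backend app_canary

backend app_v2
    server v2 10.0.0.2:8080

backend app_canary
    server v1 10.0.0.1:8080 weight 90
    server v2 10.0.0.2:8080 weight 10
```

Vamp’s value is generating and updating this kind of configuration on the fly as you shift traffic between versions.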

If you are familiar with AWS, this might remind you a little bit of OpsWorks, but better?

Autoscaling, integrated metrics etc…

So we are starting to see that this is kind of difficult to map to AWS, but there are some distinct similarities. We have a CloudFormation-type DSL, and we can integrate with the continuous integration pipeline with canary releases, like OpsWorks (or Elastic Beanstalk). It also promises autoscaling and integrated metrics.


In other words, we have the auto scaling functionality present in EC2, with capabilities from CloudWatch (monitoring) in there as well, all while being platform agnostic and geared towards microservices.

So it can monitor things like response times and throughput, and use these as events to kick off scaling (i.e. autoscaling) of underlying resources. It integrates with other things such as DC/OS and Marathon to actually handle the scaling part.
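
As a sketch of the idea (my own illustrative logic, not Vamp’s actual SLA engine): watch a metric, compare it to thresholds, and compute a new instance count that would then be handed to the scheduler, e.g. via Marathon’s REST API.

```javascript
// Illustrative autoscaling decision; the names and thresholds are
// assumptions, not Vamp's real implementation. The returned count
// would be applied by the underlying scheduler (e.g. Marathon).
function decideInstances(current, responseTimeMs, opts) {
  const { upper = 500, lower = 100, min = 2, max = 10 } = opts || {};
  if (responseTimeMs > upper) return Math.min(current + 1, max); // too slow: scale out
  if (responseTimeMs < lower) return Math.max(current - 1, min); // idle: scale in
  return current;                                                // within the SLA band
}

console.log(decideInstances(2, 750)); // slow responses, so scale out to 3
```

The interesting part is that the metric events come from the platform itself (response times, throughput), rather than from a vendor-specific monitoring service.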

Standing on the shoulders of giants

Docker, Mesos, ZooKeeper, HAProxy etc.

As we can see, VAMP can accomplish many things that leading cloud vendors like Amazon have spent years building. It looks extremely cool, and geared towards a more microservice-oriented architecture.

The reason such a small team has been able to build something this feature rich is that the surrounding ecosystem is expanding and maturing. We have emerging technologies such as Docker, Mesos and ZooKeeper, with cloud native thinking and open source as the common thread. Projects such as Mesosphere’s DCOS show that we are on the verge of enterprise adoption of these technologies.

Replacing AWS piece by piece?

This is where I think my initial title might have been a bit controversial. Are we seeing AWS functionality being replaced with open source projects such as VAMP and DCOS? Why not just use established technologies that companies like Amazon and Microsoft provide?

Let me be clear: I don’t think AWS or Azure is going away any time soon. They are innovating and driving the cloud community forward. I do, however, think their role will start to change as we move forward. Containerization and microservice-ification are driving a much more open and platform-agnostic architecture, and projects such as VAMP and DCOS are helping people make this a reality.

The large cloud vendors are becoming much more commoditized, acting as providers of compute, storage and network: the thing that was promised all along. But somewhere along the way we started seeing much more proprietary technology making its way into the ecosystem, and VAMP et al. are proving to be a swing back towards more open technologies.

Why use CloudFormation, CloudWatch, OpsWorks and the like, when you could build a platform-independent stack yourself using open source such as DCOS, with VAMP on top, and be completely vendor neutral going forward?

Enterprise adoption

The reason I think the vendor-agnostic aspect of all of these emerging technologies is important is that it enables hybrid cloud. What do I mean by that? Namely that enterprises can build the same stack, with all the same interfaces and tools, regardless of vendor or data center. They can deploy some workloads on their internal on-premises infrastructure, and maybe some external-facing mobile app (e.g. systems of engagement) on a cheap public cloud vendor such as Amazon or Google, but with all the tooling they are used to.

Final words and wrapping up


Hopefully I have made you excited about VAMP and the surrounding technologies. It is free to download and try out, apparently licensed under the Apache License 2.0. So give it a spin.

Finally I would like to give a big ‘thank you’ to one of the founders of VAMP, Olaf Molenveld, for taking the time to showcase VAMP over Skype. Looking forward to seeing what functionality they will add next…

API Connect: covering the API lifecycle from end to end?


Fortunately, OpenWhisk wasn’t the only thing announced at Interconnect: a new service called IBM API Connect was announced as well.

In today’s post I would like to explain how API Connect relates to, and is positioned against, existing software and services such as IBM API Management, the acquired Strongloop, and the decidedly non-cloud DataPower gateway appliance. If reading that made you go “huh?”, please do read through to the end. Give it five minutes and it should all become clear (I hope...).




IBM WebSphere DataPower SOA Appliances is a family of pre-built, pre-configured rack mountable network devices (XML appliances) that can help accelerate XML and Web Services deployments while extending SOA infrastructure.

In other words, DataPower is an API gateway, one that has supported the SOA era since way back. With that history come the stability and rich feature set you would expect from the product.


API Management

Enter IBM API Management. As the name implies, API Management is software, or a service, that manages your APIs. Under the hood it uses DataPower to provide the gateway. Remember what I just said about DataPower’s performance and stability? Right: API Management is built on top of that, but with a few points of differentiation.

  • API Management is aimed at the API economy, not at supporting an on-premises SOA foundation. Concretely, it has a portal where developers (internal and external alike) can self-service things like API documentation and token issuance.
  • Features aimed at the API economy are strengthened, such as the advanced authentication (e.g. custom OAuth) and the policies (rate limiting and so on) required for B2B collaboration over APIs.
  • It can be consumed as a cloud service
    • Multi-tenant public cloud: API Management on Bluemix
    • Single-tenant dedicated: provided as a fully managed service on top of Softlayer IaaS


To sum up, API Management uses the security and manageability of DataPower. On top of that it adds various collaboration features and developer conveniences such as API documentation, a developer portal and API catalogs. Finally, it includes a range of analytics and monitoring functions, so that business users can see how their API economy services are being used and make business decisions accordingly.

In other words, what we have at this point is something that properly handles the security and management of your APIs.

Strongloop: enter the wild card


Enterprise Node to Power the API Economy

In other words, a company building Node.js-based tooling for the API economy that even enterprises can use. While being a main committer to Node.js, Strongloop also builds interesting things like the open source Express.js framework (“Fast, unopinionated, minimalist web framework for Node.js”), LoopBack (built on top of Express), API gateways, and data connectors (Oracle, SQL Server, MongoDB and so on). In short, they ship a whole range of tools and frameworks for quickly building APIs in an open fashion using Node.js.



But you may be wondering: what on earth does this have to do with DataPower and API Management?


This is where a lot of people get confused. It is only natural to wonder how Strongloop is positioned relative to the existing DataPower and API Management, and what the roadmap looks like going forward.



API Connect: One Service to rule them all, One Service to find them, One Service to bring them all and in the darkness bind them

A new service called API Connect was announced at IBM’s Interconnect.

Today we announced our new offering, IBM API Connect, which integrates creating, running, managing, and securing APIs into one solution that can run on-premises and in the hybrid cloud; no other competitor offers this.

I see. So it is something that can do all of it, both on-premises and in the cloud: building, running, securing and managing APIs. That is a big hint.

API Connect bundles DataPower’s security, API Management’s management functions, and Strongloop’s build and run capabilities into one simple service. In other words, it covers the API lifecycle from end to end.



API Connect contains more interesting things than I can count, so I am not going to try to cover all of them, but to mention some personal favorites:

  • Using the StrongLoop capabilities, you can rapidly build APIs and microservices with the Node.js-based LoopBack and Express frameworks
  • Model-driven API creation: connect back-end systems through connectors and map them to models.
  • Unified management of both Node.js and Java runtimes from a single administration console
  • Swagger 2.0 support is, naturally, included in the API Manager portal
  • Built-in, ready-made policies make API development easier and faster for developers.
  • The Assemble view lets you compose flows easily in the UI.
  • A developer toolkit makes it easy to interface with the API Manager portal.


Did you catch that? Yes, you can deploy to Node.js as well. In API Connect terminology this is apparently called the Micro Gateway: APIs can be deployed not only to DataPower but also to Node.js. That means you can build and manage all of your APIs inside API Connect, and then choose per use case and per API whether to deploy to some Node.js runtime in the cloud or to your on-premises DataPower gateway. If you wanted to, you could even target a ground-breaking Node.js execution environment like OpenWhisk. That is seriously impressive. Managing the whole API lifecycle is one thing, but doing it across everything from on-premises appliances to a cloud event-driven compute service like OpenWhisk? This is what hybrid cloud is really about.


Take the mountain of data on your in-house mainframe, wrap it as a REST API with no coding (or a little Node.js), set user permissions, configure appropriate authentication and throttling, automatically generate a Swagger definition and code samples in eight languages, publish it, and then simply watch the usage through the analytics functions. Turning existing data into part of the API economy has become that easy.


Today we announced our new offering, IBM API Connect, which integrates creating, running, managing, and securing APIs into one solution that can run on-premises and in the hybrid cloud; no other competitor offers this.


API Connect: a complete API lifecycle offering?

Yesterday I wrote a blog post about OpenWhisk, IBM’s new event-driven cloud compute service. OpenWhisk enables serverless microservice architectures, and I’m sure that some of you instantly started thinking about APIs as soon as “microservice” cropped up.

Fortunately, OpenWhisk wasn’t the only new thing to be announced at Interconnect. IBM API Connect was announced as well.

This blog post was written to give insight into how API Connect relates to existing solutions, such as IBM API Management, acquired companies such as Strongloop, and non-cloudish gateway appliances such as DataPower. Confused yet? Hopefully you won’t be after reading this.

Let’s do an inventory first


Alright, if you are a cloud native type of guy like me, you might not have heard of DataPower. Let’s see what Wikipedia has to say on the topic:

IBM WebSphere DataPower SOA Appliances is a family of pre-built, pre-configured rack mountable network devices (XML appliances) that can help accelerate XML and Web Services deployments while extending SOA infrastructure.

So DataPower can be seen as an API gateway that helped power the SOA revolution of old. Obviously, with heritage you get stability and a large feature set.

As the virtualization boom started to happen, DataPower was also offered as a virtual appliance. Neat. A stable, high performance API gateway virtual appliance that could be run on a cloud based infrastructure if one wanted to…

API Management

Here comes IBM API Management. API Management is, as the name implies, either software or a service, depending on how you want to use it, that can manage APIs. It is based on the DataPower gateway for its gateway functions. Remember the high performance and stability of DataPower? Yeah, the idea is to build on top of that, but provide a couple of key differences.

  • API management is geared towards the API Economy, not your backend SOA architecture. This means that it has a developer portal, where both internal and external developers can register, get API keys, API definitions and usage examples etc.
  • Many features, such as advanced authentication (think custom OAuth) and policy enforcement (non-paying users get throttled, but the high rollers get extremely high rates), geared towards B2B collaboration and API economy focused aspects.
  • Can be consumed as a cloud service.
    • Multi-tenant public cloud. This is the Bluemix offering of API Management.
    • Single-tenant dedicated cloud. This is API Management as a dedicated, but fully managed service, that runs on top of Softlayer IaaS.


To summarize, API Management uses the security and control of DataPower. It then extends that with social functions that include API documentation, API catalogs, developer portals etc. Finally, it has a bunch of analytics functions to give the business side of your company better insight to how that API economy initiative is going, and what direction to take the service in.

So, up until this point, I guess we could say that we have a really good way to secure and manage your APIs.

Strongloop: the wild card

This is where things start to become very interesting. Strongloop is a company whose tagline is

Enterprise Node to Power the API Economy

Strongloop is one of the main committers to Node.js, and they develop open source frameworks such as Express.js (“Fast, unopinionated, minimalist web framework for Node.js”) and LoopBack (built on top of Express), and also provide a bunch of other really neat things such as API gateways and data integration connectors (Oracle, SQL Server, MongoDB etc). In other words, they have a really cool line-up of tools and frameworks for anyone who wants to quickly build APIs in an open fashion, using Node.js.


Alright, if you’re like me, someone who likes the prospect of lightweight, open source and cloud oriented stuff, this should start to sound really interesting by now.

So, what in the world does this have to do with DataPower and API Management? 

IBM acquired Strongloop in September last year. Say what? IBM, the stuffy company that sells DataPower appliances to enterprises? That’s right.

By now I assume you are a little bit confused on how Strongloop fits in with all the existing offerings such as API Management, DataPower etc. How will they work together? What is the future direction?

In other words, Strongloop provides a really, really good way to create and run/execute your APIs.


API Connect: One Service to rule them all, One Service to find them, One Service to bring them all and in the darkness bind them

IBM announced API Connect at Interconnect.

Today we announced our new offering, IBM API Connect, which integrates creating, running, managing, and securing APIs into one solution that can run on-premises and in the hybrid cloud; no other competitor offers this.

Ah, a not-so-subtle hint at what API Connect might be, and how it is architected.

API Connect merges the create, run, secure and manage aspects of DataPower, API Management, and various Strongloop offerings such as LoopBack, to provide a suite that can handle the entire lifecycle of an API.

How awesome is that?

First of all, I would have a look at the demo that was presented at Interconnect. The actual API Connect UI starts at around 2:00 in.


Honestly, API Connect has so many cool things to it that I’m not even going to try and mention all of them here, but here are some of my favorites.

  • Utilize StrongLoop capabilities to rapidly build APIs and microservices using Node.js and the LoopBack and Express frameworks.
  • Model-driven approach to creating APIs. Map models to back-end systems using the available connectors.
  • Unified management and administration of Node.js and Java runtimes.
  • Built-in support for Swagger 2.0 in the API Manager portal.
  • New built-in policies to speed up API development and make developers’ lives easier.
  • The Assemble view in the API Manager portal provides a visual tool for composing assembly flows.
  • A developer toolkit to interact with the API Manager portal.
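
The model-driven idea in that list can be sketched in a few lines of Node.js. This is not API Connect’s or LoopBack’s actual API, just an illustration of deriving REST endpoints from a model definition:

```javascript
// Given a model definition, derive the conventional REST routes for it.
// (Illustrative only; LoopBack generates far richer endpoints than this.)
function routesForModel(model) {
  const base = '/api/' + model.plural;
  return [
    { method: 'GET',    path: base },          // list all
    { method: 'POST',   path: base },          // create
    { method: 'GET',    path: base + '/:id' }, // read one
    { method: 'DELETE', path: base + '/:id' }, // delete one
  ];
}

const routes = routesForModel({ name: 'Customer', plural: 'customers' });
console.log(routes[0]); // { method: 'GET', path: '/api/customers' }
```

The appeal of the model-driven approach is exactly this: you describe the data once, and the framework derives the API surface for you.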


Did you catch that? Yeah, you can actually deploy to Node.js as well. API Connect calls this a Micro Gateway (I think that’s the name they use, anyway). So you can define and create your APIs in API Connect, and then choose to deploy to either DataPower on premises, or maybe as a bundled Node.js runtime that you could feasibly deploy on something like OpenWhisk. Ok, that’s seriously awesome.

I’ll leave you with a final note on one possible use case for all of this.

You could access some data on a Z mainframe, model it as a REST API with no coding (or a few lines of Node.js), set the scope of access based on user groups, authenticate and throttle appropriately, automatically create Swagger definitions, code samples and snippets in various languages, and then monitor the usage as you watch your completely new API economy based business take off.


Today we announced our new offering, IBM API Connect, which integrates creating, running, managing, and securing APIs into one solution that can run on-premises and in the hybrid cloud; no other competitor offers this.

Yes IBM, yes you did. General availability is scheduled for March 15th, and you should be able to access it from Bluemix then.

OpenWhisk: the world’s first service to usher in a truly open serverless era?


  • Fully pay-as-you-go pricing (so-called utility computing)
  • Not having to care about the underlying physical infrastructure or physical constraints
  • The ability to flexibly grow and shrink the service, i.e. scale-out and scale-in
  • Openness and portability
  • No upfront investment and no contractual lock-in period

To me, a service that combines all of the above is what a true cloud service really is. That is exactly why I got so excited about the OpenWhisk announcement at the IBM Interconnect event last week.

  • Provision the virtual servers (i.e. set a baseline for the auto-scaling group; for availability that means at least two)
  • Install, configure and patch the app server
  • Set up and configure the network
  • Deploy the microservice
  • Configure auto scaling and verify failover
  • Set up logging and monitoring
  • Create and execute an operational plan for patching and vulnerabilities on the servers (OS and middleware)



OpenWhisk: a world first in open serverless architecture?

The concept of the “cloud” has been around for a long time. While many, many definitions of it exist today, I think most of us can agree that a true cloud solution promises a couple of things.

  • Pay-as-you-go pricing and consumption model (aka utility computing)
  • Not having to care about underlying physical constraints and infrastructure
  • The ability to seamlessly grow, and shrink, your resources based on actual needs. E.g. scale-out and scale-in.
  • Openness and portability
  • No upfront investment or commitment

To me, this is what the cloud should be about. This is also the reason I was very excited when OpenWhisk was finally announced at IBM Interconnect last week.

What is OpenWhisk?

The tagline of the OpenWhisk homepage does a good job of explaining it in about three sentences:

OpenWhisk is a cloud-first distributed event-based programming service. OpenWhisk provides a programming model to upload event handlers to a cloud service, and register the handlers to respond to various events.  It’s cool.

Let’s break that down into a more easily digestible format, and see why that actually is really cool.

Event-based services, why should I care?

Event-based services are really important in the context of “true cloud”, as I mentioned in my introduction. The reason is that fully managed event-based services only run when called (i.e. on the event), and you are only charged for the actual execution of the code. So no up-front investment, and real pay-as-you-go.

“How is this different from traditional cloud compute services?” you say.

Imagine that you are building a simple chat application, and you want to let your users upload a profile picture of themselves. You could set up a database, like Cloudant, to store the profile and profile photo, and you would have to have a small cluster monitoring new photo uploads and doing cropping and resizing as needed. With traditional compute services, that means you have to:

  • Provision the “servers” (e.g. a baseline for an auto-scaling group) and plan for the capacity you think you will need initially
  • Install, configure and update your app server over time
  • Set up and configure networking
  • Deploy the microservice
  • Set up auto scaling and ensure that failover is in place for high availability
  • Set up logging and monitoring
  • Create plans for patching servers and app servers, handling security vulnerabilities, etc.

As you can see, even with virtualization and the pay-as-you-go models of traditional cloud compute services, you still have to maintain your servers, do patching and logging, and all the other boring things associated with running a web service. Virtualized compute services only take you half-way to the promised values of cloud computing.

Event based services are inherently different.

With OpenWhisk you only need to deal with Actions, which are executed when an event occurs. Actions can be small snippets of JavaScript or Swift code, or custom binaries embedded in a Docker container. Actions in OpenWhisk are instantly deployed and executed whenever a trigger fires, removing the need for clusters of servers.

In other words, you don’t have to worry about patching, monitoring, autoscaling, failover setups, vulnerability of underlying app servers and OS etc etc. You just think about your business logic, packaged as Swift, Node.js or Docker contained code. OpenWhisk will then execute and do the rest as needed.
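
A complete Node.js action really is this small. The `main(params)` entry point is OpenWhisk’s contract for JavaScript actions; the greeting logic is just my example:

```javascript
// hello.js: a minimal OpenWhisk action. The platform calls main() with
// the event's parameters and expects a JSON-serializable object back.
function main(params) {
  const name = params.name || 'stranger';
  return { payload: 'Hello, ' + name + '!' };
}
```

You would then deploy and run it with something like `wsk action create hello hello.js` followed by `wsk action invoke hello --blocking --param name World`.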

Now we tick the “true cloud” boxes of seamlessly growing and shrinking, and not having to care about your infrastructure.

Putting the Open in OpenWhisk

Alright, event-based compute services are awesome, you get it. But there are already several out on the market, so what makes OpenWhisk different?

A big reason I was so excited for the launch of OpenWhisk was the open aspect of it. Many of the similar services claim openness, and actually just mean that you can interface through APIs, and deploy something like Node.js code. While that is a really important aspect, one thing that really sets OpenWhisk apart is that you can just hop over to GitHub and start cloning the source code of OpenWhisk. That’s right, it’s open source. You could run OpenWhisk on Azure, AWS or even on premises if you wanted to. IBM believes that Bluemix will be the cheapest and best way of running it, but it will not be able to lock you into its platform, due to the open nature of OpenWhisk as an open source project.

Secondly, there is Docker support out of the gate. Run your OpenWhisk actions in a Docker container, making the risk of vendor lock-in irrelevant while providing the flexibility to re-use even legacy code in your event-handling microservices.

Now we tick the final box for true cloud; openness and portability.

Common use cases

Alright, let’s dig into some of the more common use cases we can think of for an event-driven compute service like OpenWhisk.

Decomposition of applications into microservices

This to me is the big one. Many people are talking about moving to a microservice architecture, and to do that, you need a compute model that fits the microservice line of thinking. The modular and inherently scalable nature of OpenWhisk makes it suitable for implementing granular pieces of logic as actions. For example, OpenWhisk can be useful for removing load-intensive, potentially spiky (background) tasks from front-end code and implementing these tasks as actions.

Mobile back end

A big reason many people are talking about microservices is the rise of mobile first. Many mobile applications require server side logic and compute. If you are a mobile developer, you will most likely know the Swift programming language used for building iOS apps. What if you could build the event-driven backend microservice using Swift, which you already know and understand? I think you know where I’m going with this. You can: OpenWhisk supports Node.js, Swift and Docker containers.


IoT

Thinking about IoT, or the sensor networks of old, it is pretty easy to find a plethora of event-driven thinking. In the world of IoT the pub-sub model is the de facto standard, and OpenWhisk ties into that with its idea of triggers. For example, if I have a GPS and accelerometer module mounted on my motorcycle, I could send a trigger to OpenWhisk when someone shakes the motorcycle. OpenWhisk would then check the location of my smartphone and motorcycle, and send me a notification if I am far away. I don’t want people touching my stuff when I’m not around! I also don’t want to run a small autoscaled compute cluster to monitor this minuscule workload.

OpenWhisk architecture

While it should be clear what OpenWhisk can do at this point, let’s recap by looking at the basic architecture of OpenWhisk.


OpenWhisk architecture 

As OpenWhisk works off an event-driven model, everything starts with a trigger. The more triggers fire, the more actions get invoked. If no trigger fires, no action code is running, so there is no cost. Think of a trigger as a class of events that can happen.


Actions in OpenWhisk are instantly deployed and executed whenever a trigger fires. An action is an event handler — some code that runs in response to an event.


Finally, rules are the association between a trigger and an action.
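
Tying the three concepts together with the `wsk` CLI looks roughly like this. The names are made up for illustration, and the command shapes follow the OpenWhisk CLI as I understand it, so treat this as a sketch:

```shell
wsk trigger create bikeShaken                         # declare a class of events
wsk action create checkLocation check.js              # register an event handler
wsk rule create shakeAlarm bikeShaken checkLocation   # bind trigger to action
wsk trigger fire bikeShaken --param gps "59.33,18.06" # firing invokes the action
```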

Where to go from here?

Alright, now you hopefully understand what they meant by OpenWhisk being cool. If you want to learn more about OpenWhisk, head on over to:
If you want to see OpenWhisk running in IBM Bluemix, sign up for the experimental program here:
Since it is still experimental, you need to sign up and wait to get whitelisted, but by now it shouldn’t take too long. Then all you need to do is start playing around with it!

OpenWhisk, Swift runtime, GitHub Enterprise and VMware support announced [IBM Interconnect on-site report]



OpenWhisk – an open source (!) event-driven compute service

Something called Bluemix OpenWhisk was announced: a service powering event-driven compute, and open source software. Conceptually it is close to Lambda, but there are a few things that set it apart.





Swift runtime

In addition to the Swift Sandbox announced in December, a Swift runtime was announced this time around. You can write an app in the Swift language and deploy it straight to IBM’s cloud, and it hooks into CI and CD pipelines as well. Together with the Swift support in OpenWhisk mentioned above, Swift and Node.js are becoming usable in more and more places. The concept seems to be to let front-end developers easily write cloud native back-end apps in the languages they are already used to (JavaScript and Swift).

GitHub Enterprise

As many of you probably know, GitHub Enterprise is the private Git repository offering that GitHub provides for enterprises. Being able to use GitHub Enterprise here also matches the hybrid cloud strategy exactly.



VMware

The announcement further strengthens the existing VMware partnership, with new support for vSphere, Virtual SAN, NSX, vCenter and vRealize Automation. Simply put, IBM and VMware will be able to jointly deliver hybrid cloud. For customers who want to go to the cloud without abandoning VMware, this could be very welcome news.

Cleversafe? [IBM Interconnect on-site report]

This week IBM Interconnect, IBM’s large annual event, is being held. Time permitting, I would like to write up some of the interesting sessions on this blog.





KDDI and Cleversafe apparently also gave a joint presentation:



Perpetual licensing model


Inline erasure coding

Zero-touch encryption
No need for third-party encryption or key management







Performance is also said to be considerably better than the object storage of the public cloud vendors. We never got into the details of why it is faster, but my guess is single tenancy and proprietary algorithms. There was also a comment that sequential reads in particular are faster than other services.