Swift Heist, Apple Pay and APIs

It’s been a while since I last posted, and lots has happened in the payments world. Today I reflect on 3 items of interest: what happened in SWIFT; why the Australian banks are trying to bully Apple by forming a cartel; and the most exciting development in open access to financial institutions – APIs.

It was simply a matter of time…

There has been a lot said lately about the security of interbank payments. Traditionally these payments flow from bank to bank via the SWIFT network.

The heist of US$81 million in February this year from the Central Bank of Bangladesh was just the start. Since then we’ve seen more: a Ukrainian heist of US$10 million; Vietnam’s TPBank was lucky and blocked an attack; Ecuador’s Banco del Austro lost US$12 million; and a foiled breach of the accounts of the Union Bank of India, to name a few.

It is important to note that the SWIFT network itself wasn’t hacked (that’s like saying the ‘Internet got hacked’) – what happened was that the fraudsters attacked the ‘endpoints’ – in other words the ‘terminal’ or the ‘PC’ that is sometimes used to input payment instructions into the network.

Fraudsters have moved on from ‘infecting’ consumer and small-business PCs for small ransoms and are now chasing the bigger fish – the banks.

In essence, fraudsters use social engineering via email to trick bank employees into opening a seemingly harmless ‘invoice’ or other ‘attachment’ that then secretly installs a malicious payload onto their PC. The payload waits in the background until it sees the user trying to perform certain actions – say, creating a SWIFT payment – and then injects itself into the middle of that transaction to commit fraud. Worse still, the malware steals the user’s ID and password credentials so it can do its own thing.

Interestingly, the malware is sophisticated enough to try to cover its own tracks so by the time the bank finds out something is wrong it’s too late.

These methods take advantage of lax access and endpoint security within some organisations, coupled with a lack of access controls and checking procedures. No bank should allow a single actor to create, authorise and release a payment (especially a multi-million-dollar one).

Luckily most larger banks don’t allow ‘terminal’ based access to the SWIFT network – ideally these things are embedded within data centres far away from unsuspecting users and use other ‘systems’ to create/validate/authorise/release payments. But as we have seen, some banks do.

For their part, I feel that SWIFT could do themselves a favour and introduce a set of baseline security standards for banks who use the system, seriously backed by an accreditation program where each connected institution has to pass a series of risk-based tests on a periodic basis. If you pass, good – you stay connected. If you fail, you lose your connection until you pass. That is not to say that SWIFT have been sitting idle – they have in fact been very proactive with banks in terms of recommended security configurations.

Banks on the other hand should be reviewing their payment practices to ensure that adequate access controls exist in their payments mechanisms and appropriate endpoint and perimeter security is employed.  Only a combination of efforts from all parties will prevail in combatting this challenge.

But as I said, it was always a matter of time. Someone was always going to try and attack a big fish……

I want access to your system Apple – or I’m going to see the Government …

In Australia, a collaboration of major and minor banks has applied to the competition regulator for ‘permission’ to form a Cartel to collectively boycott Apple Pay, in order to negotiate with Apple about access to the NFC chip that sits inside the iPhone.

Boo Hoo!

I think it’s disgraceful.

What’s this all about though?

With the release of the iPhone 6, Apple introduced an electronic wallet capability called Apple Pay. Originally launched in the USA in 2014, it meant that you could store your credit/debit card on your phone and use the phone at EFTPOS terminals equipped with tap-and-go contactless capability instead of using your actual card.

Google had also introduced NFC functionality previously within Android based smartphones. The implementations of the capability between Google’s Android platform and Apple’s iOS were different.

Android’s platform was open. Apple’s platform was closed.

What this meant was that electronic wallet developers/suppliers (such as banks) could get access to activate the NFC chip in Android phones – meaning they could develop their own capabilities and control the customer experience. Another important point: they didn’t have to pay Google to do it.

Apple’s implementation of Apple Pay was closed in that Apple did not open up access to the NFC chip to developers. Apple argued that this was to maintain and uphold the integrity of the embedded security around iOS and the iPhone. And Apple takes security seriously. 

In order to use Apple Pay, banks and other financial institutions had to partner with Apple, and pay Apple a fee to do it.

Apple Pay was launched in Australia in November 2015 by American Express. In April 2016 the ANZ Bank secured a partnership to launch Apple Pay. Since then, no other Australian bank has launched with Apple Pay.
Why?

Well, if you believe the submissions to Australia’s ACCC, the other banks wish to develop their own electronic wallet offerings and have greater control of their customer experience. They say that Apple is stifling innovation in this area and in order to have a level playing field then Apple should open access to the iPhone NFC chip.

Sounds like baloney to me. The biggest banks across the globe in the USA and UK have partnered with Apple. ANZ in Australia has partnered with Apple. These banks have also developed wallet applications for Android users too.

If you’ve used Apple Pay the user experience is fantastic. It is integrated with the iPhone and Apple Watch and it works great. I have, and I feel very comfortable with the experience and ‘security’ that I get with the platform. (I wouldn’t use an Android phone to make payments though. Too many security vulnerabilities in that ecosystem – remember access to NFC in Android is open whereas in iOS it is closed).

No, the other banks aren’t really interested in customer experience. This isn’t about stifling innovation. The banks are simply interested in the money. That’s what this is all about – the cash. At the end of the day they want direct access to the NFC chip so they don’t have to pay Apple to use it.

At least for now, the ACCC have declined the application by the banks to form a Cartel.

So, for the time being if you wish to use Apple Pay in Australia go to American Express or ANZ bank … because it will be some time before the other banks jump on board … they care so much about your customer experience and innovation that they’re prepared to not give you a solution at all instead of giving you one from Apple – because of course we all know that Apple are minnows when it comes to customer experience and innovation. (Being a little sarcastic there). It’s Crazy.

APIs – opening up the world to a future of capability

The proliferation of APIs within financial services is arguably the most promising advancement I’ve seen in recent times.

An API is a piece of software that ‘talks’ to another piece of software. They are not user interfaces. They run behind the scenes, making apps and websites seamless and useful.

Using APIs means that developers don’t have to start from scratch. They are re-usable and flexible.

They also help create loosely coupled ecosystems.

By way of example:

  1. Think of an Apple power pack for a Mac. One end works all around the world. You just plug it into your Mac. To make it work in other geographies you only need to buy the right wall plug. You don’t have to buy the entire unit.  OR
  2. Think about Lego. The Lego system makes it easy for stuff to connect to other stuff. You could buy a dinosaur Lego and a car Lego and connect them together to form a ‘Car o saur’. You don’t need to worry about them connecting together because the system guarantees that they will work together.

This is how APIs work. And we use them everyday.

Have you ever uploaded a photo from your phone direct into Facebook or Twitter? If so, you used an API. It was there all the time, just sitting behind the scenes.

Alternatively, consider Uber. Uber didn’t create its own maps system to make Uber work – it used existing maps providers such as Google and Apple. The maps API made the Uber app more useful and seamless.
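The pattern is easy to sketch in code. Below is a minimal, hypothetical example – the endpoint, field names and token are all invented for illustration, not any real bank’s API – showing how a developer might assemble a payment instruction for a REST-style payments API without ever building the underlying plumbing themselves:

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only – real payment APIs
# (and their auth schemes) differ per provider.
API_BASE = "https://api.example-bank.com/v1"

def build_payment_request(to_account: str, amount_cents: int,
                          reference: str) -> urllib.request.Request:
    """Assemble a JSON payment instruction the way a typical REST API expects."""
    payload = json.dumps({
        "to_account": to_account,
        "amount_cents": amount_cents,
        "currency": "AUD",
        "reference": reference,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/payments",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},
        method="POST",
    )

req = build_payment_request("123-456", 2500, "Invoice 42")
print(req.full_url)  # https://api.example-bank.com/v1/payments
```

The developer reuses the provider’s capability through a well-defined interface – exactly the Lego-brick idea above.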

From a corporate perspective I can see APIs being used extensively in the future. PSD2 in the EU zone is an example of where APIs will drive integration between banks and Fintechs/Corporates. In Australia the introduction of the New Payments Platform (NPP) will also drive it due to the nature of real time payments and the associated real time receivables. PSD2 impacts will also influence the way we use them in Australia. APIs will drive ERP systems integration, real time liquidity, bank reconciliation and receivables management, even perhaps the opening up of new bank accounts and servicing.

APIs will also help us unpack and leverage capability from developments in the use of Blockchain (more on that in another post).

The potential is endless.

Watch this space. APIs are the fuel of the future.

The Opportunity in Corporate Payments

There is currently a significant amount of focus on consumer payments which is understandable given the success of ventures such as PayPal, and the enablement of consumers to make payments easily by way of online credit card gateways and the like. Silicon Valley payments startups are trying to capture the massive volume in enabling consumers to pay easily.

Banks used to hold this domain with their customers.

The space changed 10-15 years ago with the rise of the internet, and the slowness of many banks to respond to the needs of businesses that wanted to move online and then needed a payments gateway to facilitate the payment. Banks started to separate the ‘capture’ of the transaction from the ‘clearing’ of the transaction.

This meant that some (but not all) banks sent their customers to newly founded ‘credit card gateway’ companies whilst continuing to offer the underlying ‘merchant’ facility to the customer. The credit card transaction was ‘captured’ by the online gateway provider and ‘cleared’ by the bank. Online gateways were agile, and could develop programming interfaces (APIs) that allowed merchants to integrate their online shopping carts with the credit card system to enable payments. Banks were typically big and slow; whilst offering a very reliable merchant service that underpinned the card transaction, they weren’t very agile at developing online APIs for merchants to integrate with, and were even less agile (or it didn’t fit their business model) at delivering integration assistance to their customer base.

Nowadays then, if you are an online merchant, the typical model is to buy your card capture product from an online gateway (or even PayPal) and have that gateway integrate with your chosen bank’s back-end merchant facility.

The bank part is a necessity (and taken for granted); the capture part is what the customer really needs. In essence, that is the product being purchased. Online gateways have been very smart, continually adapting and offering value-added services (such as card reprocessing and tokenisation for PCI DSS compliance), integration services, out-of-the-box shopping cart modules and the like.

What once was the domain of the Bank has now been commoditised and disaggregated from the Bank’s own offering.

As we move into corporate payments processing, it is important to examine the way that corporations ‘tender’ their banking business to the banking community. Traditionally, a corporate would issue an RFP for the provision of banking services. A shortlist would be created, various Banks would tender and more often than not a sole provider for the provision of Transaction Banking services would be selected. That provider would normally win the right to provide core banking services (such as bank accounts, liquidity (Debt), FX) and ancillary ‘value added’ services such as payments processing, merchant processing, supply chain, corporate card programs and the like.

When the GFC hit, corporations had to diversify their banking relationships to reduce counterparty risk. This meant that corporates now had to spread their banking services across a number of banks – each relationship meaning a new online banking site, many and varied authorisation dongles/tokens, a different bank for a different region, and so on. Each bank had its own particular way of doing host-to-host services, ERP systems integration, treasury management, payments processing, receivables and the like. This has created a headache for corporates wishing to be more efficient in payments and treasury operations, as most banks have their ‘own’ proprietary way of achieving this processing. The industry has done its best in agreeing on new processing standards such as ISO 20022, but not every bank has adopted them, and most banks don’t agree on the bilateral mechanisms residing within the standards to achieve efficiency in payments processing. In essence, each integration with a bank is a new project.

At the same time, most banks haven’t changed the delivery model of their payments processing product. They still respond to tenders for Transaction Banking solutions by offering the core banking services tied in with the ancillary value-added services. By its very nature, this creates a further web in which a corporate has to buy its banking product from many providers simply to do business.

What if you could buy your banking product from a universal/independent provider (especially in payments processing and integration) and then let that provider do all the back-end ‘stuff’ with your chosen bank in your chosen region? How would that change your perspective?

To use the previous analogy, instead of the bank doing both the payment ‘capture’ and the payment ‘clearing’, why couldn’t another provider build a payments ‘capture’ engine and then let the chosen bank do the ‘clearing’? With the world becoming more real-time and ‘instant’, there are now methods of connecting to banking services that mean a corporate could buy their product not from a bank but from an independent supplier that is connected to the bank for clearing.

My hypothesis then is that the transaction banking payment ‘product’ (capture of the payment) will become disaggregated from traditional bank ‘clearing’ systems. Banks will find themselves competing with tech companies for payments processing and for other value-added services that simply need a smart way to connect to the bank, whilst increasing efficiency and reducing complexity for the client.

This creates an opportunity for tech startups to be innovative in payments and integration processing, aggregating between banks/corporates and using best of breed agile technologies to create efficiency for corporates who just want to worry about their own business instead of worrying about the Bank processing their payments.

The opportunity in Australia alone is huge – look at the below table:

High Level Transactions Statistics (apca.com.au)

                 Volume       Value        Users
DE (Credits)     5.3m/day     $24.3b       307,027
DE (Debits)      2.5m/day     $19.9b       24,164
Cheque           0.7m/day     $4.8b        –
High Value*      –            $96.5b       –
Debit Cards      312.1m/mth   $18.1b/mth   25.8m^
Credit Cards     164.9m/mth   $22.1b/mth   26.5m^

*These figures are values exchanged and do not include “own items”. Note also that a full picture of RTGS transactions would require HVCS transactions to be supplemented by Austraclear and RITS transactions which are not captured by APCA.

^Customer Payments Accounts cover day-to-day accounts and include: cheque, statement, savings and passbook accounts.

Banks will need to think differently about how they offer innovative product to market and the speed at which they do it. Their ability to change quickly to meet client demands will become increasingly important – especially if my hypothesis is true and they’re competing against tech companies.

Watch this space. It will be very interesting.

What will Australia’s New Payments Platform (NPP) leave as roadkill?

I must stress, before you start reading, that this article represents my own thoughts and no ‘corporate’ hypothesis. This blog has been in draft for too long, and I’ve sought a fair degree of feedback along the way (thanks to those who helped mould my thoughts).

There is a new payments system coming to Australia in November 2016. It’s called the New Payments Platform (original, huh!) or NPP.

Why is it new?

Because the last payments stream introduced into Australia was Real Time Gross Settlement (RTGS) in 1998, nearly two decades ago. The year before that we had the introduction of BPAY (which isn’t actually a clearing stream, but rather a product residing on top of a clearing stream). Both are still in use today – RTGS more so in the corporate scene (consumers can use it over the counter at their bank branch), because BPAY was intended, and is used, for consumers to pay bills. So, NPP is new, well, because in comparison the other stuff is old.

You can go to the Australian Payments Clearing Association (APCA) site to view the history of payments in Australia and look at how ‘old’ the other payments systems are.

Perhaps the other reason it’s ‘new’ is because the intent of the system is to be 24×7, with payments settled between bank accounts at different banks within seconds, rather than the current scenario of up to 30 minutes for RTGS and next day for Direct Entry. Additionally – and perhaps more importantly – the NPP will allow a significantly greater amount of ‘data’ to flow with each payment. NPP will be based on the ISO 20022 standard for payments. Presently the payments streams allow only a very limited amount of ‘data’ to flow with the payment, and over the years this has presented significant challenges to the industry, especially in areas like payments reconciliation.

For example – have you ever wondered why your bank only allows you 18 characters of ‘remittance’ information when you make a payment? It’s not because their on-line banking systems are unable to – rather it’s because the payments system that underpins your EFT transfer was built in 1974.

Back in 1974, Gough Whitlam was PM of Australia, the number 1 song was Barbra Streisand’s “The Way We Were”, Australia’s first credit card (Bankcard) was introduced, and John Lennon made what would be his last stage performance in New York with Elton John. Oh, and 200 MB of disk storage cost the equivalent of US$186,000 today. In 1974 the storage in the most basic iPhone would have cost you nearly $6 million in today’s money! Try and get that on a 24-month plan from your telco.

The point being that when the banks designed the EFT clearing system, disk storage cost a lot – and so you only got 18 characters to tell the person you were paying what the payment was for. And that same system is still in use today. In fact, it underpins the current online banking systems of most banks, credit unions and building societies in Australia, processing 7.9 million transactions per day worth $44.2 billion (Source: APCA).
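To make the constraint concrete, here is a tiny illustrative sketch (the 18-character field width comes from the discussion above; the function name and sample string are mine) of what happens to a rich remittance description when it has to squeeze into that fixed-width field:

```python
# Illustrative only: the legacy Direct Entry format fixes the remittance
# reference at 18 characters, so anything longer is simply lost.
DE_REFERENCE_WIDTH = 18

def de_reference(remittance: str) -> str:
    """Truncate (and pad) remittance detail to the fixed-width DE field."""
    return remittance[:DE_REFERENCE_WIDTH].ljust(DE_REFERENCE_WIDTH)

rich = "Invoice 10233, PO 8844, July consulting fees"
print(repr(de_reference(rich)))  # 'Invoice 10233, PO ' – the rest is gone
```

An ISO 20022 payment, by contrast, can carry structured remittance data far beyond 18 characters – which is precisely why reconciliation gets easier under NPP.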

But what will happen when NPP commences? What payments systems will it kill on the way to Glory?

A few factors need to be taken into consideration:

  1. Will the industry impose transaction or processing limits for NPP?
  2. What will a payment cost?
  3. Will both parties (the payer and the receiver) be able to participate in NPP even if only one party has ‘paid’ for NPP enablement?

What would happen in Australia if there were to be no limits for a NPP Payment?

Most countries that have introduced a ‘faster’ payments system have also introduced a ‘system’ limit (or a set of system limits) for the use of that system.

Each day in Australia, on average over the last 12 months (according to the RBA), the ‘system’ processes ~41,500 RTGS payments. If the context of an RTGS payment is to ‘make a near real-time, non-repudiable and settled transaction to a beneficiary’, and the context of an NPP payment is more or less the same – what would be the continued need for an RTGS payment?

Perhaps there isn’t one. Casualty number 1 – the RTGS system. Hold on a minute… don’t banks charge for RTGS? A search of all the big 4 banks shows that the average price a retail client can get a domestic RTGS transaction for is around $35 for a customer of that bank (see example here on page 25). Now assuming that a bank doesn’t charge its business clients the same fee as its consumer clients, let’s apply a generous discount of 50%. That still means that, combined, and conservatively, the banks are set to lose around $190 million per annum in RTGS fee income if that channel is cannibalised.
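As a sanity check on that figure, the back-of-envelope arithmetic – using the numbers quoted above plus an assumed ~260 business days per year – works out like this:

```python
# Rough check of the ~$190M estimate, using the figures quoted above.
payments_per_day = 41_500   # average daily RTGS volume (RBA)
fee = 35 * 0.5              # ~$35 retail fee, generously discounted 50%
business_days = 260         # assumption: trading days per year

annual_fee_income = payments_per_day * fee * business_days
print(f"${annual_fee_income / 1e6:.0f}M per annum")  # $189M per annum
```

Close enough to $190 million, and that’s before counting any premium pricing on business clients.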

I think that if the system doesn’t impose a limit, the commerciality of the Banks probably will. $190M is a lot of dough …. RTGS will be hit, and perhaps hit hard. You can argue that the ‘context’ of the payment will determine the clearing stream – perhaps – but there are lots of variables here – intra day limits, intra day liquidity requirements, outside normal trading hours liquidity, RBA Exchange Settlement Account balances etc. etc. Time will tell, but the RTGS stream is almost guaranteed to be hit.

What will a payment cost?

Interestingly, this has relevance to the last question. My answer there rested on a hypothesis based not on cost but on context – an NPP payment has the same ‘context’, or put another way, the same ‘characteristics’, as an RTGS payment. So why wouldn’t you charge the same amount as for an RTGS payment?

The answer I feel is in the target market.

NPP plainly has a market in consumer payments (person-to-person (P2P) payments are ideal for this new mechanism), but the market becomes a little cloudier as you make your way up the value chain into institutional-type payments. Payroll and large batch runs of direct debit and creditor payments, for example, will more than likely continue in the short to medium term to be effected via Direct Entry (CS2), as it’s the most efficient low-cost, high-volume system we have. Also, these payments are traditionally due during the working week (Monday to Friday). Use cases for institutional-type clients are less clear.

If we concentrate on P2P payments, these are more than likely made via consumer online banking – either from a desktop or a mobile device. How much do you pay for a payment on that channel today? Probably zero, nada, squat. Is the convenience of a 10-second payment, 24×7, going to spur you to hand over more cash to your bank? Probably not. People are lazy, and most don’t even understand or care how a payment is made. What I can tell you is that people start to care when they have to pay for stuff, and if an online payment costs ZERO today compared to SOMETHING tomorrow, most people will start rubbishing the banks. The banks might say that you’re getting a better service with NPP than before… some might buy that… but most won’t care, and therefore won’t pay, and will then shop around.

I’d expect most banks to replace their online banking clearing systems with NPP. Casualty number 2 – Direct Entry. It won’t be a big casualty though. Not initially at least – but in time. First, however, let’s see what the banks price NPP at, as that will be a big determinant. And maybe, just maybe, banks could charge for NPP if they start to differentiate their payments into ‘immediate’ (NPP) and ‘delayed’ (Direct Entry). Perhaps people will be prepared to pay for an enhanced service compared to the old way. Everyone nowadays wants stuff ‘now’. A friend recently reminded me how people get annoyed these days when you don’t immediately respond to iMessages, because the sender can see that you’ve ‘read’ the message. We are evolving into a ‘now’ community – and this has relevance to NPP. People will want to see their money ‘now’ too.

I’d expect though that P2P payments for consumers will be fee free. The first bank to do this will set the market, and others will have to follow.

Will you be able to take part?

That’s interesting, and perhaps depends on the ‘what will it cost’ question. If there’s a charge, what happens if you’ve paid for the payment to be made but the receiver hasn’t paid his/her bank to make the service available? Will that bank put the money into the account immediately, or will they defer it?

Banks will want payback on the massive investments made in a new clearing system, unless the ‘system’ determines that the cost is zero… and if there’s no charge then the point is moot, as differentiation (as a fee for service) disappears.

Fraud

Another factor needs to be taken into account for NPP: fraud. In a system that offers immediate, non-repudiable payments from bank to bank, you can bet that the fraudsters will be out there trying to hack a way in. I can see them now, just waiting at ATMs for the money to arrive. Banks will need to invest in state-of-the-art fraud prevention systems to protect themselves and their customers. It’s a big deal. BIG.

Data Services

Having said all of this, let’s return to something I mentioned up front: data. NPP is based on ISO 20022. Lots of data can flow with the payment – an almost unlimited exchange of data and payment ‘attachments’ can be made. This might not mean much to a consumer (after all, you probably know what the money that ended up in your account was for), but it means a lot to businesses. They rely on this data to reconcile who paid them, and what it was for. Casualty number 3 – BPAY. Previously you never had a comprehensive way to identify your payment inside a payment. BPAY in Australia solved that to some degree by going ‘outside’ the normal payments system and introducing Biller Reference Numbers based on a check-digit routine, so you couldn’t stuff the payment up. If the payment reference didn’t validate up front, you couldn’t make the payment.

With NPP though you can send a heap of data with your payment. You could even send your picture to them. I wouldn’t expect BPAY to lose much ground though – it’s still very efficient and well understood. However those utility type companies (such as telcos, councils etc) who are innovative and develop nifty online ‘overlay’ services to go with a NPP payment could perhaps offer ‘immediate’ reconciliation that accompanies the payment. They wouldn’t need BPAY anymore. That’s just one example.
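BPAY’s actual check-digit routine isn’t spelled out here, so as an illustration of the principle – the reference validates *before* the payment is ever submitted – here is a sketch using the well-known Luhn (mod-10) scheme:

```python
# Illustration of a check-digit routine using the standard Luhn (mod-10)
# algorithm – BPAY's real routine may differ; the principle is the same:
# an invalid reference is rejected before any payment is made.

def luhn_check_digit(partial: str) -> int:
    """Compute the Luhn check digit for a run of digits."""
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def is_valid_reference(reference: str) -> bool:
    """True if the final digit matches the check digit of the rest."""
    return reference.isdigit() and \
        int(reference[-1]) == luhn_check_digit(reference[:-1])

ref = "7992739871" + str(luhn_check_digit("7992739871"))
print(ref, is_valid_reference(ref))  # 79927398713 True
```

A single transposed or mistyped digit fails the check, which is exactly why a biller reference can be rejected at entry time rather than after the money has moved.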

Summary

As we move into NPP, I think there will naturally be payments stream casualties, some more affected than others. It’s not all bad news for clearing streams though, because a big opportunity opens up via the data and the overlay services that go along with NPP payments. We never had a system before with the potential to wrap so much data inside the payment mechanism. NPP does this well. For consumers the data may be irrelevant and the payment most important; for businesses the data is most important and the payment may become secondary.

Don’t get me wrong, it’s all based on the payment – we all need the money – and we want it ‘now’ – but data and online ‘overlay’ services have massive potential. The banks are having their ‘cheese’ moved, and they’ll need to work on fresh business models to keep relevance in an increasingly customer centric world and make up for their lost fees in other ways.


Disruptive Technology and the Cloud

I did a presentation recently in Sydney. It went pretty well – by that I mean I was happy with what I had to say (and so were others).

The topic was “Innovation in Payments” – nothing new in my presentation apart from some of my thoughts on the Australian New Payments Platform (NPP) and “Least Cost Routing”; but along the way I’ve been asked about my thoughts on disruptive technology.

Somewhere along the road I’ve also been involved in discussions around the cloud and its impact on disruptive technology.

Having said all of this, I still think the Gartner predictions are correct – INTEGRATION outside the corporate firewall over the next 24-48 months will be key to the success of businesses in the future.

In Financial Services there will, for some time, be a desire to keep things in house. No surprise there. The increasing challenge will be how much of it stays in house and how much can be done elsewhere. What is at the root of this desire, though – is it protection of customer data, or is it something else that needs to stay inside? And how much of it needs to stay inside? (By the way, let’s stay right away from arguing with regulators – that’s an exercise in kicking yourself in the head: one where you end up with a blood nose and they end up getting new shoes.)

Is it the cloud that is nasty? Couldn’t you just say that the cloud is another name for a ‘hosted’ or ‘managed’ service, in much the same way that what we now call ‘Digital’ used to be called ‘online’, and before that ‘eBusiness’, or ‘e’-this-or-that? Is it any different? Really? For example, isn’t SWIFT a ‘cloud’ service? And if it’s not, why not? (Or are you going to say that SWIFT is a ‘hosted/managed service’?) Let’s leave that debate for later.

For the time being though, let’s just call the opportunity the ‘cloud’. Forgetting about the security concerns (do they really exist anyway?), what will the cloud let us do tomorrow that we can’t do today? What advantage could the cloud provide us with?

Let’s go through a few topics:

Governance – would that be any different? Maybe. You have different endpoints, and your relationship with the infrastructure provider might be different – and that would take oversight. Data realms/sovereignty might be impacted, and so might security of data. So I think governance in the cloud would need to be increased compared to managing your own infrastructure on site. Maybe the controls are different – and maybe that’s an opportunity in its own right for a clever security/risk management provider.

Software SDLC – development methods would need to change. Software needs to recognise a different way of residing on infrastructure. Older stuff might be okay ‘living’ on named instances, but one of the benefits of seemingly never-ending infrastructure capacity is that your software should be able to take advantage of it when it needs to. Spinning up new web/app/database servers when software demands, and turning them off when not required, is truly ‘on demand’. Software needs to know how to do this and, more importantly, when to do this. Developers need to be building hardware ‘cloud’ abstraction layers inside their apps, or abstraction layers for legacy software, in order to do this. So the software SDLC is impacted too. Unless you don’t care – in which case don’t do a thing: react to failures in the traditional way and waste your money on traditional incident management. Get it right, though, and you could argue that you don’t need incident management at all, because the incidents are ‘programmed’ in and are automatically dealt with.

Infrastructure Provisioning and Management – the biggest impact. Why would I want to own a server when I can ‘rent’ a virtual one and only pay for the time it’s actually in use? Use of the cloud transfers often-huge capex bills every 3-5 years to an opex outlay. That has good and bad aspects: opex hits your P&L in the year of build; capex doesn’t. In some countries you can capitalise and not have the impact hit your P&L (by way of depreciation) until you commercialise – and that could be years away. On other matters, you can use automation tools to clone/copy/spin up new servers in minutes. You can use APIs from the software SDLC to control them. You can use the same APIs to configure networks, firewalls and other security devices on the fly. You only pay for what you need. Giving software developers timely access to infrastructure for their stuff to live on is a HUGE precursor to agile innovation. This gets a big tick – but don’t forget the governance impact above. How do you document ‘in the cloud’ as an ‘as built’ infrastructure, especially when it can change so dynamically and rapidly?
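The ‘spin up when demand rises, turn off when it falls’ idea can be sketched in a few lines. The thresholds and function name below are illustrative assumptions of mine, not any real cloud provider’s API – the point is that the scaling decision is just code:

```python
# Toy autoscaling decision: thresholds are illustrative assumptions.
# A real deployment would feed this from monitoring metrics and act on
# the result via the provider's provisioning API.

def desired_capacity(current: int, cpu_pct: float,
                     scale_up_at: float = 75.0,
                     scale_down_at: float = 25.0,
                     minimum: int = 1) -> int:
    """Decide how many instances we want, given average CPU utilisation."""
    if cpu_pct > scale_up_at:
        return current + 1                 # demand rising: add a server
    if cpu_pct < scale_down_at:
        return max(minimum, current - 1)   # demand falling: release one
    return current                         # steady state: leave it alone

print(desired_capacity(3, 90.0))  # 4 – busy, spin one up
print(desired_capacity(3, 10.0))  # 2 – idle, turn one off
print(desired_capacity(1, 10.0))  # 1 – never below the floor
```

Because you only pay for instances while they run, logic like this is what turns the capex-to-opex shift into an actual saving rather than just a different bill.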

Reliability/Redundancy – another big impact. It’s a bit like ‘use the force, Luke’: ‘let go’, ‘trust your instinct’. You have to trust that your provider has this in hand – or build it in another cloud somewhere else. I think the cloud has untapped potential for increasing uptime and reliability. Netflix used this to their advantage in the deployment of a Simian Army (http://techblog.netflix.com/2011/07/netflix-simian-army.html) that randomly tests their environment for redundancy. Have a read – it’s a great concept – and they build failure into their daily life. Having to get up at 3am to fix servers and stuff has gone away. Organisations that build this concept into their infrastructure management plans just ‘deal’ with incidents as they happen, without having to worry about key services being down. Everyone should latch onto this initiative. Traditional DR will continue to serve a purpose, but get the ‘link’ between hardware and software right in a way that uses the resources properly, and your DR will turn into a one-hour verification rather than a run book that humans can stuff up.

Security – the area with the biggest question marks. How can you transfer all of those security appliances and protections to the cloud in the same way as you can with physical infrastructure? I need to do more research on this, and it is perhaps the one area that lends itself to the most innovation from cloud/security appliance providers.

Cost – all of the above. You will increase costs in some areas and decrease them in others, but the decreases should far outweigh the increases. Jump on it, I say.

Now, thinking about all that, let’s answer the question I posed:

What will the cloud let us do tomorrow that we can’t do today? What advantage could the cloud provide us with?

In short:

  1. We can do everything we do today (or should be able to)
  2. We should be able to lower the cost of infrastructure, and transfer the context of its impact to the Balance Sheet
  3. We should be able to build in reliability and redundancy, making applications look like they can ‘self-heal’ if part of the infrastructure fails.
  4. There are questions about security
  5. We should be able to develop and deploy in a way that fits a modern AGILE environment
  6. We should be able to do all of this, ideally, at a lower cost.

But wait on, I’m still missing something.

Didn’t I talk about the opportunity to do something disruptive?

The cloud itself won’t manufacture disruptive technology – you still need the ‘great’ ideas for that. The cloud, however, should provide a ‘platform’ for disruptive technology to live on. A launching pad. You still need the great software and the great products – but in the future they will ‘live’ elsewhere.

Enough for now.

Until Later

Leigh