It’s not every day that you hear about a software project on public media, but NPR and other public outlets are covering the troubled rollout of the Healthcare.gov website nearly hourly. As a software professional, I recognized the problems I was hearing about as common to large software projects, where multiple pieces of the final product are built independently and then integrated at the end.
The practical problem here is that it is all too easy for disparate contractors, each working on just their own piece, to lose sight of how the whole will fit together. In fact, the nature of computing and programming relies on this to some extent: treating individual components as modules assumes a certain amount of ignorance about how the inputs to one particular module are derived and where its outputs are used in other parts of the system. This means developers can focus on making sure their piece meets the appropriate functional and non-functional requirements, which makes them “cogs in the machine.” All of them perform essential functions, but they can’t see the forest for the trees. They can’t step outside of their own cog.
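To make the “cog” idea concrete, here is a minimal sketch of a module boundary. The names and the eligibility rule are purely illustrative inventions, not anything from actual Healthcare.gov code:

```python
from dataclasses import dataclass

# Hypothetical module boundary: the developer of check_eligibility() sees
# only this input type and the boolean it returns. Where the Applicant data
# comes from, and what downstream systems do with the result, are
# deliberately out of scope -- that ignorance is what makes this a "cog."

@dataclass
class Applicant:
    age: int
    household_income: float
    state: str

def check_eligibility(applicant: Applicant) -> bool:
    """Illustrative placeholder rule; the real criteria live elsewhere."""
    return applicant.age >= 18 and applicant.household_income < 95_000
```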
This type of result can actually be predicted. In 1968, computer scientist Melvin Conway made an observation later known as Conway’s Law: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
This implies that the resulting software system will inherit the communication (or non-communication) properties of the organization that designed it. In this case, with dozens of private contractors communicating inadequately, you end up with a system that was never properly tested end-to-end, which is exactly what happened here. Further, testing that occurs only at the end of a software project is reminiscent of the waterfall software model, which is great for designing nuclear missiles but extremely bad for designing a dynamic, highly scalable software system with heavy user-interface and usability requirements like Healthcare.gov.
Before the public fiasco, anticipating a bumpy rollout of Healthcare.gov, I mused that an Obamacare API and an API Management architecture might be a good thing. Now I think it’s more than a good thing; API Management just might be a savior. How? Rather than build a user interface, the government should have made an API and had the contractors compete to build the best interface. The API could be a RESTful API launched as an open API, allowing anyone to take a crack at using it to make the best possible experience for the user. This architecture cleanly separates the concerns: the government runs the server side and manages the API, data, and transactional services, and someone else writes the client piece.
For the uninitiated, the API here is a programming interface that represents just the server side of the Healthcare.gov functionality. It would consist of a set of interfaces providing all of the data and transaction methods a client consumer needs to purchase healthcare through the exchange. It could use well-established, highly scalable technologies such as an API Management Gateway for handling traffic and an API Catalog and Developer Portal for on-boarding public and internal developers. For reference, Intel’s API gateway can handle over 18 billion calls per month, per node. Moreover, the current technology offerings for a developer catalog and portal would effectively allow internal developers working at the government to compete with external developers to build the best user interface.
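As a rough sketch of what such a server-side API might look like, consider a minimal RESTful service. The endpoint names, fields, and the choice of Flask are my own illustrative assumptions, not an actual government design:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical plan catalog; a real exchange would back this with the
# government's data and transactional services behind the API gateway.
PLANS = [
    {"id": "bronze-01", "tier": "bronze", "monthly_premium": 215.00},
    {"id": "silver-01", "tier": "silver", "monthly_premium": 310.00},
]

@app.route("/plans", methods=["GET"])
def list_plans():
    """Data method: any client can render the plan catalog however it likes."""
    return jsonify(PLANS)

@app.route("/enrollments", methods=["POST"])
def create_enrollment():
    """Transaction method: accept an enrollment request from any client."""
    body = request.get_json()
    return jsonify({"status": "received", "plan_id": body["plan_id"]}), 201

if __name__ == "__main__":
    app.run()
```

The point of the sketch is the boundary: everything above it is the government’s responsibility; everything that renders it to a user is not.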
The best part about this approach is that the government would not have to worry about the user interface and client experience. That could be left to people who know how to design great user interfaces, and it would open the way to making the Healthcare.gov application available not just through a browser, but as an HTML5 or native mobile application. This is a true win-win: the government doesn’t get blamed for a bad website, and consumers get the best possible experience.
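On the other side of that boundary, a competing client team would need only the published API. Here is a hedged sketch of such a consumer, matching the hypothetical endpoints above; the base URL is invented for illustration:

```python
import requests

# Hypothetical base URL for the exchange API; any number of competing
# clients -- web, HTML5, or native mobile -- could sit on top of it.
BASE_URL = "https://api.healthcare.example.gov"

def cheapest_plan():
    """Fetch the plan catalog and pick the lowest premium for display."""
    plans = requests.get(f"{BASE_URL}/plans", timeout=10).json()
    return min(plans, key=lambda p: p["monthly_premium"])

def enroll(plan_id: str):
    """Submit an enrollment transaction through the public API."""
    resp = requests.post(f"{BASE_URL}/enrollments",
                         json={"plan_id": plan_id}, timeout=10)
    resp.raise_for_status()
    return resp.json()
```

Ten teams could ship ten versions of this client, and the best one wins, all without touching the government’s servers.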