More devices. More apps. More data. Both personally and professionally, we all have an ever-expanding collection of software, websites, and apps. Some of us even have internet-connected stuff like a Nest or a Fitbit. All of this to help us manage our work and personal lives.
The conversation since the early 1990s has focused on Big Data. As our lives and workdays become fully digitized, amassing and analyzing that data has been considered the key to success for going on three decades. But the Big Data wonks are missing the impact of all of these interrelated devices and apps. There has been a sea change: it’s no longer about Big Data, it’s about Fast Data. What use is all that data if you can’t get it to talk to other data quickly and without hassle?
In a world where we have a million places to check for all the data we need to run our lives and organizations, we frequently fail to find easy ways to connect it all. The result feels like chaos and disorder, and that translates to dysfunction in the everyday world: an emergency room can’t get your medical record from a different hospital when it needs it most.
That is why interoperability is an increasingly important, valued, and expected element of software and Internet of Things (IoT) enabled hardware. Interoperability is defined as being “(of computer systems or software) able to exchange and make use of information,” and those exchanges are making the day-to-day easier for major companies and individuals alike. Consumers can get aggregated information more readily, medical professionals can connect relevant patient information more effectively, and companies can launch new services more quickly. The age of Fast and Connected Data is upon us.
There are three main areas where interoperability is having an increased impact on the way technology is developed and consumed to create a Fast Data paradigm: communication, data standards, and infrastructure. Let’s go over each of these in a little more detail, with a few examples.
One major way that the connection and communication between software tools and apps is increasing is through APIs (application programming interfaces).
APIs are nothing new and are considered core functionality these days, but their growing use is still changing the game in industries like healthcare and industrial automation.
Chances are that you are already using software that leverages an API. For example, if you got an email from your hotel before a trip and it included the weather and a map, the weather likely came from the AccuWeather API and the map and directions from the Google Maps API.
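In practice, consuming an API usually means issuing an HTTP request and parsing a structured (typically JSON) response. Here is a minimal sketch using a hypothetical weather payload, not AccuWeather’s actual schema:

```python
import json

# A hypothetical JSON payload, similar in spirit to what a weather API
# might return for a given city (not a real provider's schema).
raw_response = '{"city": "Portland", "temperature_f": 58, "conditions": "Rain"}'

def summarize_weather(payload: str) -> str:
    """Parse an API's JSON response into a human-readable summary."""
    data = json.loads(payload)
    return f"{data['city']}: {data['conditions']}, {data['temperature_f']} F"

print(summarize_weather(raw_response))  # Portland: Rain, 58 F
```

The hotel’s email system never needs to know how the weather service stores its data internally; the agreed-upon response format is the contract.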
One of the major barriers to making use of the data exchanged by software and apps is a lack of standardization around how they handle data, but there are movements in multiple spaces to address this issue.
Libraries have had this right for a while when it comes to their catalogs; consider their use of MARC (Machine-Readable Cataloging) to create consistency in their catalog records.
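To see why a shared standard helps, here is a heavily simplified illustration (not a full MARC 21 record) in which two libraries describe the same book with the same MARC-style field tags:

```python
# MARC assigns numeric tags to fields: 100 is the main author entry and
# 245 is the title statement. Because both libraries use the same tags,
# their records can be searched or merged with one piece of code.
record_library_a = {"100": "Melville, Herman", "245": "Moby-Dick"}
record_library_b = {"100": "Melville, Herman", "245": "Moby-Dick; or, The Whale"}

def titles(records):
    """Pull the title field (tag 245) from every record, regardless of source."""
    return [r["245"] for r in records]

print(titles([record_library_a, record_library_b]))
```

Without the shared tags, each library’s records would need custom handling before they could be compared at all.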
The medical industry is starting to face this issue head on, thanks in part to the Affordable Care Act’s requirement for Electronic Medical Records (EMR). Even something as simple as designating gender in a dataset (male/female, M/F, 0 or 1) becomes confusing if two systems can’t agree. Then you need an interface to map between the two, or you maintain two separate datasets, introducing a high likelihood that one will fall out of sync with the other, leading to bad outcomes (and frustration on the part of users, consumers, or members).
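The mapping problem above can be made concrete. Suppose, hypothetically, that one system stores gender as “M”/“F” and another as 0/1; a translation layer is needed just to merge the two:

```python
# Hypothetical encodings used by two different systems for the same field.
SYSTEM_A_TO_CANONICAL = {"M": "male", "F": "female"}
SYSTEM_B_TO_CANONICAL = {0: "male", 1: "female"}

def normalize(value):
    """Map either system's encoding onto one canonical vocabulary."""
    for table in (SYSTEM_A_TO_CANONICAL, SYSTEM_B_TO_CANONICAL):
        if value in table:
            return table[value]
    raise ValueError(f"Unrecognized encoding: {value!r}")

print(normalize("F"))  # female
print(normalize(0))    # male
```

Every such mapping is extra code to write, test, and keep in sync; a shared standard removes the need for it entirely.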
Historically, medical information was cataloged and stored in vastly different ways from one provider to the next, making it incredibly difficult to combine that data into one medical record. FHIR (Fast Healthcare Interoperability Resources) is working to change that by providing a standards framework that creates consistency for medical information.
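For instance, FHIR represents each piece of clinical data as a typed “resource” with a standard JSON shape. A simplified sketch of a FHIR Patient resource (real resources carry identifiers and more metadata):

```python
# A simplified FHIR Patient resource. FHIR standardizes both the field
# names and the allowed values (e.g. gender must be one of
# male / female / other / unknown), so any conforming system can read it.
patient = {
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-04-12",
}

def display_name(resource):
    """Build a display name from a FHIR-style name element."""
    name = resource["name"][0]
    return f'{" ".join(name["given"])} {name["family"]}'

print(display_name(patient))  # Jane Doe
```

Because the structure is the standard, an emergency room’s software can read a record produced by a different hospital’s system without a custom translation layer.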
Another barrier to interoperability is the inability of different apps and software tools within an organization to work seamlessly together due to differences in infrastructure (DevOps structure), language, and semantics.
The same barrier exists between organizations, such as when two organizations want to share a resource but run different server configurations, languages (Java vs. Ruby), and so on.
However, the rise of self-installed, self-contained environments, referred to as software containers, is helping to overcome these barriers. Docker is the leader in this space: an open source tool that automates the deployment of applications.
From Docker’s site:
“Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.”
In the end, this means that developers can create and share apps without having to worry about the dev environment or the language used by other apps or other organizations. That equates to Fast Data.
The demand for interoperability, and the ways we can get our software and IoT-enabled hardware to integrate, will only continue to grow as people rely on more and more applications in their work and personal lives. That means the way we write software will change dramatically to accommodate these new approaches, and end users will have increasing expectations about the ways their technological tools relate and integrate with one another.
So the question is: what are you doing to set up Fast Data practices in your organization? What needs to change?