The elegance of this end-state, a harmonious synergy between software and data, belies the complexity and chaos of the transition we find ourselves in. Not that long ago, there was a general balance between the pace and sophistication of software delivery and the speed and maturity of data analysis. Markets and customer bases grew steadily, allowing organizations to keep pace in their software and data competencies as their mutual scope and complexity increased.
However, the consumerization of IT—the ability of lay consumers to access sophisticated software experiences from browsers and mobile devices—unleashed access to a massive, eager and demanding customer base measured in the billions. This gave birth to the current wave of digital transformation. Along with it came the race to market: to build and deliver high-quality software exponentially faster in order to capitalize on a massive global opportunity.
Innovation thus started in the software domain. With faster, better software came the need for purpose-built data stores (document databases, graph databases, etc.), the need to manage the velocity, volume and variety of data collected from millions (or billions) of users and the need to extract insights and value fast enough to further fuel the pace of software innovation. Thus, software innovation planted the seeds for the current focus on data innovation.
While the innovators quickly traversed this software journey, most enterprises are only now fully in the midst of embracing new software methodologies and technologies to deliver software quickly and effectively. Those enterprises at the vanguard of the software journey are now embarking on the data journey—to find ways to safely and quickly harness the value locked away in enterprise data assets. This includes everything from analytics to rapid provisioning of copies of production data for software testing. In fact, 451 Research reports that 86% of companies that consider themselves mature in their DevOps journey plan to increase their investment in DataOps. That is, companies that have traversed the software competency curve are now embarking on the data competency curve.
The Dawn of the CDO
The growing traction of data science, AI, analytics and DataOps technologies and vendors points to the rising importance of developing a data competency. The recent emergence of the Chief Data Officer (CDO) is the culmination of this trend.
Given the nascent focus on data at most enterprises, most CDOs are still grappling with defining their role, navigating the dynamics of the C-Suite, defending against regulatory and compliance risk and justifying their position by identifying growth opportunities. Despite these immediate challenges, they do have one thing going for them: they know they’re in the midst of mind-bending change, have the benefit of hindsight on the software journey and the many parallels it carries, and are mentally prepared for the organizational, technological and cultural changes that will need to be navigated. Most companies were stunned by the sheer suddenness and velocity of change that came with the DevOps and software transformation journey. With DataOps and the data journey, they are aware, prepared and committed to the associated changes for the long haul. They can be proactive rather than reactive.
Planning for the DataOps Era
As cycle times for going to market and learning from the markets shorten, and innovation cycles accelerate, CDOs can expect that data-centric organizations will need to drive net-new top and bottom-line results without incurring undue risk. This will come through adoption of new technologies, organizational structures, cultures and workflows. New roles focused on data-centric activities, and new forms of collaboration between different personas will emerge.
In the modern enterprise, production will no longer be the most important data environment. There is a growing need for fast access to secure, governed data outside of production—where the work of the rest (the non-IT part) of the enterprise is centered. What used to be non-prod for IT will now be prod for the rest of the enterprise. To succeed, IT will need to deliver data quickly and securely to the rest of the organization, or even across organizational boundaries to partners and other ecosystem players. Embracing collaborative processes and workflows that support this new pattern of enterprise operation will be critical. Enter DataOps—the unique mix of people, process and technology approaches required across teams in an organization to become agile with data.
Given the similarities outlined earlier between the data journey and the software journey, organizations’ DataOps journey can benefit from the lessons learned in their DevOps journey. For instance, developing a data competency will require new layers of organization and workflow patterns supported by corresponding technologies. Proactively planning and laying down infrastructure for DataOps will yield far better results than adopting technologies ad hoc until a data competency blueprint emerges through trial and error.
Building a Data Company Technology Stack
As with any complex shift, it makes sense to deconstruct data competency into a series of layered capabilities, each building on the one before, that together constitute a sophisticated competency. For example, when building their software competency, organizations that wanted to apply software in sophisticated ways (e.g., cloud-native architectures and microservices) needed to manage distributed software elements (e.g., scale, heal, monitor), which in turn relied on the ability to test and deploy incrementally, consistently and rapidly.
A similar approach can be applied to data. Building collaborative processes around appropriate technological tooling can yield layered component capabilities that collectively define a sophisticated data competency. While the layering was emergent in the software journey, CDOs have the opportunity to be deliberate rather than reactive in developing a layered approach as the diagram below illustrates.
At the foundation of a successful data competency lies the capability to deliver data anywhere in the organization and secure it to meet regulatory and compliance mandates. Being able to quickly secure and move any type of enterprise data across a hybrid cloud context is hard work, but also table stakes for basic data management hygiene. These are critical prerequisites and this foundation needs to be solid in order to support the layers above.
Next, there are various data management activities to master, such as self-service access, governance, data catalogs, data cleansing, collaboration and versioning. These define the next level of data competency and must be mastered before data can be applied in more advanced ways, e.g., machine learning, test data management, analytics and data science.
A thoughtful, layered approach to building a data company requires the right abstractions between layers and avoids tool sprawl: a mesh of ad hoc point solutions that inevitably require human glue to drive data workflows.
CDOs should evaluate their organization’s needs through such a structured lens. Each of these activities brings with it an ecosystem of vendors, requiring further evaluation of vendor capabilities. Critical questions to ask include (among many): Is the solution enterprise-ready? Does it work with all your data sources across hybrid cloud? Will it evolve with your changing needs? How will it interoperate with other solutions? Can it be automated?
As CDOs think through these infrastructure investments to connect different parts of the enterprise, the similarity to laying down railroad tracks to connect a large country is unmistakable and perhaps instructive: the resilience and stability of the lower layers enable flexibility, variety and innovation at the higher levels, just as the simplicity and reliability of railroad tracks enable running electric and steam engines, luxury and cargo bogies alike.
Toward a Data Company
The refrain of digital transformation has been that every company is a software company. But to complete the transformation, every company must also become a data company. Software development practices and tools have evolved significantly in the last decade. We’re in the early stages of a similar evolution in data practices and tools that will take us through the next decade. The symmetry and coupling between software and data are unmistakable, and there is much to be learned from the software journey.
As CDOs, CIOs and CEOs plan the transformation of their enterprises in this new era, they will need to consider building a foundation for their data competency that is stable, resilient, compatible with their software development processes and adaptable to inevitable change. They should be prepared to deliver their data from where it is produced to where it is used throughout the enterprise.
The data company is an organization where data is treated not just as a byproduct of operations but as the lifeblood of the organization—one that drives software development, improves quality assurance, informs decision making and potentially generates revenue in its own right.
See the original article on DevOps.com.
*** This is a Security Bloggers Network syndicated blog from Resources - Blog authored by Delphix. Read the original post at: https://www.delphix.com/blog/becoming-data-company-back-to-future