Reprint from August 2014 – infocus.emc.com
Application Transformation is a core component of an IT organization’s transformation from a “technology provider” to a “business enabler.” At its heart, this transition revolves around three principles: value creation, transparency, and collaboration. At first glance, you may be asking, “What about technology? This is IT after all, isn’t it?” To that I reply: in the new economy, technology is a catalyst for change and primarily a means to an end, and that end is to create value for the organization. Technology for the sake of technology is interesting but of limited value unless it is applied against corporate strategies. Case in point: the iPhone/iPad. By itself, the iPhone/iPad is nothing more than a “very expensive cutting board” (see the German iPad commercial). The true value of the iPhone/iPad is not the touch technology, ultra-light compute platform, or simple design; rather, it is the comprehensive ecosystem of services (iTunes, App Store, and countless third-party services) that delivers value to customers and drives revenue through the front door.
The most successful IT organizations are those that can seamlessly weave technology into the business fabric to provide new and/or improved services to internal and external consumers in line with market demands and technological advances. Take Netflix as an example: within the last four years, Netflix has revolutionized how we watch video, from a DVD mail-order subscription service to high-definition streaming. This type of success can only be accomplished through synergistic goals rooted in deep collaboration between all stakeholders along the product development lifecycle, especially the business. In short, this means that developers, testers, release managers, operations engineers, and product owners are partners in the development of new products and services. The ability to rapidly prototype, introduce new technologies, and test new ideas, like Netflix’s streaming-only service in 2010, is only possible when these cross-functional teams are working in concert toward a common goal, namely to create value by promoting stable changes into production.
Collaboration at this level requires trust, and that trust is built on transparent processes and practices that equip decision makers with critical data around opportunity size, opportunity cost, release capacity, and market readiness. It is this last concept that completes the transition from technology provider to business enabler. By being transparent with respect to cost, schedule, capacity, throughput, etc., IT levels the playing field in the boardroom and changes the conversation from “Why can’t we do that?” to “What should we be doing to move the needle on strategy, given our collective constraints?” This is the transition from gatekeeper to enabler.
Narrowing the focus a bit, what does this specifically mean for IT and Applications in the future? What business drivers are incenting this need to change? And, what are the core future state requirements that must be considered when planning a transformation in the applications space?
The drivers behind application transformation are not revolutionary or even new. However, the pace of technology change, the lower barriers to entry into the market, and the availability of cheap and/or free (open-source) software have greatly increased the urgency to transform. As expected, the common drivers behind transformation are as follows:
- Improve speed and agility so that the organization can create value more quickly and pivot in response to customer feedback, market demands, new technologies, etc.
- Improve cost benefit ratio of the application portfolio and development processes so that funding and time commonly associated with maintenance, enhancement, and support can be redirected to more strategic endeavors like mobile or big data.
- Improve quality by building it into the design and shifting it earlier in the SDLC to minimize and/or contain risk associated with market trials, production releases, compliance, security, etc.
While there has been little change in the drivers of application transformation, advances in technology (cloud computing, mobile devices, and platform automation, to name a few) have drastically changed how users expect to consume services and, as a result, how new services are being built. Unlike previous decades, when IT could dictate how consumers interacted with technology, IT consumer expectations today are being set by external technology and service providers like Apple and Amazon; and they are setting that bar very high. As mentioned above, IT has the choice ‘to either disrupt or be disrupted.’ Without transforming how applications and services are built and consumed, IT will struggle to meet the fast, simple, self-service bar set by the new economy. So what does Application Delivery look like in the new IT shop?
In a nutshell, it looks like an app store or services portal. The future-state requirements for a modernized Application Delivery shop are as follows:
- Self-service catalog enabled through a user portal, with automated, traceable service requests and fulfillment.
- Business aligned services managed through service frameworks and accessible through externalized APIs.
- Standardized tool chains that allow for deep automation and workflow orchestration.
- Composite applications built from lightweight, event-driven, micro-service architectures running on hybrid cloud, assembled on demand to address common business and customer use cases.
- Services delivered at the convenience level of commercial service providers, accessible through the device of choice.
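The first of these requirements, automated, traceable service requests and fulfillment, can be sketched in a few lines. This is purely an illustration: the `ServiceRequest` class, `fulfill` function, and `dev-vm-small` catalog entry are hypothetical names, not part of any product described here.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    """A catalog request whose every state change is recorded for traceability."""
    requester: str
    service: str                      # catalog entry, e.g. "dev-vm-small"
    status: str = "submitted"
    audit_log: list = field(default_factory=list)

    def transition(self, new_status: str) -> None:
        # Record the state change so the request is fully auditable.
        self.audit_log.append((self.status, new_status))
        self.status = new_status

def fulfill(request: ServiceRequest, approved: bool) -> ServiceRequest:
    """Approval gate followed by automated fulfillment."""
    if not approved:
        request.transition("rejected")
        return request
    request.transition("approved")
    # In a real portal this step would call provisioning automation
    # (an orchestration API); here we simply mark the request done.
    request.transition("fulfilled")
    return request

req = fulfill(ServiceRequest("alice", "dev-vm-small"), approved=True)
print(req.status)      # fulfilled
print(req.audit_log)   # [('submitted', 'approved'), ('approved', 'fulfilled')]
```

The point of the sketch is the audit trail: automation alone is not enough, every fulfillment step must be traceable for the transparency discussed earlier.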
In Part 2, I will cover a common transformation framework that lays out the key components of application transformation.
As mentioned in the previous post, the key business drivers behind application transformation are speed & agility, cost reduction, and quality/risk reduction. These drivers are complemented by industry trends and technology advances that require application transformation initiatives to consider solutions vectored around automation, integration, and self-service.
Starting with speed & agility, as this is a common starting point for senior leaders and executives in the application space, our first thought is often ‘just code faster.’ While advances in SDKs, frameworks, and libraries have minimized the amount of code that needs to be written, they don’t change the fact that most developers can only type between 40-60 words per minute (or 200-300 characters per minute). Often this pressure to code faster results in poor-quality software, as developers are forced to take technical shortcuts, like hard-coding variables, or to bypass steps in the SDLC in order to go faster. So if the answer isn’t ‘code faster,’ how can we accelerate delivery?
We start by asking, what does it mean to be done? If you look at the principles behind the agile manifesto, it is pretty clear. “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” Using this as the baseline, we start to look holistically at the process and tool chain that supports moving changes from a good idea to working software in production, or creating value. This is what we refer to as the Application Delivery Pipeline.
In many enterprises, this pipeline is constricted by inefficient manual processes, disjointed tool chains, misaligned resources, and technical debt. As expected, these organizations struggle with throughput, agility, and quality, and they typically have higher costs associated with developing and supporting applications. To address these challenges and optimize flow through the pipeline, enterprises need to regulate the flow of new work into the pipeline by prioritizing work based on strategy and value, align people and tools to accelerate throughput by introducing automation and deconstructing silos, and become transparent in their adherence to process and measurement of success.
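Prioritizing new work by strategy and value can be made concrete with a simple scoring function. The sketch below uses a WSJF-style (weighted shortest job first) score, cost of delay divided by effort; the scoring scheme and the sample backlog items are my own illustration, not something the article prescribes.

```python
# Hypothetical backlog items:
# (name, business_value, time_criticality, risk_reduction, effort), each scored 1-10.
backlog = [
    ("mobile checkout",   8, 5, 3, 5),
    ("legacy report fix", 2, 8, 1, 2),
    ("db refactor",       3, 2, 8, 8),
]

def wsjf(item):
    """Weighted Shortest Job First: cost of delay divided by job size."""
    _name, value, urgency, risk, effort = item
    return (value + urgency + risk) / effort

# Highest score first: small, urgent work flows into the pipeline ahead of
# large items, regulating intake by value rather than arrival order.
for name, *_ in sorted(backlog, key=wsjf, reverse=True):
    print(name)
# legacy report fix
# mobile checkout
# db refactor
```

Whatever the exact weights, the useful property is that prioritization becomes a transparent, repeatable calculation rather than a negotiation, which supports the transparency goals described earlier.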
To address flow, EMC provides services that target three key aspects of the application pipeline: lean/agile startup, DevOps, and delivery dashboards. These services help our customers implement optimized SDLC processes coupled with integrated, automated tool chains designed to remove common roadblocks associated with provisioning infrastructure, deploying applications and data, testing, portfolio management, and implementing agile/scrum. Together, these services establish a delivery ecosystem, or factory, designed for speed without compromising quality or reliability.
While optimized processes and tool chains are critical to speed and agility, there are other barriers that add cost and slow value creation within the enterprise, namely technical debt. All legacy portfolios have technical debt. It manifests itself in aging platforms and architectures, duplicate services, inconsistent coding practices, etc. In short, technical debt represents all the ‘permanent short-term fixes’ and deferred maintenance decisions made over time to a given application or portfolio of applications. The size of this problem is unique to every customer, but on average, for every 300,000 lines of code there is $1,000,000 of technical debt. This includes COTS solutions as well as custom-developed solutions. This debt adds complexity and inconsistency to applications, increases the effort needed to support and maintain them, and typically impedes reliability, extensibility, and scalability. In short, technical debt adds cost and reduces speed and agility.
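That rule of thumb is easy to turn into a back-of-the-envelope estimator. Assuming the ratio quoted above ($1,000,000 per 300,000 lines of code) scales linearly, which is a simplifying assumption, a rough portfolio-level figure falls out directly:

```python
# Rough rule of thumb from the text: $1,000,000 of technical debt
# per 300,000 lines of code, i.e. about $3.33 per line.
DEBT_PER_LINE = 1_000_000 / 300_000

def estimated_debt(lines_of_code: int) -> float:
    """Back-of-the-envelope technical-debt estimate for a portfolio."""
    return lines_of_code * DEBT_PER_LINE

# A hypothetical 750,000-line portfolio carries roughly $2.5M of debt.
print(f"${estimated_debt(750_000):,.0f}")
```

An estimate like this is deliberately crude; its value is in sizing the conversation with the business, not in pricing a remediation project.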
That said, addressing technical debt in your portfolio should be done through the dual lens of business and technical value. In other words, it may not make sense to spend time and money modernizing or refactoring an application with a small user population and/or with little strategic value. This process of mapping value and determining the correct future state, or disposition, is often referred to as application rationalization. Through a rationalization effort, applications within a portfolio are identified and measured using a variety of metrics to determine the value and cost of modernizing. Using this data-driven approach, applications are then mapped to specific dispositions illustrating the desired future state of the application.
Basic disposition planning places applications and workloads into the five future-state categories outlined below:
| Disposition | Description |
| --- | --- |
| Retire | The business and/or IT value analysis shows a diminishing value proposition (performance, cost, support, maintenance) with regard to sustainability, maintenance, and long-term functionality; additional investment is not warranted or recommended |
| Retain | Reasonable investment is warranted for maintaining performance and functionality, technology refresh, migrations, and consolidations |
| Replace | A positive business and/or IT value proposition remains that warrants investment, but the technology is not a business differentiator; potential migration and consolidation to an external solution provider (SaaS) |
| Replatform | A positive business and/or IT value proposition remains that warrants investment, but aging technology will soon impact the value proposition; maintain and replatform the technology baseline |
| Rewrite | Reasonable investment is warranted for code updates/enhancements, potential migrations and consolidations (platform-specific), and feature/function additions |
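A rationalization pass over this table might be sketched as a simple scoring rule. The 0-10 score scale, the thresholds, and the `differentiator` flag below are illustrative assumptions of mine, not part of the framework itself:

```python
def disposition(business_value: int, tech_value: int, differentiator: bool) -> str:
    """Map rationalization scores (0-10) to one of the five dispositions.
    Thresholds are illustrative, not prescriptive."""
    if business_value < 3 and tech_value < 3:
        return "Retire"       # diminishing value; no further investment
    if not differentiator:
        return "Replace"      # worth keeping, but commodity: SaaS candidate
    if tech_value < 3:
        return "Rewrite"      # valuable to the business, poor technical shape
    if tech_value < 6:
        return "Replatform"   # aging platform will soon erode the value
    return "Retain"           # healthy on both axes: maintain and refresh

print(disposition(2, 1, False))  # Retire
print(disposition(7, 8, True))   # Retain
print(disposition(8, 2, True))   # Rewrite
```

In practice the inputs would come from the metrics gathered during rationalization (user population, run cost, incident rates, strategic alignment), and the cut-offs would be tuned per portfolio; the point is that dispositions fall out of data, not opinion.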
Using a repeatable, incremental approach, EMC will systematically work through applications and/or application groups and create a modernization roadmap that reflects the various dispositions. For example, if an application is scheduled to be retired, we would look at data retention requirements, remediation work associated with the existing integration points, and ultimately design a decommissioning plan for the infrastructure. For Replatform, we may look at reducing cost by migrating the application to a cloud provider, like VMware vCHS. The focus of these efforts is to drive cost out and add value as quickly as possible to fund future modernization phases.
In addition to disposition, another critical factor in the planning and modernization effort is the source platform. For example, strategies and reference architectures that support a mainframe migration to x86 are significantly different from those that support transforming a monolithic client-server application into a micro-service cloud architecture. Understanding the constraints of each platform and developing a series of reference architectures and patterns is critical for defining and developing a consistent, repeatable process for modernizing the portfolio.
As applications and workloads are being modernized and new services are being built to support provisioning and deployment, IT needs to also consider how these assets will be consumed. The last element of the EMC Application Transformation Framework focuses on the consumerization of the apps and services. In this layer, applications and services are provided via unified service portals and/or app stores to the IT consumer, either internal or external. Using metering technologies coupled with approval flows and rules, services are brokered via automated distribution channels.
As you can see, modernizing application delivery is a large undertaking that impacts all aspects of the organization along the delivery pipeline. That said, companies that have embraced lean processes, modern development practices, and DevOps are experiencing benefits such as:
- Deploying code 30x more frequently and reducing time-to-market for software delivery
- Cutting deployment costs by 50% through lifecycle automation, standardization, and developer self-service
- Improving the way that people, processes, and technology work together
- Improving software quality through the use of “fail forward” processes and automated testing
- Achieving higher utilization and flexibility of existing infrastructure by delivering resources on demand
- Improving the resiliency and stability of enterprise platforms by 50% and reducing enterprise risk
- Recovering 12x faster (mean time to recovery, MTTR) in the event of an outage
In subsequent blog posts, we will delve into greater detail on each element along the transformation path and explore new technologies and practices that help transform IT.