I am getting solid productivity increases in my microservice development process by using Azure Application Insights (AAI) to monitor the performance and health of services running in the cloud or on-premises. Not only can I quickly put AAI to productive use in the code-test-debug cycle to see where performance bottlenecks are, but I can also use it when my services are in normal operation to monitor their health daily (and send me email alerts), and over weeks and months via graphs and charts. These capabilities are easily available and require very little code to be written. Indeed, many health indicators have their telemetry data automatically generated by the AAI dlls added to a service project.
Microsoft describes Application Insights as follows: “Visual Studio Application Insights is an extensible analytics service that monitors your live” service or “web application. With it you can detect and diagnose performance issues, and understand what users actually do with your app. It’s designed for developers, to help you continuously improve performance and usability. It works for apps on a wide variety of platforms including .NET, Node.js and J2EE, hosted on-premises or in the cloud” (from Application Insights – introduction).
In summary, Azure Application Insights is a software developer/dev-ops Business Intelligence (BI) package. Similar to BI in other realms, AAI allows one to easily and quickly compose and visualize charts and graphs of key indicators of the “business” of software development and operations, and also drill down into the minute details with other charts and lists of detailed data. I am really impressed with how quickly one can come up to speed and productively use it.
AAI has charts, searches, and analyses available both in Visual Studio and in the Azure Portal. When you need health alerts sent by email, long-term charts and graphs, and an easy-to-use query language to search through your health telemetry data, use the Azure Portal. Visual Studio’s Application Insights capabilities provide good performance- and usage-oriented charts (with drill-down capabilities) and searches available during debug test runs without leaving Visual Studio.
The following are examples of some of the basic Visual Studio AAI performance-analysis displays that can be had with very little work on your part to write the code that generates the telemetry data and/or displays it in a useful way.
Below, Figure 1 is an example of an AAI chart I’ve found highly useful in pinpointing the source of performance bottlenecks. This chart is available via the Visual Studio Application Insights toolbar by clicking on the “Explore Telemetry Trends” menu item, which displays an empty chart. You must then click the “Analyze Telemetry” button to generate the display. Note how you can set up the chart to display various “Telemetry Types”, “Time Ranges”, etc.
If you double-click on one of the blue dots in Figure 1, you’ll start a “drill down” operation that opens the “Search” display shown below in Figure 2. This display lists all the individual measurements that have been aggregated into the dot you double-clicked on. And in a pane to the right (not shown) it lists the minute details of the item your cursor is on. Also note that you can use the check boxes to the left and above to further refine your search. Figure 2 below shows the drill-down display you get from double-clicking on the 1sec – 3sec small blue dot at the Event Time of 4:48 in Figure 1.
The displays in Figures 1 and 2 show the aggregation and breakdown of the elapsed time it takes for a single WCF service to complete about 100 dequeue operations from an Azure Service Bus Queue using the NetMessagingBinding in ReceiveAndDelete mode. After the service dequeues a single item, it checks to see if the item is valid, and then saves it in Azure Table Storage. You can get a link to the service code from the blog article SO Apps 2, WcfNQueueSMEx2 – A System of Collaborating Microservices. That code does not include the telemetry-generating code.
Therefore, from the point of view of Application Insights there are a couple of relevant things to measure in this service:
- The total elapsed time of the “request”, from the start of the service operation until it executes its return statement. This data is generated by a few lines of telemetry code that I had to write. The telemetry code uses TelemetryClient.Context.Operation, TelemetryClient.TrackRequest(), and TelemetryClient.Flush(), provided by the AAI dlls added to the service project. These are described in Application Insights API for custom events and metrics in the “Track Request” section. The telemetry code also uses System.Diagnostics.Stopwatch to record the total elapsed time of a service operation.
- The elapsed time it takes for each of the 2 “dependencies” (aka external service calls) to execute. The external dependencies are the Azure Service Bus and Azure Table Storage. Specifically, one dependency is the Service Bus Dequeue operation; the other is the Table Storage Save operation. In both cases the dependency elapsed time is automatically measured by the Application Insights dlls, and this data is automatically sent as telemetry as well. I did not have to write any code to support dependency analysis. All the work is done by the 5 or 6 Application Insights dlls that are added to a service project via NuGet. Note that much of this “automatic telemetry” depends on the .NET version: many of the “automatic” performance monitoring features require that the “Target Framework” of a service’s project be set to .NET 4.6.1. You can use lower versions as well, but may not get as many automatic measurements. Note also that many .NET Performance Counters are automatically generated and sent out as telemetry.
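As a rough sketch of what that hand-written request telemetry can look like (the class and operation names below are my own invention for illustration, not the actual service code from the article linked above), the pattern is: start a Stopwatch, do the work, then report the elapsed time via TrackRequest() and Flush():

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

public class DequeueService
{
    private readonly TelemetryClient telemetry = new TelemetryClient();

    public void ProcessNextItem()
    {
        var startTime = DateTimeOffset.UtcNow;
        var stopwatch = Stopwatch.StartNew();
        bool success = false;
        try
        {
            // Dequeue from Service Bus and save to Table Storage here.
            // Both of those dependency calls are timed automatically
            // by the Application Insights dlls -- no code needed.
            success = true;
        }
        finally
        {
            stopwatch.Stop();
            // Report the total elapsed time of this "request" to AAI.
            telemetry.TrackRequest("ProcessNextItem", startTime,
                stopwatch.Elapsed, success ? "200" : "500", success);
            telemetry.Flush();
        }
    }
}
```

The dependency measurements arrive automatically; only this outer request timing requires hand-written code.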
Figure 1 measures the first item, the total elapsed time of the request, from start to finish including the elapsed time of any dependencies. Figure 1 shows 2 performance test runs – one at an Event Time of 4:23 and the other at an Event Time of 4:48. It is obvious that the run at 4:48 (at the right of the chart) had the vast majority of the service requests complete in <= 250 milliseconds. That is fast!
In the 4:23 run (at the left of the chart) the majority of the service requests took between 500 milliseconds and 1 second to complete. That is much longer. Why? The 4:23 run had the WCF service running on my development system, while the 4:48 run had the service running in an Azure WorkerRole. It is not surprising to see much faster elapsed times in the cloud, since the overall network latency is much, much less there when the service does Service Bus and Table Storage operations. Plus, there is more CPU power available to the Azure-based service, since the WorkerRole host did not also have to run the test client. Both runs had the test client running on my desktop development system in my office, using a single thread enqueuing 100 items one after another.
Being able to quickly separate the execution time of the service code from the code it depends upon is key to rapidly pinpointing the source of performance problems. From Figure 2’s drill-down display, obtained by double-clicking on the 1sec – 3sec small blue dot at the Event Time of 4:48, you can clearly see where the slowness in these two independent dequeue-and-save operations came from: one was entirely due to the slowness of the service code, while the other was largely due to the slowness of the Service Bus during that service operation.
Figure 3 below shows the drill-down display you get from double-clicking on the 3sec – 7sec small blue dot at the Event Time of 4:48. Note that the source of slowness this time is NOT the Service Bus or Table Storage dependencies, but rather solely the service code itself. Perhaps there was some thread or resource contention going on between service instances here that deserves further investigation. AAI has capabilities to aid in pinpointing these sorts of things as well, but they are not covered here.
The above displays (and more) are available in Visual Studio. And you get even more displays and capabilities in the Azure Portal via the Application Insights resource that collects and analyzes the telemetry sent from services and clients.
Please see the following links for more info on AAI:
WCF Monitoring with Application Insights — With the dll that comes with this you do not have to write the request tracking code yourself. It takes care of that for you, providing code-less performance telemetry data.
I hope this introduction to Application Insights spurs you to further investigate its capabilities and how it can be useful to you.
dotnetsilverlightprism blog by George Stevens is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Based on a work at dotnetsilverlightprism.wordpress.com.
Back in the 20th century, before the broad use of Service Oriented Apps, many software systems depended upon distributed transactions using two-phase commit to ensure data was properly obtained from, and saved into, databases. Many software systems did not use messaging back then. Nowadays Service Oriented Apps often use messaging, and also steer away from distributed transactions, since the various databases used are spread far and wide — from the data center to various clouds. Widely distributed data makes the resource locking required for distributed transactions problematic in various ways, and distributed transactions tend to produce high latency times.
Learning how to design and develop Service Oriented systems that use messaging while avoiding distributed transactions requires some new perspectives, knowledge, and skills. One part of the new knowledge required is how to create idempotent designs that can handle “at least once” message delivery. This kind of delivery is common with most messaging technologies today. “At least once” means that a given message may be delivered once, or twice, or more often. The software must be able to effectively deal with multiple deliveries, and with the duplicate data within the multiply delivered messages. And this requires idempotency.
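As a minimal sketch of one widely used idempotency technique (the types and the in-memory store below are hypothetical; in a real service the processed-ID record would live in durable storage such as a database table, updated in the same unit of work as the business change), the handler records the IDs of already-processed messages and silently skips duplicates:

```csharp
using System;
using System.Collections.Generic;

public class Message
{
    public Guid MessageId { get; set; }   // assigned once by the sender
    public string Body { get; set; }
}

public class IdempotentHandler
{
    // Hypothetical in-memory stand-in for a durable de-duplication store.
    private readonly HashSet<Guid> processedIds = new HashSet<Guid>();

    // Returns true if the message was processed, false if it was a duplicate.
    public bool Handle(Message message)
    {
        if (!processedIds.Add(message.MessageId))
            return false;  // already seen: the redelivery is safely ignored

        // ... perform the actual business work exactly once here ...
        return true;
    }
}
```

The essential property is that handling the same message a second (or tenth) time leaves the system in exactly the same state as handling it once.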
Here are a few articles I found useful concerning idempotency and related issues:
Messaging: At-least-once-delivery, by Jonathan Oliver, April 2010.
Idempotency Patterns by Jonathan Oliver, April 2010. This is a really useful article and widely cited.
Ditching 2-phased commits, by Jimmy Bogard, May 2013. A good overview of problems with 2-phased commits and alternatives.
(Un)reliability in Messaging: idempotency and deduplication by Jimmy Bogard, June 2013. This shows a couple useful techniques with code snippets.
Life Beyond Distributed Transactions, by Pat Helland, 2007. Pat worked at Amazon.com, and it is interesting to read about his perspective at that time.
I hope you find this information as useful as I did!
Building highly available, scalable, and resilient software running in the cloud is quite different from building such software systems that run on-premises. Why? In the cloud you must plan for your software encountering a much higher rate of failures than usually encountered in on-premises systems. This article provides links that describe techniques and best practices for building your cloud software to effectively deal with such frequent failures.
Here is a rough sketch of the sources of these failures:
- Cloud hardware failures – The cloud uses vast numbers of cheap, commodity compute, storage, and network hardware units to host both the cloud provider’s PaaS services and customer services and apps. This cheap hardware fails more frequently than that of on-premises systems, which generally utilize expensive, top-of-the-line compute, storage, and network hardware. On-premises hardware systems are designed to achieve a high Mean-Time-Between-Failure (MTBF) so that software running on them does not have to deal with a high rate of hardware failures. The cloud is the opposite, having a low hardware MTBF due to a much higher rate of failure of its cheap hardware. These routine hardware failures are very common and can happen multiple times a day to a single cloud service. The cloud control software (known as the “fabric”) is programmed to recover the software affected by hardware failures, both customer software and cloud provider service software. The “fabric” recovery happens in the background, out of sight. During the recovery process from these routine hardware failures the cloud provider’s services return a “not available” signal to customer software using the service. The duration of such “not available” failures is typically measured in seconds, perhaps minutes, rarely longer. This requires that customer software running in the cloud be designed to 1) gracefully handle the higher rate of routine, short term failures of both hardware and the cloud provider services it uses, and 2) have a low Mean-Time-To-Recovery from non-routine failures as well. The much higher rate of such routine failures is the big difference between cloud and on-premises software. Note that the cost savings of using cheap, commodity hardware are passed on by cloud providers to customers.
- Cloud hardware overloading – Many cloud provider services are multitenant (software-as-a-service), i.e. they share blocks of hardware (nodes) between multiple customers utilizing a cloud provider service. For example, Azure SQL is a multitenant cloud provider service that is used by multiple customer services and apps. A multitenant cloud provider service shares hardware amongst customers to reduce costs, with the savings passed on to the customer. When some customer’s software becomes very heavily loaded it may use too many resources provided by a particular cloud provider service sharing compute, storage, or network nodes. In this heavily loaded situation the cloud provider service itself and/or the “fabric” control software will start throttling the cloud provider service to protect it and its hardware from becoming fatally overloaded and crashing. Such throttling appears to the customer’s software as if the cloud provider service is temporarily unavailable. In other words, it appears as if the cloud provider service has failed for some reason, since it will be unresponsive for a few seconds or minutes until the throttling stops. This intermittent protective throttling affects all customer software utilizing that cloud provider service. Throttling is a very common occurrence, happening as much as several times per hour, or more during heavy usage periods, with a typical duration of seconds per occurrence, but occasionally longer. Customer software must be written so it is able to effectively deal with such throttling to remain resilient and available. Note that some cloud providers have non-shared (single tenant) PaaS services available for a premium price. Use of such premium services will sidestep throttling issues, other than the throttle you should build within your own customer-developed services to avoid hard crashes due to overloading.
- Cloud catastrophic failures – Compared to the above failures, catastrophic failures are very rare. They occur perhaps a few times per year and typically involve the loss of one or more cloud provider services for use by customers for a half hour, several hours, or for a day or so in extreme cases. Such failures are caused by 1) Physical disasters, like earthquakes or terrorism, affecting data centers or their network infrastructure, 2) Massive hardware failures, 3) Massive software failures or bugs, or 4) Operational failures, i.e. the cloud provider operations staff making a big mistake or a series of smaller mistakes which cascade into a big outage. Mission critical customer services and apps must be designed to withstand these longer duration failures as well as the above shorter duration failures. One way to achieve such “high availability” is for customer software to “failover” to another data center located in a different geographical area. Note that this situation is quite similar to what can happen in an on-premises data center, and is also addressed by the links that follow.
The routine short term failures described above are known as Transient Faults in Azure. Please see the item called “Retry General Guidance” in the “Azure Cloud Application Design and Implementation Guidance” link below for a full description of how Transient Faults happen and best practices to deal with them.
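The standard defense against a Transient Fault is the retry pattern. As a minimal sketch of the core idea (hand-rolled here purely for illustration; in production you would typically use the retry policies built into the Azure client libraries or a dedicated retry framework), a retry loop with exponential backoff looks something like this:

```csharp
using System;
using System.Threading;

public static class TransientRetry
{
    // Retries an operation that may throw due to a transient fault,
    // waiting longer after each failure (1s, 2s, 4s, ...).
    public static T Execute<T>(Func<T> operation, int maxAttempts = 4)
    {
        TimeSpan delay = TimeSpan.FromSeconds(1);
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Transient failures (throttling, brief unavailability)
                // usually clear within seconds, so wait and try again.
                Thread.Sleep(delay);
                delay = TimeSpan.FromTicks(delay.Ticks * 2);
            }
        }
    }
}
```

A real implementation should catch only the exception types known to indicate a transient condition; retrying a non-transient error (say, a bad credential) just wastes time.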
The good news in the area of failure is that the cloud “fabric” control software is very intelligent and will usually be able to automatically heal cloud hardware failures and hardware overloading failures. For these, the healing process may take a few seconds, or a minute, or generally some time that is within the Service Level Agreement (SLA) for a particular cloud service like Azure SQL or Azure Storage. A Service Level Agreement is a legal agreement between customers and a cloud provider that gives a cloud provider a financial incentive to provide a stated level of service to customers. Each cloud service usually has its own unique SLA. Typically, if the cloud provider is not able to fulfill the terms of the SLA for a particular cloud service, it will refund the customer’s payments for the services used to some stated extent. Below are the typical levels of service one can expect from an Azure SLA, usually measured on a monthly basis in terms of minutes of availability per month.
So, how much failure time per month can one expect from different SLAs?
- An SLA of “three 9s” (a cloud service is available 99.9% of the minutes in a month) results in a maximum unavailability time of 43.2 minutes per month, or 10.1 minutes per week.
- An SLA of “four 9s” (a cloud service is available 99.99% of the minutes in a month) results in a maximum unavailability time of 4.32 minutes per month, or 1.01 minutes per week.
- Many cloud services have a 99.9% availability. Some are a little higher, some a little lower.
- For more on Azure SLA’s please see the “Characteristics of Resilient Cloud Applications – Availability” section of the below link to “Disaster Recovery and High Availability for Azure Applications”.
- With 10.1 minutes of unavailability per week being typical, each outage appearing to customer software running in the cloud as if a cloud provider service has failed, you absolutely must build your cloud software to effectively deal with frequent failures of all kinds. Failure is a normal part of cloud computing. It is not exceptional at all.
- Plus, for mission critical services and apps running in the cloud you must also build them for high availability so that they can gracefully withstand a catastrophic failure as well, and very rapidly come back on line, perhaps in seconds to minutes.
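The arithmetic behind the figures above is straightforward. A 30-day month has 30 × 24 × 60 = 43,200 minutes, so:

```latex
\text{downtime per month} = (1 - \text{SLA fraction}) \times 43{,}200 \text{ min}
```

For three 9s: (1 − 0.999) × 43,200 = 43.2 minutes per month, and 43.2 × 7/30 ≈ 10.1 minutes per week. For four 9s: (1 − 0.9999) × 43,200 = 4.32 minutes per month ≈ 1.01 minutes per week.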
The info sources presented below describe specific techniques to deal with such failures.
Azure Cloud Application Design and Implementation Guidance by Microsoft Patterns and Practices — Over the past year Microsoft has pulled together its key Azure best practices into one place. This makes it so much easier to draw upon when building software to run in Azure. The Guidance contains links to 13 focused areas. In my opinion the “must reads” among them are as follows. They are required to gain a minimal effective understanding of what it takes to build “Highly Available, Scalable, Resilient Azure Services and Apps”.
- Retry General Guidance (this has more detail on why there are lots more failures in the cloud)
- Availability Check List
- Scalability Check List
- Monitoring and Diagnostics Guidance
- Background Job Guidance
Disaster Recovery and High Availability for Azure Applications – This Microsoft document covers strategies and design patterns for implementing high availability across geographic regions to cope with catastrophic failures. These patterns allow an Azure app or service to remain available even if an entire data center hosting the app or service ceases to function. They also aid in reducing the Mean-Time-To-Recovery for your cloud hosted software.
Hardening Azure Applications – A book by Suren Machiraju and Suraj Gaurav published by Apress in 2015. It does a great job of identifying techniques to build “Highly Available, Scalable, Resilient Azure Services and Apps”, as well as covering security, latency, throughput, disaster recovery, instrumentation and monitoring, and the “economics of 9s” in SLAs. It is invaluable in defining requirements and dealing with the business in these areas. The target audience is Architects and CIOs, but Senior Developers and Technical Leads will also benefit from it. We all have a steep cloud learning curve to climb in the area of understanding and defining an organization’s non-functional requirements for cloud services and apps, plus the techniques required to meet those requirements. This book speeds one on their way.
Cloud Design Patterns: Prescriptive Architecture Guidance for Cloud Applications – An online and paperback book by Microsoft Patterns and Practices, published in 2014. This provides excellent primers on key cloud topics like Data Consistency and Asynchronous Messaging, plus an excellent section with in-depth explanations of a number of “Problem Areas in the Cloud”. So if you are unsure of terminology or technology terms, this is a good place to learn the basics.
Finally, a new way to aid building “Highly Available, Scalable, Resilient Azure Services and Apps” has just become available in Azure. It is called Service Fabric. I will cover that in future blogs.
I’ve been getting quite fast development times using Azure Stream Analytics (ASA) to analyze streams of unstructured data, and to transform the format of such data, i.e. breaking the data up into different streams and/or reconstituting it into different structures and streams. These are things we often need to do, and now we do not always have to write programs to do them. In some cases we can use ASA instead.
The learning curve for ASA is quite manageable. I found the longest part of the learning curve was working with ASA’s SQL-like query language, particularly learning how to use its ability to do real-time analysis of data streams via the Tumbling, Hopping, and Sliding time windows it offers. But if you know the basics of SQL this only takes an hour or so to learn, with good examples at hand (in the links below). I hope the links to ASA info sources will shorten your learning curve as much as they shortened mine, plus open your eyes to the possibilities ASA offers — it is a powerful, yet easy-to-use tool.
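To give a flavor of those window functions, here is a hypothetical Tumbling Window query (assuming an input named Input carrying DeviceId, Temperature, and EventTime fields; the names are mine, not from any of the articles below) that averages a sensor reading over non-overlapping 30-second intervals:

```sql
SELECT
    DeviceId,
    AVG(Temperature) AS AvgTemperature,
    System.Timestamp AS WindowEnd
FROM Input TIMESTAMP BY EventTime
GROUP BY DeviceId, TumblingWindow(second, 30)
```

TIMESTAMP BY tells ASA to order events by their own timestamps rather than by arrival time, and each 30-second window emits one averaged row per device.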
Here is a basic introductory example showing the process of building an ASA job and its query in the Azure Portal – “Get started with Azure Stream Analytics to process data from IoT devices“ by Jeff Stokes of Microsoft. The screen shots of the Azure Portal for ASA in this link will give you an understanding of how to work with ASA and its query language. Note that you need not write external code to get things working. All your work, including writing and debugging the query, is done in the Azure Portal UI. Note that you may need to write some C# code later for production monitoring of the ASA job and any Event Hubs it gets data from.
At the time of writing this blog article, ASA can input and output data from the following Azure services:

ASA Input Sources
- Event Hub
- IoT Hub
- Blob Storage
- Reference Data in a Blob

ASA Output Destinations
- Service Bus Queue or Topic
- Power BI
- Blob Storage, Table Storage, and SQL Database, among others
These inputs and outputs provide an amazing array of options for processing data at rest (residing in a Blob) or data in motion (streaming into an Event Hub or IoT hub).
Here are two common usage scenarios for ASA:

- Searching for patterns in log files or data streams
  - This can include using ASA to analyze log files that are programmatically created by one’s software, to look for errors and warnings of certain kinds, or for telltale evidence of security problems. “SQL Server intrusion detection using Azure Event Hub, Azure Stream Analytics and PowerBI” by Francesco Cogno of Microsoft is an example of such a usage scenario.
  - Since ASA works on live data streams contained in Azure Event Hubs, it can be used to search for patterns in telemetry data from the outside world, e.g. IoT systems. For example, one could find each item in the input stream that had “Alert” in the field named “EventType” and place that record into a Service Bus Queue read by a Worker Role whose job is to push alert messages to a UI.
- Calculating real-time statistics on-the-fly
  - An example is calculating moving averages and standard deviations, and being able to create alert records sent to an Alerts queue when such a calculation exceeds some preset level. “Using Stream Analytics with Event Hubs” by Kirk Evans of Microsoft presents an example of this usage scenario, as does the first link above.
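The alert-routing pattern search described above can be sketched in a few lines of ASA query (the stream, output, and field names here are hypothetical):

```sql
-- AlertQueue is an ASA output bound to a Service Bus Queue;
-- TelemetryInput is an ASA input bound to an Event Hub.
SELECT *
INTO AlertQueue
FROM TelemetryInput
WHERE EventType = 'Alert'
```

Every matching record flows straight into the Service Bus Queue, with no custom code in between.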
Other Useful Info Sources
“How to debug your ASA job, step by step” by Venkat Chilakala of Microsoft. This can save lots of time when debugging.
“Query examples for common Stream Analytics usage patterns” by Jeff Stokes of Microsoft. For both simple and complex query techniques by example.
“Scale Azure Stream Analytics jobs to increase stream data processing throughput” by Jeff Stokes of Microsoft. This will give you in depth knowledge of ASA.
“Stream Analytics & Power BI: A real-time analytics dashboard for streaming data” by Jeff Stokes of Microsoft. How to quickly display charts from data output by ASA.
“Azure Stream Analytics Forum” on MSDN. I have found this forum to contain some really useful posts. Plus you can ask questions as well.
I hope you find these info sources as useful as I did in opening up a new world of cloud-based data analysis and transformation!
One of my current technology explorations is polyglot persistence. I am now mostly through the reading stage, and it is quite clear that NoSQL databases can be quite useful in certain situations, as can relational databases. Using both NoSQL and relational databases together in the same solution, each according to its strengths, is the essence of the polyglot persistence idea.
Here are some sources of information I’ve found to be most useful on NoSQL databases, their strengths, weaknesses, and when and how they can be best used:
- Martin Fowler’s book NoSQL Distilled (2013) has been immensely helpful in gaining an understanding of the various DBs, their strengths and weaknesses, and key underlying issues like eventual consistency, sharding, replication, data models, versioning, etc. It is a short little book that is truly distilled. If you read only one thing, this should be it.
- Also very useful is Data Access for Highly-Scalable Solutions (2013) from Microsoft Press and the Patterns and Practices group. It is written with a cloud mindset, contains code examples, and goes into much more detail than Fowler’s book. Importantly, it shows examples of how to design for NoSQL DBs. I found the first few pages of its Chapter 8 “Building a Polyglot Solution” to be an excellent summary of the strengths, weaknesses, and issues one must deal with in using a NoSQL database. That chapter also presents an excellent, succinct summary of general guidelines on when to use a Key-Value DB, a Document DB, a Column-Family DB, and a Graph DB on page 194 of the book.
- The blog article I posted several months ago, CQRS Info Sources, contains links to good articles on techniques that themselves use No SQL persistence (sometimes by implication). Reading these links aided me in seeing areas where NoSQL DBs could be useful.
- Microsoft Press’s book Cloud Design Patterns contains a lot of useful information on patterns that can use NoSQL DBs; guidance on things like Data Partitioning and Data Replication; plus a primer on Data Consistency that promotes a good understanding of eventual consistency versus strong consistency (the latter usually available with a relational DB via transactions). Some of the patterns it describes that can be implemented with a NoSQL DB are Event Sourcing, CQRS, Sharding, and the Materialized View.
Finally, keep in mind that both books listed above advise that relational databases will typically be the best choice for the majority of database needs in a system, and to use NoSQL DBs only when there are strong reasons to do so. The costs of not using a relational DB, with its capability to automatically roll back transactions spanning multiple tables, can be quite substantial due to the complexity of programming the error compensation (rollbacks) by hand.
In just a single year a major change has happened in the multiple waves of technology change that have been washing over the computer and software industries for the last 7 years or so — the Cloud Wave is growing in size at a rate much faster than any of the other waves of change I described a year ago in my blog article “Waves of Technology Change: Grab Your Surfboard or Life Jacket?”
Job Trend Data
Last year I identified the following 4 waves of change from the information in Indeed’s Leading Tech Job Trends, based on the top 10 “fast growing tech key words found in online job postings” (quote is from Indeed Job Trends page):
- New Web Wave – HTML5 and jQuery
- Mobile Wave
- Big Data Wave
- Cloud Wave
The above waves were identified from data shown by Indeed on 2/9/2015. Please see my 2015 blog article (same as that above) for the data this categorization was based on.
During the past year Indeed has modified its Job Trends page. Now (February 2016) it displays only the top 5 tech job trends, rather than the top 10 as in 2015. Below is a comparison of the top 10 Leading Tech Job Trends of 2015 versus the top 5 of 2016, both listed in rank order of how fast the key word is growing in online job postings.
| 2015 | 2016 |
| --- | --- |
| HTML5 | Data Scientist |
| Mongo DB | Devops |
| iOS | Puppet |
| Android | PaaS |
| Mobile app | Hadoop |
| Social Media | |
In the above 2016 data we have the following classification, mapping job key words to waves of technology change:
- Big Data Wave – Data Scientist (new in 2016) and Hadoop.
- Cloud Wave – Devops (new in 2016), Puppet, and PaaS.
Conclusion — The major change from 2015 is that the Cloud and Big Data waves have taken over the top 5 fastest growing jobs, completely displacing the Mobile and the New Web waves! And, since Big Data is heavily Cloud based these days, you can also say that the overall Cloud wave is the fastest growing wave of technology change washing over us right now.
Survey Research Data
The RightScale “2016 State of the Cloud Report” adds deeper insight to this conclusion. It is a survey of 1,060 technology professionals (executives, managers, and practitioners) from a large cross section of organizations concerning their adoption of cloud technology. I encourage you to examine the details in the report itself. Below are some key findings from this report:
- The use of “Any Cloud” increased from 93% to 95%. Note that all the data includes experimental projects as well as production systems. Wow, almost all respondents are using the cloud somehow!
- In the last year respondent’s use of “Hybrid Clouds” increased from 58% to 71%.
- Respondents typically use more than one cloud provider, both public and private.
- “Lack of resources/expertise” has replaced security as the top “cloud challenge” since 2015. Concern about security is now the number two “cloud challenge”.
- Percent of respondents running apps in the cloud in 2015 versus 2016 are shown below by cloud provider:
| Cloud provider | 2015 | 2016 | Year-to-year change |
| --- | --- | --- | --- |
| AWS | 57% | 57% | 0% |
| Azure IaaS | 12% | 17% | +5% |
| Azure PaaS | 9% | 13% | +4% |

VMWare, Google, IBM, etc. were all between 4% and 8% in each year.
The above clearly shows that Microsoft’s Azure (with 4% to 5% growth) is taking market share from AWS (with 0% growth). By the way, grabbing market share from competitors is one key characteristic of a market leader.
What to Do? Grab a Cloud and Get Up to Speed
If you are a software development professional (whether executive, manager, architect, or developer) it should be clear that there is a high probability you will be called upon to participate in cloud based projects in the next few years.
My own cloud learning journey has so far taught me how to architect and develop industrial-strength cloud services and hybrid systems (combining cloud and on-premises systems) using the Azure Service Bus, Azure Storage, and Azure Cloud Services. After a number of months of full- and part-time study and development, I became proficient enough to use these skills successfully in my job in July 2015. It took a substantial amount of time and effort to learn the basic skills, develop the vital “cloud mindset”, and integrate the two.
Developing cloud based software requires a very different mindset than developing software for on-premises systems. A “cloud mindset” is required: one has to specifically design for failure and eventual consistency, among other incongruities. This has much farther-reaching implications than you might first imagine. Some of the things one routinely practices in on-premises system development are anti-patterns and anti-practices in the cloud! So not only do you have to learn new things to do high quality cloud software development, you also have to unlearn things you already know that do not work well in the cloud.
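To make “design for failure” concrete, here is a minimal sketch of transient-fault handling with exponential backoff and jitter, the kind of retry logic every call across a network hop eventually needs. All names here (`TransientError`, `call_with_retry`, `flaky`) are hypothetical and purely illustrative, not from any Azure SDK:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a throttling or timeout fault from a cloud service."""

def call_with_retry(operation, max_attempts=4, base_delay=0.1):
    """Retry `operation` on TransientError, doubling the delay each attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # fault persisted; let the caller deal with it
            # Exponential backoff with jitter avoids synchronized retry storms.
            time.sleep(base_delay * 2 ** (attempt - 1) * random.uniform(0.5, 1.5))

# A fake operation that fails twice with a transient fault, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("throttled")
    return "ok"

result = call_with_retry(flaky)  # → "ok" on the third attempt
```

Real cloud SDKs ship retry policies that do this for you; the point is that on-premises habits (assume the call succeeds, treat any failure as fatal) become anti-patterns once every dependency sits behind a network.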
Below are a few information sources I’ve found most valuable on my cloud learning journey. They will help you on your learning journey should you choose Azure.
- “Microsoft Azure — The Big Picture”, by Tony Meleg, MSDN Magazine, October 2015. This article provides an excellent overview of what Azure has to offer from a software developer’s point of view.
- Exam Ref 70-532 Developing Microsoft Azure Solutions, March 2015, by Zoiner Tejada, Michele Leroux Bustamante, and Ike Ellis. At first I found the breadth of information required to develop software on Azure overwhelming. This book solved that problem by bringing it all together in one place, so you do not have to spend hours sifting through online documentation and tutorials (save the excellent tutorials for after you’ve read the book). It provides all the basic details needed to start developing software for Azure, covering all the key features you’ll have to deal with. Plus it goes into reasonable depth with code examples, has good references to more in-depth sources, and helps you learn to use PowerShell. And you don’t have to study for the certification exam and take it if you don’t want to! You can use it solely as a source book and learning guide.
- Cloud Design Patterns: Prescriptive Architecture Guidance for Cloud Applications by Homer, Sharp, Brader, et al. Copyright 2014, Microsoft Patterns and Practices. This is available in paperback (for a fee), or as a PDF (free download), or as a set of web pages. It contains 24 patterns, plus 10 guidance topics. There are also code snippets and samples provided as separate downloads. This book has been extremely helpful in showing me the bigger picture and the “cloud mindset” that one must absolutely learn to work with the cloud – like considering eventual consistency, designing for failure, scaling, replication, partitioning, etc. And it provides explicit guidance on how to effectively deal with these areas as well.
- Since about 2012 MSDN Magazine has published quite a number of well written articles on specific Azure software development technologies, most including code examples. Google “Azure MSDN Magazine” for a list of these articles. Of special interest are the articles published between 2014 and 2016, during the release of an astounding number of innovative and powerful new Azure capabilities that are also very well integrated with Microsoft’s software development tools like Visual Studio. Integration of Visual Studio with Azure capabilities measurably reduces development time and costs. These capabilities and tools, along with competitive pricing, are making Microsoft’s Azure cloud a clear market leader.
Good luck on your cloud learning journey.
Given the challenges of developing apps for modern distributed systems outlined in my previous blog SO Apps 4, Coping with the Rapid Rise of Distributed Systems, exactly what techniques can be used to decrease Time-To-Market (TTM) of these systems and apps? Below, I list specific techniques I have found that will speed your TTM, both in the development of the initial release and in subsequent releases. Many of these techniques address root causes of slow TTM.
- Use volatility-based decomposition as the basis for designing your software architecture – The architectural decomposition of a system into microservices and their components should be driven by the goal of encapsulating the most volatile areas of the system into separate parts, decoupling high-volatility areas from each other so they can vary independently. A volatile area is any aspect of the domain or system that has a high probability of changing at some point in the life of the system, and whose change would severely disrupt the architecture if the area were not encapsulated. Code changes are typically caused by changing requirements or fixing bugs. Encapsulating volatile areas prevents such code changes from rippling through large swaths of the code base. When code changes are well contained within a microservice and/or its components, much less work is needed to make the change, and a faster TTM results in both initial and follow-on development phases.
- Control the expansion of complexity – Tightly constrain the number of interconnections to prevent the non-linear acceleration of complexity from soon burying the project in excess code, slowing TTM more and more over time. See Figure 2 in my SO Apps 4, Coping with the Rapid Rise of Distributed Systems article for a diagram and explanation of this accelerating non-linear effect. Controlling this complexity is readily achieved by constraining interconnections as follows:
- Limit and manage the number of interconnections between components within a service or microservice. A closed layered architecture works very well for this.
- Limit and manage the number of interconnections between services or microservices themselves.
- Avoid nanoservices (very tiny, extremely fine-grained services), which inevitably result in more interconnections, creating more non-linearly expanding complexity.
- Avoid fine-grained service contracts that require a lot of service operations, since these also inevitably create more interconnections, and thus more non-linearly expanding complexity. Instead favor service contracts with fewer coarse-grained service operations and “chunky” data contracts.
- Focus your business logic in your services, not in your UI or split between the two – With multiple UIs (web, mobile, etc.), why set yourself up to make code changes in multiple places because the business logic is sprinkled around? Instead, put all business logic in services as described above, leaving the UI to implement only presentation logic. You’ll get a shorter TTM this way.
- Strongly separate system concerns from business concerns – In all your code, keep most developers focused on consistently adding the highest value by writing code that directly implements business logic, rather than writing plumbing code for system concerns at the same time. System concerns, implemented by plumbing code, are required for messaging, pushing data to clients, logging, auditing, etc. Push the plumbing code implementing system concerns down into utilities and infrastructure modules that the business logic developers can call. Having most developers spend significant time repeatedly writing plumbing code that could be provided by a framework, base classes, or utility services will greatly slow your TTM. It is worth having this system code developed very early in a project by a few highly skilled developers. That investment will quickly pay off in a faster TTM throughout the remainder of development.
- Do the above 4 things at once and you have an Agile Codebase, one that supports increased Business Agility – An Agile Codebase will support future changes being done with a much lower TTM than is the current practice. Why? Because 4 of the root causes of poor TTM have largely been eliminated. The resulting Business Agility allows a business to adapt to changes of all sorts much more quickly, be the change an opportunity, a threat, or from rapidly changing technology.
- Determine the Critical Path of the project (the sequence of development activities that adds up to the longest duration in the network of development-activity dependencies), since it determines the soonest a project can be done, i.e. the minimum TTM. Given the work that must be done and the sequence in which it must be done, how can one schedule work and hope to achieve the shortest possible TTM without knowing the longest path? The Critical Path will affect your project whether or not you choose to use it to your advantage. Knowing it is key to creating realistic expectations in all project stakeholders, and hence credibility.
- Put the very best developers on the Critical Path activities – Your best developers have the highest probability of getting these TTM determining activities done sooner.
- Test early and test often – Service oriented distributed systems require intense, repeated integration testing since bugs are much more difficult to detect and fix than in monolithic apps. Integration tests need to be written and run for each individual module, and also for each subsequent integration of tested modules into larger components or microservices. Do not delay integration testing to the latter part of the project when there is insufficient time to find and fix bugs. That is a sure way to increase TTM, and decrease quality.
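The Critical Path mentioned above is straightforward to compute once the activity dependencies are written down: it is simply the longest path through the dependency graph. Here is a toy sketch; the task names and durations are made up for illustration, not taken from any real project:

```python
def critical_path(durations, deps):
    """durations: {task: days}; deps: {task: [prerequisite tasks]}.
    Returns (minimum project duration, tasks on the critical path).
    Assumes the dependencies form a DAG (no cycles)."""
    finish = {}  # earliest finish time of each task
    pred = {}    # predecessor on the longest path into each task

    def resolve(task):
        if task in finish:
            return finish[task]
        best, best_pred = 0, None
        for dep in deps.get(task, []):
            t = resolve(dep)
            if t > best:
                best, best_pred = t, dep
        finish[task] = best + durations[task]
        pred[task] = best_pred
        return finish[task]

    for task in durations:
        resolve(task)
    # Walk back from the latest-finishing task to recover the path.
    end = max(finish, key=finish.get)
    total = finish[end]
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return total, list(reversed(path))

durations = {"design": 5, "infra": 8, "services": 12, "ui": 6, "integration_test": 4}
deps = {"infra": ["design"], "services": ["design", "infra"],
        "ui": ["design"], "integration_test": ["services", "ui"]}
total, path = critical_path(durations, deps)
# → 29 days along design → infra → services → integration_test; the "ui"
#   task has slack, so speeding it up would not shorten the project at all.
```

Whichever activities land on this path set the minimum TTM; no amount of speed anywhere else in the schedule can shorten it.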
Tools and Technologies
- Favor pre-integrated sets of development tools and frameworks – It typically takes significant developer time to integrate a bunch of disparate tools and frameworks. And when a new release of them comes out with bug fixes it often takes additional developer time to integrate the new release into existing tools, frameworks, and code. All this acts to slow TTM. Much of this work can be avoided by choosing pre-integrated tools and frameworks.
- Avoid using new “preview release” technologies just out of the box – While definitely interesting and alluring, preview releases of new technologies and frameworks tend to be incomplete, have more bugs than usual, require many workarounds, come with insufficient and sketchy documentation, and demand that developers spend significant time learning their basics and even more time learning best practices. Adopt brand new technologies in their solid production releases, subsequent to the previews, and speed up TTM as a result.
Classic Mistakes of Software Development
- Avoiding the “Classic Mistakes” will definitely result in a faster TTM – To be forewarned is to be forearmed. How many of the “Classic Mistakes” will happen on your next project? They will slow your TTM, and they are avoidable! Below are some links I’ve found helpful in this area:
- In 1996 Steve McConnell listed software development’s classic mistakes in his book Rapid Development. In 2008 his company, Construx, conducted a survey of 500 developers who rated the severity of the mistakes, then published a white paper listing the mistakes and summarizing the survey results. This white paper is definitely worth reading. You’ll have to register and log in to download a copy.
- Peter Kretzman’s blog “CTO/CIO Perspectives: Intensely practical tips on information technology management” has a relevant article that looks at the role of senior management in this area: Software development’s classic mistakes and the role of the CTO/CIO
- Jim Bird’s blog article Classic Mistakes in Software Development and Maintenance presents some of McConnell’s material, plus additional useful material from Capers Jones.
Near the end of that article, note that real-world research from Capers Jones shows that “Not identifying and cleaning up error-prone code – the 20% of code that contains 80% of bugs” results in a 50% decrease in developer productivity. Cleaning up error-prone code is thus an excellent way to decrease TTM.
I hope this list aids you in decreasing the TTM in developing your software as much as it has helped me.