Software is code. All forms of information technology intelligence are created by, built from and constructed using computer programming languages in the form of code. It therefore stands to reason that we can say: software intelligence is of code, by code and in code.
But, pedantic fastidiousness over nomenclature notwithstanding, this is not quite the same as saying that a particular ‘solution’ is something that delivers software intelligence as-code.
When we talk about software intelligence as-code, we are talking more directly about the development, testing, delivery and management of IT system intelligence as codified logic: lines of software code with their associated dependencies, libraries, classes and other functions (such as API connections), in a form that dovetails with the syntax, policy structure and deeper architecture of an inherently composable everything-as-code approach to software application development.
Or, in other words, we’re software-izing software in a sense, i.e. making it inherently ‘soft’: interchangeable, portable, manageable and more easily integrated where and when we need it.
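To make the idea a little more concrete, here is a minimal sketch of what declaring monitoring intelligence ‘as code’ might look like: a versionable structure that can be reviewed, diffed and reused like any other software artifact. The class and field names here are invented for illustration and do not reflect any particular vendor’s schema.

```python
from dataclasses import dataclass, field

# Hypothetical schema: a monitoring policy declared as code, so it can be
# version-controlled, reviewed and reused like any other software artifact.
@dataclass
class AlertRule:
    metric: str          # e.g. "response_time_p95"
    threshold: float     # breach level that triggers the alert
    window_minutes: int  # evaluation window

@dataclass
class ObservabilityConfig:
    service: str
    alert_rules: list = field(default_factory=list)

    def add_rule(self, rule: AlertRule) -> "ObservabilityConfig":
        self.alert_rules.append(rule)
        return self

# The same configuration every environment inherits, built from code:
config = ObservabilityConfig(service="checkout")
config.add_rule(AlertRule("response_time_p95", 250.0, 5))
config.add_rule(AlertRule("error_rate", 0.01, 10))
```

Because the policy lives in code, it can be checked into the same repository as the application it watches, which is the essential point of the as-code approach.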
But why the grassroots definition?
Because we are seeing more organizations that make up the cloud cognoscenti talk about intelligence as-code in the now much more composable, compartmentalized and containerized world of distributed systems development. Among the vendors keen to serve this dish up hot and fresh is Dynatrace. The company is now delivering software intelligence as-code in a form that includes broad and deep observability, application security and advanced AIOps capabilities, all as code.
Dynatrace says this enables developers who are adopting everything-as-code practices to incorporate software intelligence capabilities into their applications. But what does that mean in terms of live production applications and data services? It means that developers using this approach can automate the orchestration of all resources across the software development lifecycle that are required to deliver cloud-native applications and infrastructure at scale.
In addition, developers can ensure their applications achieve standards and Service Level Objectives (SLOs) for critical metrics, including performance, quality and security, or automatically initiate corrective action when these standards are not met.
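The pattern described above — meet the SLO or automatically trigger corrective action — can be sketched in a few lines. The SLO names, thresholds and actions below are invented for illustration; they are not Dynatrace’s API, just the general shape of an automated quality gate.

```python
# Hypothetical SLOs covering the three critical metric families mentioned
# above: performance, quality and security. Values are illustrative only.
slos = {
    "p95_latency_ms": 300.0,   # performance
    "error_rate": 0.01,        # quality
    "critical_vulns": 0,       # security
}

def evaluate_release(metrics: dict) -> tuple[bool, list]:
    """Return (meets_slos, breaches); any breach triggers corrective action."""
    breaches = [name for name, limit in slos.items()
                if metrics.get(name, float("inf")) > limit]
    return (not breaches, breaches)

def gate(metrics: dict) -> str:
    ok, breaches = evaluate_release(metrics)
    # Corrective action is automated: a breached SLO blocks the rollout.
    return "promote" if ok else f"rollback: {', '.join(breaches)}"

# A healthy release passes; a slow one is rolled back automatically.
healthy = gate({"p95_latency_ms": 210.0, "error_rate": 0.002, "critical_vulns": 0})
slow = gate({"p95_latency_ms": 480.0, "error_rate": 0.002, "critical_vulns": 0})
```

Embedding the gate in the delivery pipeline is what turns an SLO from a dashboard number into an enforced standard.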
“As organizations shift left [to start testing earlier in the technology lifecycle], developers take on the responsibility for ensuring code is high-quality, performant and secure,” said Stephen Elliot, group VP, I&O, cloud operations and DevOps at analyst house IDC. “Simultaneously, they are asked to ship new applications and features quickly. To achieve all requirements and ensure code complies with organizational standards, teams are embracing everything-as-code practices.”
By enabling developers to access libraries of templates for reusable configurations, Dynatrace says it is making it easier for development teams to establish and adhere to organizational best practices for observability and security, without adding friction to the development process. This is made possible through additional Application Programming Interface (API) endpoints, which enable and extend configuration-as-code for multiple Dynatrace capabilities, including anomaly detection and alerting, dashboarding and analytics and data enrichment.
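To illustrate the template-plus-API idea, here is a hedged sketch of how a team might render a shared anomaly-detection template and target it at a configuration-as-code endpoint. The endpoint path and payload fields are invented for this sketch and are not taken from Dynatrace’s documentation.

```python
import json

# Illustrative only: a reusable anomaly-detection template that every team
# fills in, so organizational defaults travel with the code. Field names and
# the endpoint path below are hypothetical, not a real vendor API.
TEMPLATE = {
    "type": "anomaly_detection",
    "metric": None,
    "sensitivity": "auto",
    "alerting": {"channel": None},
}

def render_config(metric: str, channel: str) -> dict:
    """Fill the shared template so every team inherits the same defaults."""
    cfg = json.loads(json.dumps(TEMPLATE))  # cheap deep copy of the template
    cfg["metric"] = metric
    cfg["alerting"]["channel"] = channel
    return cfg

def as_request(cfg: dict, base_url: str) -> tuple[str, str]:
    """Return the (url, body) pair a deployment pipeline would PUT."""
    return (f"{base_url}/config/v1/anomalyDetection", json.dumps(cfg))

url, body = as_request(render_config("cpu_usage", "#oncall"), "https://example.internal")
```

The point is the shape of the workflow: templates live in a library, pipelines render them, and an API applies them — no console clicking, no friction added to development.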
“Organizations adopting practices like GitOps and infrastructure-as-code also require observability and automation-as-code to increase speed and resiliency,” said Steve Tack, SVP of product management at Dynatrace. “Unlike alternative solutions that stop with basic metrics and require manual configuration, Dynatrace extends to intelligent observability, advanced AIOps and application security.”
The sum of all these capabilities is meant to drive real-time actions that ensure IT teams achieve SLOs and optimize critical business metrics. This enables development, DevOps and Site Reliability Engineering (SRE) teams to bring high-quality, secure innovations to market faster and at enterprise scale.
Cloud-native composability is a (real) thing
What all this points to are the realities being witnessed at the coalface of multi-cloud-native development in the real (okay, virtually abstracted) world. Dynatrace CTO Bernd Greifeneder says that this software community has grown threefold in the past three years, to around six million developers.
But there are challenges, says Greifeneder: despite the benefits on offer, cloud-native adopters are realizing that operational complexity is killing (again, virtually, not literally) their developers and DevOps teams. The pressure to shift left and improve the quality of their services means developers now have to think about scalability, security, observability and more, as well as continuing to deliver new features, functions and innovation in the tech stack.
“To help them succeed, there are now more cloud native solutions out there than there are Pokémon – 1,028 at last count, according to the Cloud Native Computing Foundation (CNCF). The downside is that there are too many choices. There’s nobody who can catch them all or become a master of that many solutions. Without a ‘golden path’, it’s easy for all these tools to invite chaos into the stack, as organizations end up with pockets of tribal knowledge across different engineering teams, making it difficult to connect processes across the software development lifecycle,” said Greifeneder.
For Dynatrace, that challenge manifests itself in the drive to add new extensions to the Dynatrace Hub and build integrations with the most popular solutions across the DevSecOps toolchain. This opens the door for developers and DevOps teams to automate more processes across their tech stack more successfully, as the insights from Dynatrace can feed into more solutions and workflows. Of course, automation is already at the heart of DevOps and Site Reliability Engineering (SRE), but without integration across the toolchain it’s difficult to do that with consistency and accuracy.
Tangled tautological turmoil
Greifeneder says that in most organizations, every team takes its own approach to automation, which means they’re reinventing the wheel over and over, creating scripts on a case-by-case basis and using ‘copy-paste’ versions to quickly plumb together more processes across different tools. This results in a huge, tangled mess of automation code that creates more problems than it solves.
“Developers are forced to waste time ‘tinkering’ with their tools, updating and fixing their automation scripts again and again, pulling them away from all the other work they’re supposed to be doing. That’s why developers need a smarter approach that enables them to build automation into their delivery pipelines, rather than manually adding it as an afterthought,” said the Dynatrace tech leader, in something of a validation, perhaps, of the need for software intelligence as-code in the first place.
On the road to a more self-driving cloud (a kind of de facto industry mission that almost every enterprise tech vendor now likes to ruminate over and prophesy towards), we may well see automation and intelligence (as an entity) becoming a more strategic IT ‘asset’ (or perhaps competency), as organizations gain the ability to implement more processes ‘as code’ and create that golden path for their developers and DevOps teams.
This is the point where DevOps can evolve into GitOps, where developers are able to check in a desired state for their software and leave Kubernetes to do the driving to get it there.
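The check-in-and-converge idea can be illustrated with a toy reconciliation loop: the checked-in desired state is compared with what is actually running, and the controller computes the actions needed to close the gap. Kubernetes does this with controllers watching etcd; this standalone sketch, with invented service names, just shows the pattern.

```python
# Desired state as declared in Git (replicas per service) versus the state
# actually observed in the cluster. Names and counts are illustrative.
desired = {"checkout": 3, "payments": 2}
observed = {"checkout": 1, "payments": 2, "legacy": 1}

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions that drive observed state toward desired state."""
    actions = []
    for svc, want in desired.items():
        have = observed.get(svc, 0)
        if have != want:
            actions.append(f"scale {svc}: {have} -> {want}")
    for svc in observed:
        if svc not in desired:
            actions.append(f"remove {svc}")  # anything undeclared is pruned
    return actions

actions = reconcile(desired, observed)
```

The crucial property is that the loop is repeatable: run it again after convergence and it produces no actions, which is what lets Git remain the single source of truth.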
Driving towards dynamic desired state
“However, to make that more effective for supporting the organization’s innovation goals, the desired state needs to change continually – it can’t just remain static, in line with what the organization is doing today. That’s why observability is a crucial piece of the puzzle, providing the data that can be intelligently turned into answers to create a feedback loop, highlighting areas for improvement in real-time,” said Dynatrace’s Greifeneder.
As cloud-native complexity inevitably continues to rise, we need to bring the massive amounts of observability data together in full context, to derive answers with the highest precision. Without this, it will be difficult to avoid automating on top of false positives (this is how GitOps could crash the cloud into the wall – and nobody wants that).
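One simple guardrail against automating on top of false positives is to let remediation fire only when an anomaly is corroborated across data sources with sufficient confidence. The confidence scores, threshold and source names below are invented for illustration; they just sketch the principle.

```python
# Sketch of the guardrail argued for above: automated remediation only fires
# when an anomaly is confirmed with enough context, so a single noisy signal
# (a likely false positive) cannot trigger a destructive action on its own.
def should_remediate(signals: list, min_confidence: float = 0.9,
                     min_corroborating: int = 2) -> bool:
    """Act only on high-confidence anomalies seen by multiple sources."""
    confirmed = [s for s in signals if s["confidence"] >= min_confidence]
    return len(confirmed) >= min_corroborating

# One jittery probe is not enough to trigger an automated rollback...
noisy = [{"source": "probe-a", "confidence": 0.95}]
# ...but the same anomaly corroborated by trace data is.
corroborated = noisy + [{"source": "traces", "confidence": 0.93}]
```

In other words: the more context behind an answer, the safer it is to let a machine act on it — which is the full-context argument made above.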
Software is still software and intelligence is still intelligence in all its forms (whether that be Artificial Intelligence & Machine Learning or any form of Intelligent Automation directed towards delivering autonomous systems management, Robotic Process Automation or just ‘plain and simple’ chatbots), but we now have software intelligence as-code as a key observability watchtower and control room to direct, drive and deliver.
As clever as it is, software intelligence can get smarter; this might be how.