The need for Effective Engineering for MLOps & AIOps, Observability and Continuous Verification
Enhancing the Digital Highway Blueprint with Effective MLOps Engineering
During projects with our technology partners providing AI-enabled continuous delivery, such as Harness, and ML/AI-driven observability, such as Splunk and Dynatrace, we have observed an increasing need for a broader and more systematic application of ML- and AI-based systems, because:
- Teams are seeking custom AI and ML implementations not only for Continuous Delivery (CD) and monitoring but for the entire software delivery process.
- Teams have developed a few ML prototypes and solutions but are not comfortable testing, deploying, and monitoring ML systems in production, and need to apply CD, SRE, and OOBASA practices to data and ML models as well.
- Many solution providers and vendors claim AI and ML capabilities, and customers are not used to challenging, evaluating, and integrating such ML/AI-enabled components into their delivery pipelines.
Therefore, we have developed Effective MLOps as a core expertise and engineering discipline to help customers address these ML and AI challenges and to democratize an MLOps practice that has so far been mastered mainly by the leading global digital players.
Enhancing the Digital Highway Blueprint with Effective Observability Engineering
As introduced, the Digital Highway is a blueprint that helps customers deliver reliable digital systems (cloud-native software, ML- and data-driven systems, IoT solutions…). Enabling AIOps as a key component of that blueprint requires a mature and robust observability capability.
The term “observability” became very popular a couple of years ago, when many (legacy or cloud-native) monitoring, APM, log management, and ITOM technology vendors, as well as consortiums such as the CNCF (see definition), referred to it as the “must-have” capability for DevOps, DataOps, and MLOps teams. The marketing-driven adoption of the term by many vendors led to confusion, with observability often misunderstood as a mere synonym for monitoring in the digital age.
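To make the distinction concrete, the following minimal Python sketch shows what observability-oriented instrumentation can look like, using the OpenTelemetry API; the service, span, and attribute names are illustrative assumptions and are not part of any vendor product or of OOBASA:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to stdout for the sketch; a real setup would export them to a
# backend such as Splunk or Dynatrace instead.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def process_order(order_id: str, customer_tier: str, amount: float) -> None:
    # Each request becomes a structured, high-cardinality event that can be
    # queried ad hoc later -- the essence of observability, as opposed to a
    # predefined dashboard metric that only answers questions known in advance.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("customer.tier", customer_tier)
        span.set_attribute("order.amount", amount)
        # ... business logic would run here ...

process_order("o-42", "enterprise", 129.90)
```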
Our first contribution within the Swiss Digital Network in 2020 (see blog posts) addressed the confusion around observability by designing and sharing OOBASA, with the goal of establishing self-service monitoring and analytics.
After four years of practical projects and continuous applied research, we see an increasing need for observability not only in the monitoring field but along the whole software delivery process and its pipelines. We therefore now consider observability an additional essential pillar of the Digital Highway and an advanced engineering practice spanning the whole DevOps life cycle.
Consequently, in collaboration with Digital Architects Zurich, we have developed the concept of Effective Observability Engineering to assist DevOps teams in adopting and applying observability to their own pipelines and production environments.
Enhancing the Digital Highway Blueprint with Effective Continuous Verification Engineering
According to a 2017 Gartner study, “faster releases lead to increasing failures”.
At the Swiss Digital Network, we identified and addressed this challenge in our first “Digital Highway” blueprint. Consequently, we added Continuous Verification as a key capability that augments Continuous Delivery pipelines with automatic verification of quality gates related to functionality, performance, and security.
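As a minimal sketch of such a quality gate, assuming a simple script-based pipeline step, the check below could run after a deployment and block promotion when limits are violated; the metric names and thresholds are illustrative assumptions, not values prescribed by the Digital Highway blueprint:

```python
import sys

# Hypothetical post-deployment measurements; a real pipeline would pull these
# from a monitoring backend (e.g. Prometheus, Splunk, Dynatrace).
measurements = {
    "p95_latency_ms": 480.0,
    "error_rate_pct": 0.7,
    "failed_security_checks": 0,
}

# Illustrative gate limits covering performance, functionality, and security.
limits = {
    "p95_latency_ms": 500.0,
    "error_rate_pct": 1.0,
    "failed_security_checks": 0,
}

violations = [
    f"{name}: {measurements[name]} exceeds limit {limit}"
    for name, limit in limits.items()
    if measurements[name] > limit
]

if violations:
    print("Quality gate FAILED:", *violations, sep="\n  ")
    sys.exit(1)  # a non-zero exit code blocks promotion in the CD pipeline
print("Quality gate passed: the release may be promoted.")
```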

However, after three years of practical projects (mainly implementing the ML-driven CV capability of our partner Harness) and applied research in the Continuous Verification field, we concluded that ML-driven Continuous Verification needs to be extended as follows:
- Not only functional, performance, and security quality gates, but also other aspects of quality assurance and testing need to be enhanced with ML and AI, such as code QA, defect analysis, test design and planning, or even compliance and audit checks.
- In addition to tests, chaos engineering experiments generate a high volume of data that needs to be processed and verified with the help of ML & AI.
Furthermore, the term Continuous Verification has been adopted by several technology providers, albeit with slightly different definitions and scopes:
- GitLab: Continuous verification is the process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.
- Harness: Harness’s Continuous Verification (CV) approach simplifies verification. It uses machine learning to identify normal behavior for your applications, which allows Harness to identify and flag anomalies in future deployments and to perform automatic rollbacks (a simplified sketch of this idea follows the list below).
- OpsMx: Continuous Verification verifies software updates across deployment stages, ensuring their safety and reliability in production.
- Chaos Engineering Reference Book (see https://www.oreilly.com/library/view/chaos-engineering/9781492043850/): Continuous verification is a discipline of proactive experimentation in software, implemented as tooling that verifies system behaviors.
- Verica: Continuous Verification uses experimentation to discover security and availability weaknesses before they become business-disrupting incidents.
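The sketch below, referenced from the Harness definition above, illustrates the core idea behind ML-driven verification in a few lines of Python: learn what “normal” looks like from a baseline release, flag deviating samples in the new deployment, and decide whether to roll back. The numbers, the 3-sigma rule, and the rollback criterion are simplified assumptions, not any vendor’s actual algorithm:

```python
import statistics

# Response times (ms) from a known-good baseline release (illustrative data;
# a real system would learn the baseline from production telemetry).
baseline = [118, 122, 130, 125, 119, 127, 121, 124, 129, 123]

# Response times observed during the canary phase of the new deployment.
candidate = [126, 131, 128, 210, 198, 205, 189, 215, 202, 196]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag samples deviating more than three standard deviations from the baseline.
anomalies = [x for x in candidate if abs(x - mean) > 3 * stdev]
anomaly_ratio = len(anomalies) / len(candidate)

if anomaly_ratio > 0.3:  # illustrative rollback criterion
    print(f"Verification failed ({anomaly_ratio:.0%} anomalous) -> roll back")
else:
    print("Verification passed -> promote the release")
```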
To help teams gain clarity about a coherent definition and scope, we developed, in collaboration with our colleagues from DigitalQ, an Effective Continuous Verification approach: a holistic engineering discipline that automates the verification of quality gates and resiliency throughout the delivery pipelines and increases overall testing and SRE productivity with machine learning and AI.
Our forthcoming blog post, “Digital Highway 2.0 or the five Engineering Pillars to enable Reliable Digital Solutions”, is scheduled for March 12th.