Dynatrace AppEngine puts low-code, data-driven apps into gear

Software automation has been on something of a journey. It started with low-code: the ability to harness automated accelerators, reference templates and pre-composed elements of application architecture to speed up software engineering as a whole, along with its subsequent stages, such as user acceptance testing and wider application enhancement or integration.

Then, we started to push low-code into more defined areas of application development. This was an era of low-code software (and in some instances no-code, where drag-and-drop abstraction did the work) in which tools were precision-engineered for particular types of application use case. This recent period saw the software industry move low-code into zones such as machine learning and artificial intelligence.

We have also been through cycles of low-code built specifically to serve edge compute deployments in the Internet of Things, as well as architectures engineered to serve data-intensive analytics applications. That data era is the one we are in now.

What is Dynatrace’s AppEngine?

Software intelligence company Dynatrace has launched its AppEngine service for developers working to create data-driven applications. This low-code offering is built to create custom-engineered, fully compliant data-driven applications for businesses.

The company describes AppEngine as a technology within its platform that enables customers to create “custom apps” that can address BizDevSecOps use cases and unlock the wealth of insights available in the explosive amounts of data generated by modern cloud ecosystems.

Was that BizDevSecOps? Well, yes. It’s the coming together of developer and operations functions with an essential interlacing of application operational security. This is security in the sense of supply chain robustness and stringent data privacy, not the cyber defense malware type of security.

The clue is in the name with BizDevSecOps. It involves business users as a means of a) bringing user software requirements closer to the DevOps process, b) progressing software development and operations into a more evolved state capable of delivering "business outcomes," some of which may simply relate to profit, but some of which are hopefully also aligned with developing for the greater good of people and the planet, and c) keeping users happy.

A new virtualization reality

Why is all this happening? Because as we move to the world of cloud-native application development and deployment, we need to be able to monitor our cloud services' behavior, status, health and robustness. It is, arguably, the only way we can put reality into virtualization.

According to analyst house Gartner, the need for data to enable better decisions by different teams within IT and outside IT is causing an “evolution” of monitoring. In this case, IT means DevOps, infrastructure and operations, plus site reliability engineering specialists.

As data observability is becoming a process and function needed more holistically throughout an entire organization and across multiple teams, we are also seeing the increased use of analytics and dashboards. This is all part of the backdrop to Dynatrace’s low-code data analytics approach.

“The Dynatrace platform has always helped IT, development, business and security teams succeed by delivering precise answers and intelligent automation across their complex and dynamic cloud ecosystems,” said Bernd Greifeneder, founder and chief technical officer at Dynatrace.

Looking at how we can weave together disparate resources in the new world of containerized cloud computing, Dynatrace explains that its platform consolidates observability, security and business data with full context and dependency mapping. This is designed to free developers from manual approaches to connecting siloed data, such as tagging, as well as from imprecise machine learning analytics and the high operational costs of other solutions.
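To make the tagging point concrete, here is a minimal Python sketch of what manual, tag-based correlation of siloed data can look like; the datasets, tag names and values below are invented for illustration and are not Dynatrace's data model.

```python
# A small sketch of the manual, tag-based correlation described above: two
# siloed datasets are only joinable because someone tagged both with the
# same key. All names and values are invented examples.
metrics = [
    {"tags": {"app": "checkout"}, "cpu_pct": 87},
    {"tags": {"app": "search"}, "cpu_pct": 23},
]
incidents = [
    {"tags": {"app": "checkout"}, "severity": "high"},
]

# Join the silos on the hand-maintained "app" tag. If a team forgets or
# misspells a tag, the connection is silently lost -- the fragility that
# automatic context and dependency mapping aims to remove.
by_app = {m["tags"]["app"]: m for m in metrics}
for inc in incidents:
    app = inc["tags"]["app"]
    metric = by_app.get(app)
    if metric:
        print(f"{app}: severity={inc['severity']}, cpu={metric['cpu_pct']}%")
```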

“AppEngine leverages this data and simplifies intelligent app creation and integrations for teams throughout an organization. It provides automatic scalability, runtime application security, safe connections and integrations across hybrid and multi-cloud ecosystems, and full lifecycle support, including security and quality certifications,” the company said in a press statement.

What is causal AI?

The use of causal AI is central to what Dynatrace has brought to market here. In the simplest terms, causal AI is an artificial intelligence system that can explain cause and effect: it can explain a decision and the causes that led to it. Not quite the same as explainable AI, causal AI is a more holistic type of intelligence.

“Causal AI identifies the underlying web of causes of a behavior or event and furnishes critical insights that predictive models fail to provide,” writes the Stanford Social Innovation Review.

This is AI that draws upon causal inference: intelligence that determines the independent effect of a specific thing or event, and its relationship to other things, as a component within a larger system.
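For a sense of what causal inference looks like in practice, here is a minimal Python sketch, under invented assumptions, that simulates a confounded system and recovers the independent effect of one variable via stratified (backdoor) adjustment; the scenario, variable names and numbers are illustrative and have nothing to do with Dynatrace's implementation.

```python
# A minimal causal-inference sketch using only the standard library.
import random

random.seed(42)

# Simulate a confounded system: "load" (confounder) drives both a config
# "flag" (treatment) and "latency" (outcome). The flag's true causal
# effect on latency is +5 ms.
samples = []
for _ in range(100_000):
    load = random.choice([0, 1])  # low/high load
    flag = 1 if random.random() < (0.8 if load else 0.2) else 0
    latency = 50 + 20 * load + 5 * flag + random.gauss(0, 1)
    samples.append((load, flag, latency))

def mean_latency(rows):
    return sum(r[2] for r in rows) / len(rows)

# Naive comparison: confounded by load, so it overstates the effect.
treated = [r for r in samples if r[1] == 1]
control = [r for r in samples if r[1] == 0]
print("naive effect:", mean_latency(treated) - mean_latency(control))

# Backdoor adjustment: compare within each load stratum, then average
# the per-stratum effects weighted by how common each stratum is.
adjusted = 0.0
for load in (0, 1):
    stratum = [r for r in samples if r[0] == load]
    t = [r for r in stratum if r[1] == 1]
    c = [r for r in stratum if r[1] == 0]
    weight = len(stratum) / len(samples)
    adjusted += weight * (mean_latency(t) - mean_latency(c))
print("adjusted causal effect:", adjusted)  # ~5 ms, the true effect
```

The naive comparison overstates the true effect by roughly a factor of three, because high load makes the flag more likely and slows things down; adjusting within each load stratum recovers the 5 ms the flag actually adds.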

Dynatrace says that the sum result of all this product development is that, for the first time, any team in an organization can leverage causal AI to create intelligent apps and integrations for use cases and technologies specific to their unique business requirements and technology stacks.

The petabyte-to-yottabyte chasm

Dynatrace founder and CTO Greifeneder puts all this discussion into context. He talks about the burden companies face when they first attempt to work with the "massively heterogeneous" stacks of data they now need to ingest and analyze. In what almost feels redolent of the Y2K problem, we're now at the tipping point where organizations need to cross the chasm from petabytes toward yottabytes; for scale, a yottabyte (10^24 bytes) is a billion petabytes (a petabyte being 10^15 bytes).

“This move in data magnitude represents a massively disruptive event for organizations of all types,” Greifeneder said while speaking at his company’s Dynatrace Perform 2023 event this month in Las Vegas. “It’s massive because existing database structures and architectures are not able to store this amount of data, or indeed, perform the analytics functions needed to extract insight and value from it. The nature of even the most modern database indices was never engineered for this.”

Opening up about how the internal roadmap strategy at Dynatrace has been working, Greifeneder says that the company didn't necessarily want to build its Grail data lakehouse technology, but it realized that it had to. Grail qualifies as a data lakehouse because it offers the size and scope of data lake storage combined with the kind of query ability found in smaller, more managed data marts or data warehouses.

By offering schema-less queries, Grail lets users "ask questions" of their data resources without the upfront schema design work they would normally have to undertake with a traditional relational database management system. Dynatrace calls it schema-on-read: as it sounds, a user applies a schema to a data query at the actual point of looking for data in its raw state.
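As a rough sketch of the schema-on-read idea, the following Python snippet, a toy and not Dynatrace's query language, stores raw JSON records with no upfront table design and applies a "schema" (the fields selected) only at the moment of the query; the records, field names and query helper are hypothetical.

```python
# Schema-on-read in miniature: the raw records are stored as-is, and the
# "schema" is just the set of fields the query pulls out at read time.
import json

raw_log = """
{"ts": "2023-02-01T10:00:00Z", "service": "checkout", "latency_ms": 120}
{"ts": "2023-02-01T10:00:01Z", "service": "search", "latency_ms": 45}
{"ts": "2023-02-01T10:00:02Z", "service": "checkout", "latency_ms": 98, "region": "eu"}
"""

def query(lines, where, select):
    """Apply a filter (where) and a schema (select) at read time."""
    for line in filter(None, (l.strip() for l in lines.splitlines())):
        record = json.loads(line)  # raw, schema-less storage
        if where(record):
            yield {field: record.get(field) for field in select}

# "Ask a question" of the raw data: no table was designed upfront.
for row in query(raw_log,
                 where=lambda r: r["service"] == "checkout",
                 select=("ts", "latency_ms", "region")):
    print(row)
```

Because no schema was declared upfront, the optional region field simply comes back as None where it is absent, rather than forcing a table redesign.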

“I wouldn’t call it raw data — I would prefer to call it data in its full state of granularity,” Greifeneder explained. “By keeping data away from processes designed to ‘bucketize’ or dumb-down information, we are able to work with data in its purest state. This is why we have built the Dynatrace platform to be able to handle huge cardinality, or work with datasets that may have many regular values, but a few huge values.”

Huge cardinality

Explaining what cardinality means in this sense is enlightening. Ordinal numbers express sequence (think first, second or third), while cardinal numbers express quantity. In databases, cardinality refers to the number of distinct values in a dataset, so tracking every individual user or item means very high cardinality.

As an illustrative example, Greifeneder says we might think of an online shopping system with 100,000 users. In that web store, we know that some purchases are frequent and regular, but some are infrequent and may be for less popular items too. Crucially though, regardless of frequency, all 100,000 users do make a purchase in any one year.

Faced with that huge cardinality challenge, organizations wanting to track all those users in a time-series database capable of logging who does what and when would typically bucketize and dumb down the outliers.
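A toy Python sketch of that bucketize-and-dumb-down pattern, with an invented store and invented numbers, shows where the detail goes:

```python
# Tracking one metric series per user means 100,000 distinct series (high
# cardinality), so a common workaround is to keep only the busiest users
# and merge the rest into a single "other" bucket -- which is exactly
# where the outliers disappear. All numbers here are illustrative.
from collections import Counter
import random

random.seed(7)

# Simulate a year of purchases: a few heavy buyers, a long tail of
# infrequent ones, but every one of the 100,000 users buys at least once.
purchases = Counter()
for user_id in range(100_000):
    purchases[user_id] = 1 + (random.randint(0, 200) if user_id < 100
                              else random.randint(0, 3))

print("distinct series (full cardinality):", len(purchases))

# Bucketized version: keep the top 1,000 users, collapse the rest.
TOP_N = 1_000
top = dict(purchases.most_common(TOP_N))
other_total = sum(purchases.values()) - sum(top.values())
print("distinct series after bucketizing:", len(top) + 1)
print("purchases hidden inside 'other':", other_total)
```

The top 1,000 series survive; the other 99,000 users, all of whom really did buy something, are collapsed into a single undifferentiated bucket.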

Dynatrace says that’s not a problem with its platform; it’s engineered for it from the start. All of this is happening at the point of us crossing the petabyte-to-yottabyte chasm. It sounds like we need new grappling hooks.
