An interview podcast with Dave Duggal, founder of EnterpriseWeb
In 2009, as the Cloud was starting to emerge, Dave Duggal founded EnterpriseWeb to address the challenges of an increasingly fragmented enterprise IT estate. He saw that siloed software stacks were becoming roadblocks to end-to-end interoperability, automation and management. Duggal recognized the need for an abstraction layer that provides “shared understanding” across business silos, partner ecosystems and clouds to enable a composable and agile enterprise.
The implication here is that meaningful interoperation across heterogeneous distributed systems has been rare and difficult to achieve at enterprise scale. Still is, in fact.
Duggal spent years independently researching the subject, reading hundreds of academic papers and working with his chief architect to design a system that could scalably abstract and manage the stack in an automated fashion, via a high-level graph model of an organization and its systems environment (i.e., an ontology of concepts, types and policies).
“As a business person, I wanted to look down on the organization and see all the elements that construct it: the people, information sources and capabilities, so I can flexibly connect them for a variety of use cases and manage my business,” Duggal said.
Fourteen years later, Duggal and company have yet another telecom industry innovation award to add to those on the shelf: EnterpriseWeb (EWEB) recently won Light Reading’s 2023 Leading Lights “Outstanding AI/ML Use Case” award for its groundbreaking work on enterprise-grade generative AI for intelligent service orchestration.
Duggal gave an example of how the platform leverages generative AI as a conversational interface, while EnterpriseWeb’s ontology provides the domain knowledge to ground the interaction for safe, contextual, deterministic automation.
A telco enterprise customer can make a high-level verbal request, “I want a secure 5G gateway, configured this way, deployed in this cloud or on this edge node, and I want it to meet these SLAs,” without having to care about the technical details. EWEB’s ontology abstracts the complexity and the platform’s runtime fulfills and assures the service. EWEB acts as the backend, leveraging the ontology to interpret customer requests, automate decision-making and optimize system responses.
In this example, the platform generates a telco-grade service topology, which is a graph of the relevant service elements. The platform presents it to the customer and asks, “Is that what you wanted?” The customer can then modify it to their liking, again verbally, with no code, and EWEB updates the topology accordingly.
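To make the flow concrete, here is a minimal sketch of how an ontology can ground a high-level intent and expand it into a topology graph. All names here (the service type, component lists, policies) are hypothetical illustrations, not EnterpriseWeb’s actual model or API:

```python
# Toy ontology: domain knowledge that grounds a high-level request.
# Entries and component names are invented for illustration.
ONTOLOGY = {
    "5g_gateway": {
        "components": ["firewall", "upf", "smf"],
        "policies": {"security": "zero-trust"},
    },
}

def generate_topology(service_type, target, slas):
    """Expand a high-level intent into a graph of service elements."""
    model = ONTOLOGY[service_type]  # look the request up in the ontology
    nodes = [service_type] + model["components"]
    # Chain-connect the elements; a real system would derive edges
    # from relationships declared in the ontology itself.
    edges = [(nodes[i], nodes[i + 1]) for i in range(len(nodes) - 1)]
    return {
        "nodes": nodes,
        "edges": edges,
        "target": target,
        "slas": slas,
        "policies": model["policies"],
    }

topology = generate_topology(
    "5g_gateway", target="edge-node-1", slas={"latency_ms": 10}
)
print(topology["nodes"])  # ['5g_gateway', 'firewall', 'upf', 'smf']
```

The point of the sketch: the generative model only has to extract the intent (“a secure 5G gateway on this edge node with these SLAs”); the deterministic expansion into a concrete topology comes from the curated domain model, which is what keeps the automation safe and repeatable.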
Once the customer is happy, they can direct EWEB to deploy and manage the service per their SLAs, so it is self-scaling, self-healing and self-optimizing. All the inherent complexity is abstracted and automated away so the customer can focus on their business needs without worrying about technical details. The demo can be viewed online.
In EWEB’s approach, processing is shifted to EWEB as the backend. EnterpriseWeb leverages the strengths of generative AI as a natural language interface, while mitigating generative AI’s known weaknesses (accuracy, consistency, security, latency, cost, resource and energy consumption). It’s a practical approach that enables organizations to rapidly operationalize generative AI without compromising their mission-critical automation systems.
EnterpriseWeb’s ontology is implemented as a hypergraph, which runs in memory and is persisted in a key-value database. The design is lightweight (the platform is only 50 MB) and provides low latency and high performance. The cloud-native, event-driven platform deploys as a cluster of pods and can run on-premises, in the cloud, at the edge or on a laptop.
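For readers unfamiliar with the data structure, a hypergraph generalizes a graph by letting a single edge connect any number of nodes, which suits modeling a service that spans many elements at once. Below is a minimal sketch, assuming nothing about EnterpriseWeb’s internals, of a hypergraph held in memory with each node and hyperedge persisted as a key-value pair; a plain dict stands in for the key-value database:

```python
import json

class HyperGraph:
    """In-memory hypergraph mirrored into a key-value backend."""

    def __init__(self, kv_store):
        self.kv = kv_store   # any dict-like key-value backend
        self.nodes = {}      # node_id -> properties
        self.edges = {}      # edge_id -> set of member node_ids

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        self.kv[f"node:{node_id}"] = json.dumps(props)  # persist

    def add_hyperedge(self, edge_id, members):
        # A hyperedge links any number of nodes, not just two.
        self.edges[edge_id] = set(members)
        self.kv[f"edge:{edge_id}"] = json.dumps(sorted(members))

    def neighbors(self, node_id):
        """All nodes sharing at least one hyperedge with node_id."""
        out = set()
        for members in self.edges.values():
            if node_id in members:
                out |= members
        return out - {node_id}

kv = {}  # stand-in for a real key-value database
g = HyperGraph(kv)
for n in ("gateway", "firewall", "upf"):
    g.add_node(n, kind="vnf")
g.add_hyperedge("service-chain", ["gateway", "firewall", "upf"])
print(sorted(g.neighbors("gateway")))  # ['firewall', 'upf']
```

The split shown here, fast in-memory traversal with durable key-value persistence behind it, is one plausible way to get the low latency and small footprint the article describes, though the actual implementation details are EnterpriseWeb’s own.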
Hope you find the interview as illuminating as I have.