Created on 2025-09-25 19:33
Published on 2025-09-25 20:05
The way we develop and use software is undergoing a profound transformation. For decades, applications were tightly coupled, monolithic systems in which every feature and workflow lived inside a single stack. Today, with the rise of Large Language Models (LLMs) and their loose coupling with Data Spaces (databases, knowledge graphs, filesystems, APIs), we are entering an era of componentized software development, where applications are orchestrated from modular Agents, Skills, Tools, and Workflows in a Lego-like fashion.
Traditional software development required domain experts to translate their knowledge into specifications, which programmers would then painstakingly implement. This process was slow, error-prone, and constrained by the inherent lossiness of translation between domain experts and developers. More often than not, the resulting solution became a compounding technical-debt vector, propagating inertia rather than the dynamism initially envisioned. Today, LLMs serve as a new generation of UI/UX components for AI Agent orchestrators, capable of using multimodal natural-language interactions to drive the assembly of, and interaction with, reusable components within workflows that deliver functional solutions.
A typical sequence in this new paradigm looks like this:
Problem Identification — Define what needs solving.
Agent Description — Specify Agents, each comprising a collection of Skills that address the problem.
Data Access — Determine how Agents interact with Data Spaces (databases, knowledge graphs, filesystems, APIs) via protocols like MCP, HTTP, ODBC, or JDBC.
Agent Construction — Build the Agent(s) to perform the solution.
Solution Testing and Delivery — Validate and deploy.
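As a concrete illustration of the Data Access step, here is a minimal Python sketch of a tool an Agent might invoke to query a SPARQL-accessible Data Space over HTTP. The public DBpedia endpoint and the Spike Lee query are illustrative choices (anticipating the demo linked later), not part of OPAL itself.

```python
import urllib.parse
import urllib.request

# Public SPARQL endpoint, used purely for illustration.
DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"

def build_sparql_request(query: str,
                         endpoint: str = DBPEDIA_ENDPOINT) -> urllib.request.Request:
    """Package a SPARQL query as an HTTP GET request asking for JSON results."""
    params = urllib.parse.urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    return urllib.request.Request(
        f"{endpoint}?{params}",
        headers={"Accept": "application/sparql-results+json"},
    )

# The kind of query an LLM might generate from "list films directed by Spike Lee".
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?film WHERE { ?film dbo:director dbr:Spike_Lee } LIMIT 5
"""

request = build_sparql_request(query)
print(request.full_url.startswith(DBPEDIA_ENDPOINT))  # → True
```

Passing the request to `urllib.request.urlopen` would execute the query; the sketch stops at request construction so it stays self-contained.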
In this new approach, LLMs take on much of the scut and grunt work: writing boilerplate code; generating SQL, SPARQL, GraphQL, and other queries; and orchestrating workflow logic, all directed by natural-language manifests (written in Markdown). Human guidance focuses on refining subtleties and ensuring alignment with business rules. The result is faster development cycles, reduced opportunity costs, and software that is both precisely tailored and highly reusable, thereby averting the perennial accumulation of technical debt seen in the past.
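The natural-language manifests mentioned above can be pictured as plain Markdown that an orchestrator parses into an Agent definition. The manifest text and the tiny parser below are hypothetical, a minimal sketch of the idea rather than OPAL's actual manifest format.

```python
import re

# Hypothetical manifest: plain Markdown describing an Agent and its Skills.
MANIFEST = """\
# Agent: DBpedia Film Assistant

Answers questions about films by querying the DBpedia Knowledge Graph.

## Skills
- sparql-query: generate and run SPARQL against DBpedia
- html-report: publish results as an interactive HTML page
"""

def parse_manifest(text: str) -> dict:
    """Extract the Agent name and its Skill names from a Markdown manifest."""
    name = re.search(r"^# Agent:\s*(.+)$", text, re.MULTILINE).group(1)
    skills = re.findall(r"^- ([\w-]+):", text, re.MULTILINE)
    return {"agent": name, "skills": skills}

print(parse_manifest(MANIFEST)["skills"])  # → ['sparql-query', 'html-report']
```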
This paradigm effectively modernizes the age-old Model-View-Controller (MVC) pattern. Natural language inputs and LLMs elevate the Controller layer, providing a standard-based orchestration engine that ties together Data Spaces, business logic, and UI interactions. For developers, this means focusing on components rather than monolithic apps. For domain experts, it means describing software behavior in plain language without needing to write a line of code.
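One way to picture the modernized MVC split: the Model is a Data Space accessor, the View renders results, and the Controller is the slot where an LLM-driven orchestrator maps a natural-language request onto both. Everything below is schematic; the function names and the hard-coded intent are invented for illustration, with a stub standing in where a real orchestrator would call an LLM.

```python
def model_fetch_films(director: str) -> list[str]:
    """Model: stand-in for a Data Space query (e.g. SPARQL against DBpedia)."""
    catalog = {"Spike Lee": ["Do the Right Thing", "Malcolm X"]}
    return catalog.get(director, [])

def view_render(films: list[str]) -> str:
    """View: render results for the user (plain text here, instead of HTML)."""
    return "\n".join(f"* {film}" for film in films)

def controller(request: str) -> str:
    """Controller: where an LLM would translate free-form text into
    Model/View calls. This stub handles one intent to stay runnable."""
    if "Spike Lee" in request:
        return view_render(model_fetch_films("Spike Lee"))
    return "Sorry, this sketch only knows about Spike Lee."

print(controller("Which films did Spike Lee direct?"))
```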
Using the OpenLink AI Layer (OPAL) Playground, developers can create an HTML-based interface that combines multiple Agents and Tools.
LLMs generate up to 80% of the solution, while humans refine the remaining 20% for nuances like security, data transformation, or API-key handling.
Solutions can leverage multi-model DBMS platforms like Virtuoso to interact seamlessly with both tabular and graph-based Data Spaces.
The AI Agents built using OPAL are accessible via any MCP- or OpenAPI-compliant client. Below is an initial native session log demonstrating a sanity check to verify that an AI Agent is functioning properly—for example, the one used in the subsequent DBpedia Knowledge Graph related demos:
Here's an example of a Claude session interacting with an OPAL AI Agent that produces an interactive page, published to a publicly accessible Virtuoso Briefcase (WebDAV filesystem) folder:
https://www.openlinksw.com/data/html/opal-agent-dbpedia-spike-lee-demo-1.html
The actual Claude Session log is at: https://claude.ai/share/55747cfd-73cf-4834-a8de-8cf198a9f9c9
This approach drastically reduces technical debt and accelerates the realization of software value. It empowers enterprises to build modular, reusable software workflows, and enables domain experts to participate directly in software design.
Software development is no longer just coding—it’s about assembling intelligent components with natural language as the apex of the UI/UX stack. LLMs aren’t replacing developers; they’re amplifying them, unlocking an era where software creation is faster, smarter, and more aligned with business and domain needs.