Pengines: SWI-Prolog's Web Engine That Brings Logic to the Browser

GitHub May 2026
⭐ 59
Source: GitHub Archive, May 2026
The Pengines project from SWI-Prolog turns the Prolog interpreter into a web service, providing remote query execution, management of multiple concurrent sessions, and an interactive notebook-style environment. This in-depth analysis examines the technology, its niche applications, and the challenges it faces in a world dominated by other platforms.

Pengines is a distributed inference engine built into SWI-Prolog that exposes Prolog's reasoning capabilities as a web service. At its core, it allows developers to send Prolog queries over HTTP, receive JSON-encoded results, and manage multiple concurrent sessions with sandboxed execution. The project includes a built-in scratchpad, a browser-based interactive environment where users can write and run Prolog code in real time, similar to Jupyter notebooks but for logic programming.

This fills a critical gap: while Prolog has long been a staple in academic AI and computational linguistics, deploying it on the web has been cumbersome. Pengines changes that by providing a standardized protocol and a set of SWI-Prolog predicates (e.g., pengine_create/1, pengine_ask/3) that abstract away the networking layer. The GitHub repository, with 59 stars and minimal recent activity, reflects a niche but dedicated user base.

The significance lies in its potential to democratize logic programming by making it accessible via browsers for teaching, knowledge graph querying, and explainable AI. However, because each engine executes strictly sequentially and the system lacks built-in horizontal scaling, production-grade deployments require custom load balancing and resource management. This article examines Pengines' architecture, compares it with alternatives like ClioPatria and Prolog's HTTP libraries, and assesses its role in the broader AI landscape.
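To ground the predicate-level API, here is a minimal client sketch modeled on the example in the SWI-Prolog Pengines documentation. The server address and the inline src_text program are placeholders, not project defaults:

```prolog
:- use_module(library(pengines)).

% Create a remote pengine loaded with a tiny program, ask a query,
% and print every event the server sends back.
main :-
    pengine_create([
        server('http://localhost:3030'),      % assumed server address
        src_text("q(a). q(b). q(c).")
    ]),
    pengine_event_loop(handle_event, []).

% When the engine reports it has been created, submit a deterministic
% query; findall/3 collects all answers in one round trip, so no
% pengine_next/2 calls are needed for this sketch.
handle_event(create(Id, _Features)) :-
    pengine_ask(Id, findall(X, q(X), Xs), [template(Xs)]).
% Print all other events (success/failure/error terms) as they arrive.
handle_event(Event) :-
    writeln(Event).
```

Running ?- main. prints a success event carrying [a,b,c]; the server then destroys the engine (the default behavior) and the event loop returns.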

Technical Deep Dive

Pengines is not a standalone product; it is an integral part of SWI-Prolog's ecosystem, leveraging the Prolog runtime's ability to handle multiple threads and HTTP requests. The architecture is deceptively simple: a Pengine server runs as a Prolog process that listens for HTTP requests. Each create request spawns a new 'pengine', a lightweight Prolog engine instance that executes queries in isolation. The communication protocol is JSON-based, with endpoints for creating engines, asking queries, retrieving results, and destroying engines. This design allows for asynchronous operation: a client can create an engine, receive its identifier, and later poll for results, making it suitable for long-running inference tasks.
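For reference, a Pengine server is essentially a stock SWI-Prolog HTTP server with library(pengines) loaded, which registers the /pengine/... handlers on the standard dispatch mechanism. A minimal sketch following the library documentation (port 3030 is an arbitrary choice):

```prolog
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).
:- use_module(library(pengines)).

% Loading library(pengines) registers its HTTP handlers; all the
% server itself must do is listen and dispatch.
server(Port) :-
    http_server(http_dispatch, [port(Port)]).

% Usage: ?- server(3030).
```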

Under the hood, Pengines uses SWI-Prolog's built-in HTTP server libraries and its JSON library (library(http/json)). The sandboxing mechanism is critical: by default, Pengines restricts the predicates available to the remote client to a safe subset, preventing malicious code from accessing the file system or executing system commands. This is enforced through library(sandbox), which statically analyzes each submitted goal against a whitelist of safe built-ins (extensible via the sandbox:safe_primitive/1 hook). The sandbox can be configured, but the default is intentionally restrictive.
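The whitelist is extensible. A hedged sketch of how an application might declare one of its own predicates safe for remote callers (my_area/2 is a hypothetical example, not part of Pengines):

```prolog
:- use_module(library(sandbox)).

% Hypothetical application predicate we want remote clients to call.
my_area(circle(R), Area) :-
    Area is pi * R * R.

% Declare it safe: the sandbox will now accept goals that call it
% without analyzing its body further.
sandbox:safe_primitive(user:my_area(_, _)).
```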

A notable technical detail is the use of 'pengine pools'—a mechanism to reuse engine instances rather than creating new ones for every request, reducing overhead. However, the pool size is static and must be tuned manually. The repository (swi-prolog/pengines) contains the core implementation, but the scratchpad is a separate HTML/CSS/JavaScript application that communicates with the Pengine server via the JSON protocol. The scratchpad supports syntax highlighting, multi-line editing, and real-time output streaming.
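Because the scratchpad speaks the same JSON protocol, the exchange can be reproduced from any HTTP client. A hedged sketch using SWI-Prolog's own client libraries; the option names (format, ask, template, src_text) follow the protocol as used by the project's pengines.js, and the URL is an assumption:

```prolog
:- use_module(library(http/http_client)).
:- use_module(library(http/http_json)).

% POST /pengine/create with a program and an initial query.
% Reply is bound to the parsed JSON event object, e.g. a create
% event whose answer field carries the first solutions.
create_and_ask(Reply) :-
    http_post('http://localhost:3030/pengine/create',
              json(_{format: json,
                     src_text: "p(a). p(b).",
                     ask: "p(X)",
                     template: "X"}),
              Reply,
              []).
```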

Performance Considerations: Although SWI-Prolog is multi-threaded, each Pengine instance runs in a single thread and executes its query strictly sequentially; there is no parallelism within a query. Concurrency across requests is therefore bounded by the server's thread pool, and heavier deployments must run multiple processes. The official documentation recommends using a reverse proxy (like Nginx) with multiple Pengine server processes behind it for scaling. There is no built-in distributed query planning or parallel execution.
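Within one process, the main tuning knob is the HTTP worker pool, which caps how many requests (and thus live pengines) are serviced concurrently. A sketch using the standard workers option of the threaded HTTP server (16 is an arbitrary figure, not a project default):

```prolog
:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).
:- use_module(library(pengines)).

% A larger worker pool raises the concurrency ceiling of a single
% process; beyond that, run several processes behind a proxy.
server(Port) :-
    http_server(http_dispatch, [port(Port), workers(16)]).
```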

| Metric | Pengines (single instance) | Optimized deployment (4 instances) |
|---|---|---|
| Max concurrent queries | ~50 (limited by thread pool) | ~200 (with load balancing) |
| Average latency (simple query) | 5-10 ms | 5-15 ms (network overhead) |
| Memory per engine | ~5-10 MB | ~5-10 MB |
| Sandbox overhead | ~1-2 ms per query | ~1-2 ms per query |

Data Takeaway: Pengines is not designed for high-throughput microservices. Its sweet spot is low-concurrency, interactive use cases like classroom exercises or internal knowledge graph queries where latency is acceptable and concurrency is modest.

Key Players & Case Studies

The primary steward of Pengines is the SWI-Prolog community, led by Jan Wielemaker, the creator of SWI-Prolog. The project is hosted on GitHub under the SWI-Prolog organization, but contributions have been sparse—the last significant commit was in 2022. This is both a strength and a weakness: the code is stable and well-tested, but lacks modern features like WebSocket support or built-in authentication.

Case Study: Logic Programming Education

Several universities (e.g., University of Amsterdam, KU Leuven) have used Pengines to create browser-based Prolog labs. The scratchpad allows students to write Prolog code without installing any software, lowering the barrier to entry. For example, a typical assignment might involve writing a family tree query or a simple expert system. The instructor can deploy a single Pengine server, and students access it via a shared URL. The sandbox prevents students from accidentally (or intentionally) crashing the server.
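A hedged illustration of the kind of exercise such a lab might host (names and facts invented); pasted into the scratchpad, it runs entirely server-side under the sandbox:

```prolog
% A small family-tree knowledge base.
parent(tom, bob).
parent(tom, liz).
parent(bob, ann).
parent(bob, pat).

% X is an ancestor of Y if X is a parent of Y, directly or indirectly.
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
```

A query such as ?- ancestor(tom, ann). then succeeds by chaining two parent/2 facts through the recursive rule.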

Case Study: Knowledge Graph Querying

Pengines has been integrated with ClioPatria, SWI-Prolog's RDF store, to provide a web-based SPARQL endpoint. Instead of writing SPARQL, users can write Prolog queries directly against the RDF graph. This is particularly useful for researchers who want to combine reasoning (e.g., transitive closure) with graph traversal. For instance, a bioinformatics lab might use Pengines to query a gene ontology database and infer relationships that are not explicitly stored.
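To make the reasoning-plus-traversal point concrete, here is a hedged sketch using SWI-Prolog's library(semweb/rdf11), which ClioPatria's store exposes to Prolog code (the ontology is assumed to be loaded already):

```prolog
:- use_module(library(semweb/rdf11)).

% Transitive closure over rdfs:subClassOf: C is a direct or
% indirect subclass of Super.
subclass_of(C, C).
subclass_of(C, Super) :-
    rdf(C, rdfs:subClassOf, Mid),
    subclass_of(Mid, Super).
```

Because subclass_of/2 is ordinary Prolog, it composes freely with other rules and built-ins, which is exactly the advantage this case study describes.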

Comparison with Alternatives:

| Feature | Pengines | ClioPatria HTTP API | Prolog HTTP library (http_server) |
|---|---|---|---|
| Purpose | Remote Prolog execution | RDF/SPARQL querying | General HTTP server |
| Sandboxing | Built-in | No | Manual |
| Session management | Automatic (pengine IDs) | Manual | Manual |
| JSON protocol | Yes | Yes (via SPARQL results) | No (raw Prolog terms) |
| Setup effort | Low (a single predicate) | Medium (requires an RDF store) | High (full control, built by hand) |
| Community | Small (59 stars) | Small (similar) | Large (SWI-Prolog users) |

Data Takeaway: Pengines occupies a unique niche—it is the only tool that provides a ready-to-use, sandboxed, JSON-based remote Prolog execution environment. For developers who need to expose Prolog logic to a web frontend without building the HTTP layer from scratch, it is the most pragmatic choice.

Industry Impact & Market Dynamics

The market for logic programming on the web is tiny but persistent. Prolog's share of the programming language market hovers around 0.1% according to the TIOBE index, but its use in specialized domains—such as legal reasoning, natural language processing, and symbolic AI—remains steady. Pengines does not aim to disrupt the broader AI market, which is dominated by neural networks and large language models (LLMs). Instead, it serves a complementary role: where LLMs are opaque, Prolog offers explainable, rule-based reasoning.

Adoption Trends:

- Education: The rise of online coding platforms (e.g., Replit, CodeSandbox) has created demand for browser-based environments for niche languages. The Pengines scratchpad fits this trend, but it lacks the polish of modern IDEs (e.g., no autocomplete, no debugging).
- Enterprise Knowledge Graphs: Companies like Google (Knowledge Graph) and Amazon (Product Graph) use graph databases, but few use Prolog for reasoning. The exception is in highly regulated industries (finance, healthcare) where auditability is critical. For example, a bank might use Pengines to encode loan approval rules that can be inspected and verified, as sketched after this list.
- AI Explainability: As regulations like the EU AI Act demand explainability, symbolic systems like Prolog may see a resurgence. Pengines could serve as a lightweight inference engine for 'glass box' AI services that provide step-by-step reasoning.
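A hedged sketch of what such auditable approval rules could look like (thresholds, names, and predicates are invented for illustration; nothing here comes from the project):

```prolog
% Invented loan-approval rules: a decision is a provable goal, so the
% exact rule chain behind any approval can be replayed and audited.
:- dynamic defaulted/1.              % may have no facts at all

approve(Applicant) :-
    income(Applicant, Income),       Income >= 30000,
    debt_ratio(Applicant, Ratio),    Ratio =< 0.4,
    \+ defaulted(Applicant).

% Sample applicant data.
income(alice, 45000).
debt_ratio(alice, 0.25).
```

Here ?- approve(alice). succeeds because both thresholds hold and no defaulted(alice) fact exists; an auditor can inspect each condition directly.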

Market Data:

| Domain | Current Adoption | Growth Potential | Key Barrier |
|---|---|---|---|
| Education | Low (niche) | Medium (if integrated with MOOCs) | Lack of modern UI |
| Enterprise KG | Very low | Low (Prolog is unfamiliar) | Performance at scale |
| Explainable AI | Experimental | Medium (regulatory push) | Competition from rule engines (e.g., Drools) |

Data Takeaway: Pengines is unlikely to become a mainstream technology, but it has a defensible niche in educational and explainable AI contexts. Its growth depends on broader trends in symbolic AI and regulatory pressure for transparency.

Risks, Limitations & Open Questions

1. Performance and Scalability: The strictly sequential execution of each engine is the biggest bottleneck. For any production deployment with more than a few dozen concurrent users, a custom load-balancing layer is required. There is no built-in support for distributed query execution or caching. For example, a query that involves recursive inference over a large knowledge base (e.g., 1 million triples) could take seconds or minutes, tying up a worker thread for the duration and degrading service for everyone else.

2. Security: The sandbox is effective against basic attacks, but it is not foolproof. Advanced users have found ways to bypass it using meta-predicates like call/1 or by exploiting bugs in SWI-Prolog's predicate property system. The documentation warns that the sandbox should not be relied upon for untrusted code in production. A determined attacker could potentially cause a denial of service by submitting infinite loops or memory-exhausting queries.
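The checking entry point is safe_goal/1 from library(sandbox), which can be probed directly; a hedged sketch (the check/1 wrapper is illustrative, and exact error terms vary across versions):

```prolog
:- use_module(library(sandbox)).

% Succeeds silently for goals the sandbox accepts; prints the
% sandbox's error and fails for goals it rejects (e.g., shell/1).
check(Goal) :-
    catch(safe_goal(Goal),
          Error,
          ( print_message(error, Error), fail )).
```

For instance, ?- check(member(X, [a,b])). succeeds, while ?- check(shell('ls')). reports a sandbox error and fails.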

3. Ecosystem Dependency: Pengines is tightly coupled to SWI-Prolog. If SWI-Prolog's development stagnates or if a competing Prolog implementation (e.g., GNU Prolog, XSB) gains traction, Pengines could become obsolete. There is no standard protocol for remote Prolog execution across implementations.

4. Lack of Modern Features: The scratchpad is functional but primitive. It lacks collaborative editing, version history, or integration with cloud storage. The JSON protocol does not support streaming results (e.g., for long-running queries), and there is no built-in authentication or rate limiting. Developers must implement these features themselves.

Open Questions:

- Can Pengines be extended to support WebSocket for real-time streaming of inference results?
- Will the SWI-Prolog community adopt a more modern sandboxing model (e.g., using Linux containers)?
- Is there a market for a hosted Pengines-as-a-Service platform?

AINews Verdict & Predictions

Pengines is a well-engineered solution to a very specific problem: making Prolog available over HTTP. It is not a revolutionary technology, but it is a practical one. For educators teaching logic programming, it eliminates the installation headache. For researchers building explainable AI prototypes, it provides a quick way to expose reasoning to a web interface. However, its limitations—performance, security, and lack of modern features—prevent it from being a serious contender for production-grade AI services.

Predictions:

1. Short-term (1-2 years): Pengines will remain a niche tool within the SWI-Prolog community. The scratchpad will see incremental improvements (e.g., better syntax highlighting, dark mode) but no major architectural changes. The star count on GitHub may grow to 200-300 as more educators discover it.

2. Medium-term (3-5 years): If the EU AI Act or similar regulations mandate explainability for high-risk AI systems, Pengines could see a surge in interest as a lightweight inference engine for rule-based components. However, it will face competition from more modern rule engines (e.g., Drools, OpenRules) that offer better tooling and scalability.

3. Long-term (5+ years): The rise of neuro-symbolic AI—combining neural networks with symbolic reasoning—could create a new demand for Prolog-based services. Pengines, or a successor, could become the 'inference backend' for hybrid AI systems. But this requires significant investment in performance and security that the current community may not provide.

What to Watch:

- Any commit to the Pengines repository that adds WebSocket support or a more robust sandbox.
- Integration with popular AI frameworks (e.g., LangChain, Haystack) as a 'reasoning tool'.
- Adoption by major cloud providers (AWS, GCP) as a managed service for symbolic AI.

Final Verdict: Pengines is a diamond in the rough—a capable tool that deserves more attention than its 59 stars suggest. It solves a real problem, but its future depends on forces outside its control: the health of the SWI-Prolog ecosystem and the broader appetite for symbolic AI. For now, it is a must-try for anyone teaching or prototyping with Prolog, but a pass for production deployments without significant engineering investment.
