⚡ Offline-first · Peer-to-peer · Ultra-fast C++ runtime
⭐ If this vision resonates with you, consider starring the project.
Vix.cpp is a modern C++ runtime designed as a serious alternative to Node.js, Deno, and Bun, but engineered from day one for:
- unstable networks
- offline-first environments
- peer-to-peer systems
- extreme native performance
Run applications the way you would with Node, Deno, or Bun, but with C++ speed, control, and predictability.
Vix is not just a backend framework. It is a runtime layer for real-world distributed systems.
Vix.cpp is built for developers who:
- Build backend systems in modern C++
- Need predictable performance (no GC pauses)
- Target offline-first or unreliable networks
- Work on edge, local, or P2P systems
- Want a Node/Deno-like DX, but native
If you’ve ever thought “I wish Node were faster and more reliable,” Vix is for you.
Most modern runtimes assume:
- stable internet
- cloud-first infrastructure
- predictable latency
- always-online connectivity
That is not reality for much of the world.
Vix.cpp is built for real conditions first.
Vix.cpp is designed to remove overhead, unpredictability, and GC pauses.
| Framework | Requests/sec | Avg Latency |
|---|---|---|
| ⭐ Vix.cpp (pinned CPU) | ~99,000 | 7–10 ms |
| Vix.cpp (default) | ~81,400 | 9–11 ms |
| Go (Fiber) | ~81,300 | ~0.6 ms |
| Deno | ~48,800 | ~16 ms |
| Node.js (Fastify) | ~4,200 | ~16 ms |
| PHP (Slim) | ~2,800 | ~17 ms |
| FastAPI (Python) | ~750 | ~64 ms |
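The “pinned CPU” row presumably refers to running the server with its threads pinned to dedicated cores; the table does not say how the pinning was done. A minimal sketch of one common way to do it on Linux, using plain POSIX thread affinity rather than any Vix.cpp API (the same effect can be had externally with `taskset -c 0 ./app`):

```cpp
// Sketch of CPU pinning on Linux using POSIX thread affinity.
// This is a generic technique, not part of Vix.cpp.
#include <pthread.h>
#include <sched.h>
#include <cstdio>

int main()
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set); // pin the current thread to core 0

    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0) {
        std::fprintf(stderr, "failed to set CPU affinity\n");
        return 1;
    }

    // ... construct and run the Vix.cpp App on this pinned thread ...
    return 0;
}
```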
A minimal HTTP server:

```cpp
#include <vix.hpp>

using namespace vix;

int main() {
    App app;

    app.get("/", [](Request&, Response& res) {
        res.send("Hello from Vix.cpp 🚀");
    });

    app.run(8080);
}
```
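Save this as main.cpp, start it with the CLI (`vix run main.cpp`, or `vix dev main.cpp` during development; see the CLI sections below), and the server answers on http://localhost:8080.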
A WebSocket server that broadcasts typed JSON messages, with configuration loaded from config/config.json:

```cpp
#include <vix/config/Config.hpp>
#include <vix/experimental/ThreadPoolExecutor.hpp>
#include <vix/websocket.hpp>

int main()
{
    vix::config::Config cfg{"config/config.json"};
    auto exec = vix::experimental::make_threadpool_executor(1, 1, 0);

    vix::websocket::Server ws(cfg, std::move(exec));

    ws.on_typed_message([&ws](auto &, const std::string &type, const vix::json::kvs &payload) {
        if (type == "chat.message")
            ws.broadcast_json("chat.message", payload);
    });

    ws.listen_blocking();
}
```

Example config/config.json:

```json
{
"database": {
"default": {
"ENGINE": "mysql",
"NAME": "mydb",
"USER": "myuser",
"PASSWORD": "",
"HOST": "localhost",
"PORT": 3306
}
},
"server": {
"port": 8080,
"request_timeout": 5000
},
"websocket": {
"port": 9090,
"max_message_size": 65536,
"idle_timeout": 600,
"ping_interval": 30,
"enable_deflate": true,
"auto_ping_pong": true
}
}
```

A WebSocket client that connects to the server and sends a typed message:

```cpp
#include <vix/websocket/Client.hpp>
#include <thread>
#include <chrono>

int main()
{
    auto c = vix::websocket::Client::create("127.0.0.1", "9090", "/");

    c->on_open([c]() {
        c->send("chat.message", {"text", "hello"});
    });

    c->connect();
    std::this_thread::sleep_for(std::chrono::seconds(5));
}
```

HTTP and WebSocket runtimes can also be served together in a single process:

```cpp
#include <vix.hpp>
#include <vix/websocket/AttachedRuntime.hpp>

using namespace vix;

int main()
{
    vix::serve_http_and_ws([](auto& app, auto& ws) {
        app.get("/", [](auto&, auto& res) {
            res.json({
                "message", "Hello from Vix.cpp minimal example",
                "framework", "Vix.cpp"
            });
        });

        ws.on_typed_message(
            [&ws](auto& session,
                  const std::string& type,
                  const vix::json::kvs& payload) {
                (void)session;
                if (type == "chat.message") {
                    ws.broadcast_json("chat.message", payload);
                }
            });
    });

    return 0;
}
```

A minimal peer-to-peer node:

```cpp
#include <vix.hpp>
#include <vix/console.hpp>
#include <vix/p2p/Node.hpp>
#include <vix/p2p/P2P.hpp>

using namespace vix;

int main()
{
    vix::p2p::NodeConfig cfg;
    cfg.node_id = "node-A";
    cfg.listen_port = 9001;

    auto node = vix::p2p::make_tcp_node(cfg);
    vix::p2p::P2PRuntime runtime(node);

    runtime.start();
    console.info("Node A running on port 9001");

    runtime.wait(); // blocks
    return 0;
}
```

Run an application with the Vix CLI:

```bash
vix run main.cpp
vix dev main.cpp
```

Vix.cpp ships as an umbrella runtime composed of multiple modules:
- HTTP Runtime: REST APIs and control plane
- WebSocket Runtime: real-time messaging and synchronization
- P2P Runtime: peer-to-peer networking and transport
- p2p_http: HTTP control plane for P2P introspection
- ORM: native C++ ORM with prepared statements (illustrative sketch below)
- CLI: Node-like developer experience
- Cache, Middleware, Utils: core building blocks
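Purely as an illustration of where the ORM fits, here is a hypothetical sketch. Only the config loading mirrors real code from the examples above; the ORM calls are left as comments because the names (`vix::orm::connect`, `prepare`, `bind`) are placeholders, not the actual Vix.cpp API — see the ORM Overview below for the real interface.

```cpp
// Hypothetical sketch only: the commented ORM calls use placeholder names,
// not the real Vix.cpp API. The "database" section of config/config.json
// (shown earlier) would supply the connection settings.
#include <vix/config/Config.hpp>

int main()
{
    vix::config::Config cfg{"config/config.json"};

    // auto db   = vix::orm::connect(cfg);                         // placeholder name
    // auto stmt = db.prepare("SELECT * FROM users WHERE id = ?"); // prepared statement
    // auto rows = stmt.bind(1).fetch_all();                       // placeholder name

    return 0;
}
```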
Documentation:
- Introduction
- Quick Start
- Architecture & Modules
- ORM Overview
- Benchmarks
- Examples
- Build & Installation
- CLI Options
- CLI Reference
If you believe in modern C++ tooling, offline-first systems, and native performance, please consider starring the repository.
MIT License
