Walkthrough

Master now centralizes lobby aggregation and scheduling via a new MasterLobbyService; workers run WorkerLobbyService to serve lobby updates to clients.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant LobbySocket as LobbySocket (client)
    participant Worker as Worker Process
    participant WorkerLobbySvc as WorkerLobbyService
    participant Master as MasterLobbyService
    participant GameMgr as GameManager
    Client->>LobbySocket: start()
    LobbySocket->>LobbySocket: fetch server config, pick worker path
    LobbySocket->>Worker: WS connect -> /w{N}/lobbies
    Worker->>WorkerLobbySvc: route upgrade to /lobbies
    WorkerLobbySvc->>Client: send PublicGames (WS)
    Client->>LobbySocket: receive + validate PublicGamesSchema
    LobbySocket->>Client: onLobbiesUpdate(publicGames)
    WorkerLobbySvc->>Master: sendMyLobbiesToMaster (IPC: lobbyList / workerReady)
    Master->>Master: aggregate lobbies, maybeScheduleLobby loop
    Master->>Worker: send createGame IPC (gameID, gameConfig, startsAt)
    Worker->>GameMgr: createGame(..., startsAt)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Force-push: e10aa22 → eed0c4f
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@src/client/LobbySocket.ts`:
- Around line 12-15: The getRandomWorkerPath function can produce `/wNaN` or
out-of-range indices when numWorkers is 0 or invalid; update getRandomWorkerPath
to validate numWorkers (ensure it's a finite integer >= 1), default to a safe
fallback of 1 when invalid, and clamp/select an index in the range [0,
numWorkers-1] so the returned path is always `/w0`..`/wN`. Locate and update
getRandomWorkerPath and any other place that calls it (the other occurrence
referenced at line 23) to use the same validation/guard logic or call a shared
validated helper.
- Line 1: The call to getServerConfigFromClient inside LobbySocket.start() can
throw and currently aborts before connectWebSocket and polling begin; wrap calls
to getServerConfigFromClient (used in start() and any other locations noted) in
a try/catch, assign a safe default server config or signal fallback mode when an
exception occurs, and ensure connectWebSocket() and the HTTP polling startup
logic still run (or start fallback polling) even on error; update the error
handling in the start() method and the other occurrences (around lines where
getServerConfigFromClient is called) to catch exceptions, log the error, and
proceed with defaultConfig/fallbackPolling so the lobby updates still start.
In `@src/server/RedisClient.ts`:
- Around line 148-169: In getAllLobbyInfo, corrupted JSON values for lobby
entries are left in Redis and repeatedly logged; update the catch block for
JSON.parse failures to remove the bad record by calling await
redis.hdel(LOBBY_INFO_KEY, gameID) (and then log the error with context), so
replace the current log.error-only behavior with deletion of the key using
redis.hdel before or after logging.
- Around line 83-101: In getPendingGamesForWorker, when JSON.parse fails for a
pending-game entry you should delete the corrupted record from Redis to stop
repeated errors; inside the catch in getPendingGamesForWorker (function name)
call redis.hdel(PENDING_GAMES_KEY, gameID) (or the equivalent async delete
method) after logging the parse failure, and handle any error from hdel
gracefully so the function still returns the remaining valid PendingGame[].
- Around line 130-136: Replace the loose unknown type on LobbyInfo.gameConfig
with the concrete GameConfig | undefined union: update the LobbyInfo interface
(symbol: LobbyInfo) to declare gameConfig: GameConfig | undefined and add an
import for GameConfig from src/core/Schemas.ts (or the module that exports it)
so call sites no longer need unsafe assertions and TypeScript can type-check
game configuration fields.
In `@src/server/Worker.ts`:
- Around line 60-75: In server.on("upgrade") add a guard that parses the
incoming pathname for a worker prefix like "/wX" (extract the first path segment
from pathname) and compare it to the current process/instance worker identifier
(e.g., this.workerPrefix / workerId); if the prefix is present and does not
match the local worker, immediately reject the upgrade and destroy the socket
(do not call wss.handleUpgrade or lobbiesWss.handleUpgrade); otherwise continue
routing to lobbiesWss or wss as before. Ensure the check runs before the
existing pathname === "/lobbies" / endsWith("/lobbies") logic so mismatched
worker-prefixed requests can’t be accepted by the wrong WebSocketServer.
- Around line 172-206: startGameInfoPolling is not removing full lobbies from
Redis, causing the master to keep stale public IDs; after obtaining info =
game.gameInfo() check whether the lobby is full (e.g., info.clients?.length >=
info.gameConfig?.maxPlayers or other capacity field on gameConfig) and if so
call removePublicLobbyID(gameID) and removeLobbyInfo(gameID) (same as the
existing removal for started lobbies). Update the polling loop in
startGameInfoPolling / startPolling right after setLobbyInfo to detect full
lobbies and remove their IDs so Redis no longer advertises joinable lobbies from
this worker.
🧹 Nitpick comments (2)
src/server/Master.ts (1)
176-186: Make Redis writes for new lobbies atomic or roll back.

If `addPendingGame` succeeds but `addPublicLobbyID` fails, a pending game is left behind and never advertised. Use Redis MULTI or roll back with `removePendingGame`.

🛠️ Simple rollback approach

```diff
 import {
   addPendingGame,
   addPublicLobbyID,
+  removePendingGame,
   getPublicLobbyIDs,
 } from "./RedisClient";

 async function schedulePublicGame() {
   const gameID = generateID();
   try {
     const gameConfig = await playlist.gameConfig();
     await addPendingGame(gameID, gameConfig);
-    await addPublicLobbyID(gameID);
+    try {
+      await addPublicLobbyID(gameID);
+    } catch (error) {
+      await removePendingGame(gameID);
+      throw error;
+    }
     log.info(`Scheduled public game ${gameID} via Redis`);
   } catch (error) {
     log.error(`Failed to schedule public game ${gameID}:`, error);
     throw error;
   }
 }
```

src/server/RedisClient.ts (1)
7-59: Reduce duplicate Redis config and allow env overrides.

Both branches repeat the same options and hardcode the prod host/port. A small shared config keeps this clean and more deploy-friendly.
♻️ Proposed refactor
```diff
-const isDevMode = process.env.GAME_ENV === "dev";
+const isDevMode = process.env.GAME_ENV === "dev";
+const redisOptions = {
+  maxRetriesPerRequest: 3,
+  retryStrategy(times: number) {
+    const delay = Math.min(times * 50, 2000);
+    return delay;
+  },
+};
+const prodRedisHost = process.env.REDIS_HOST ?? "127.0.0.1";
+const prodRedisPort = Number(process.env.REDIS_PORT ?? 6379);
 ...
-  redis = new Redis({
-    host,
-    port,
-    maxRetriesPerRequest: 3,
-    retryStrategy(times) {
-      const delay = Math.min(times * 50, 2000);
-      return delay;
-    },
-  });
+  redis = new Redis({ host, port, ...redisOptions });
 ...
-  redis = new Redis({
-    host: "127.0.0.1",
-    port: 6379,
-    maxRetriesPerRequest: 3,
-    retryStrategy(times) {
-      const delay = Math.min(times * 50, 2000);
-      return delay;
-    },
-  });
+  redis = new Redis({ host: prodRedisHost, port: prodRedisPort, ...redisOptions });
```
```ts
function getRandomWorkerPath(numWorkers: number): string {
  const workerIndex = Math.floor(Math.random() * numWorkers);
  return `/w${workerIndex}`;
}
```
Guard against invalid worker count.
If numWorkers is 0/NaN, the path can become /wNaN or point to a non‑existent worker. Add a small guard and fallback.
🛠️ Suggested guard
```diff
 function getRandomWorkerPath(numWorkers: number): string {
-  const workerIndex = Math.floor(Math.random() * numWorkers);
-  return `/w${workerIndex}`;
+  if (!Number.isFinite(numWorkers) || numWorkers < 1) {
+    console.warn("Invalid numWorkers, defaulting to /w0");
+    return "/w0";
+  }
+  const workerIndex = Math.floor(Math.random() * numWorkers);
+  return `/w${workerIndex}`;
 }
```

Also applies to: 23-23
🤖 Prompt for AI Agents
In `@src/client/LobbySocket.ts` around lines 12 - 15, The getRandomWorkerPath
function can produce `/wNaN` or out-of-range indices when numWorkers is 0 or
invalid; update getRandomWorkerPath to validate numWorkers (ensure it's a finite
integer >= 1), default to a safe fallback of 1 when invalid, and
clamp/select an index in the range [0, numWorkers-1] so the returned path is
always `/w0`..`/wN`. Locate and update getRandomWorkerPath and any other place
that calls it (the other occurrence referenced at line 23) to use the same
validation/guard logic or call a shared validated helper.
src/server/RedisClient.ts (Outdated)
```ts
export async function getPendingGamesForWorker(
  workerIndex: (gameID: string) => number,
  workerId: number,
): Promise<PendingGame[]> {
  await redisReady;
  const allGames = await redis.hgetall(PENDING_GAMES_KEY);
  const games: PendingGame[] = [];

  for (const [gameID, value] of Object.entries(allGames)) {
    if (workerIndex(gameID) === workerId) {
      try {
        games.push(JSON.parse(value) as PendingGame);
      } catch (e) {
        log.error(`Failed to parse pending game ${gameID}:`, e);
      }
    }
  }

  return games;
}
```
Clean up corrupted pending-game records.
If JSON parsing fails, the bad entry stays forever and keeps logging. Remove it on parse failure.
🐛 Suggested fix
```diff
     if (workerIndex(gameID) === workerId) {
       try {
         games.push(JSON.parse(value) as PendingGame);
       } catch (e) {
         log.error(`Failed to parse pending game ${gameID}:`, e);
+        await redis.hdel(PENDING_GAMES_KEY, gameID);
       }
     }
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
In `@src/server/RedisClient.ts` around lines 83 - 101, In
getPendingGamesForWorker, when JSON.parse fails for a pending-game entry you
should delete the corrupted record from Redis to stop repeated errors; inside
the catch in getPendingGamesForWorker (function name) call
redis.hdel(PENDING_GAMES_KEY, gameID) (or the equivalent async delete method)
after logging the parse failure, and handle any error from hdel
gracefully so the function still returns the remaining valid PendingGame[].
src/server/RedisClient.ts (Outdated)
```ts
export async function getAllLobbyInfo(): Promise<LobbyInfo[]> {
  await redisReady;
  const allInfo = await redis.hgetall(LOBBY_INFO_KEY);
  const lobbies: LobbyInfo[] = [];
  const now = Date.now();

  for (const [gameID, value] of Object.entries(allInfo)) {
    try {
      const info = JSON.parse(value) as LobbyInfo;
      // Skip stale entries (older than 5 seconds)
      if (now - info.updatedAt > 5000) {
        await redis.hdel(LOBBY_INFO_KEY, gameID);
        continue;
      }
      lobbies.push(info);
    } catch (e) {
      log.error(`Failed to parse lobby info ${gameID}:`, e);
    }
  }

  return lobbies;
}
```
Clean up corrupted lobby-info records on parse failure.
Right now bad JSON stays and will keep logging. Delete it on failure.
🐛 Suggested fix
```diff
     try {
       const info = JSON.parse(value) as LobbyInfo;
       // Skip stale entries (older than 5 seconds)
       if (now - info.updatedAt > 5000) {
         await redis.hdel(LOBBY_INFO_KEY, gameID);
         continue;
       }
       lobbies.push(info);
     } catch (e) {
       log.error(`Failed to parse lobby info ${gameID}:`, e);
+      await redis.hdel(LOBBY_INFO_KEY, gameID);
     }
```
🤖 Prompt for AI Agents
In `@src/server/RedisClient.ts` around lines 148 - 169, In getAllLobbyInfo,
corrupted JSON values for lobby entries are left in Redis and repeatedly logged;
update the catch block for JSON.parse failures to remove the bad record by
calling await redis.hdel(LOBBY_INFO_KEY, gameID) (and then log the error with
context), so replace the current log.error-only behavior with deletion of the
key using redis.hdel before or after logging.
src/server/Worker.ts (Outdated)
```ts
const wss = new WebSocketServer({ noServer: true });
const lobbiesWss = new WebSocketServer({ noServer: true });

// Handle upgrade requests to route to correct WebSocket server
server.on("upgrade", (request, socket, head) => {
  const pathname = request.url ?? "";
  if (pathname === "/lobbies" || pathname.endsWith("/lobbies")) {
    lobbiesWss.handleUpgrade(request, socket, head, (ws) => {
      lobbiesWss.emit("connection", ws, request);
    });
  } else {
    wss.handleUpgrade(request, socket, head, (ws) => {
      wss.emit("connection", ws, request);
    });
  }
});
```
Validate worker prefix on WebSocket upgrades.
Upgrade handling bypasses the /wX middleware, so a mismatched path can be accepted by the wrong worker. Parse /wX and reject mismatches before routing.
🛠️ Suggested guard
server.on("upgrade", (request, socket, head) => {
- const pathname = request.url ?? "";
+ const pathname = request.url ?? "";
+ const match = pathname.match(/^\/w(\d+)(\/.*)?$/);
+ if (match && parseInt(match[1]) !== workerId) {
+ socket.destroy();
+ return;
+ }
if (pathname === "/lobbies" || pathname.endsWith("/lobbies")) {
lobbiesWss.handleUpgrade(request, socket, head, (ws) => {
lobbiesWss.emit("connection", ws, request);
});
} else {
wss.handleUpgrade(request, socket, head, (ws) => {
wss.emit("connection", ws, request);
});
}
});📝 Committable suggestion
🤖 Prompt for AI Agents
In `@src/server/Worker.ts` around lines 60 - 75, In server.on("upgrade") add a
guard that parses the incoming pathname for a worker prefix like "/wX" (extract
the first path segment from pathname) and compare it to the current
process/instance worker identifier (e.g., this.workerPrefix / workerId); if the
prefix is present and does not match the local worker, immediately reject the
upgrade and destroy the socket (do not call wss.handleUpgrade or
lobbiesWss.handleUpgrade); otherwise continue routing to lobbiesWss or wss as
before. Ensure the check runs before the existing pathname === "/lobbies" /
endsWith("/lobbies") logic so mismatched worker-prefixed requests can’t be
accepted by the wrong WebSocketServer.
src/server/Worker.ts (Outdated)
```ts
// Periodically write this worker's public games to Redis
startGameInfoPolling(gm);

async function startGameInfoPolling(gameManager: GameManager) {
  startPolling(async () => {
    try {
      const publicLobbyIDs = await getPublicLobbyIDs();

      for (const gameID of publicLobbyIDs) {
        // Only update games that belong to this worker
        if (config.workerIndex(gameID) !== workerId) continue;

        const game = gameManager.game(gameID);
        if (game === null) {
          // Game no longer exists, clean up
          await removeLobbyInfo(gameID);
          await removePublicLobbyID(gameID);
          continue;
        }

        const info = game.gameInfo();
        await setLobbyInfo({
          gameID: info.gameID,
          numClients: info.clients?.length ?? 0,
          msUntilStart: info.msUntilStart,
          gameConfig: info.gameConfig,
          updatedAt: Date.now(),
        });

        // Remove from public lobbies if game started or is full
        if (info.msUntilStart !== undefined && info.msUntilStart <= 250) {
          await removePublicLobbyID(gameID);
          await removeLobbyInfo(gameID);
        }
      }
```
Remove full lobbies from Redis set to avoid scheduler stall.
Full lobbies are filtered out for clients, but the public lobby ID remains in Redis, so the master may stop scheduling even when no joinable lobbies exist. Remove IDs when the lobby is full.
🛠️ Suggested fix
```diff
         const info = game.gameInfo();
-        await setLobbyInfo({
-          gameID: info.gameID,
-          numClients: info.clients?.length ?? 0,
-          msUntilStart: info.msUntilStart,
-          gameConfig: info.gameConfig,
-          updatedAt: Date.now(),
-        });
+        const numClients = info.clients?.length ?? 0;
+        const maxPlayers = info.gameConfig?.maxPlayers;
+        const isFull =
+          maxPlayers !== undefined && numClients >= maxPlayers;
+
+        await setLobbyInfo({
+          gameID: info.gameID,
+          numClients,
+          msUntilStart: info.msUntilStart,
+          gameConfig: info.gameConfig,
+          updatedAt: Date.now(),
+        });

         // Remove from public lobbies if game started or is full
-        if (info.msUntilStart !== undefined && info.msUntilStart <= 250) {
+        if (
+          (info.msUntilStart !== undefined && info.msUntilStart <= 250) ||
+          isFull
+        ) {
           await removePublicLobbyID(gameID);
           await removeLobbyInfo(gameID);
         }
```
🤖 Prompt for AI Agents
In `@src/server/Worker.ts` around lines 172 - 206, startGameInfoPolling is not
removing full lobbies from Redis, causing the master to keep stale public IDs;
after obtaining info = game.gameInfo() check whether the lobby is full (e.g.,
info.clients?.length >= info.gameConfig?.maxPlayers or other capacity field on
gameConfig) and if so call removePublicLobbyID(gameID) and
removeLobbyInfo(gameID) (same as the existing removal for started lobbies).
Update the polling loop in startGameInfoPolling / startPolling right after
setLobbyInfo to detect full lobbies and remove their IDs so Redis no longer
advertises joinable lobbies from this worker.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Dockerfile (1)
10-11: Pipeline failure: missing build tools for redis-memory-server

The `npm ci` step fails because `redis-memory-server` (a devDependency) compiles Redis from source during postinstall. The build stage lacks `make` and other build tools. Add `build-essential` to the build stage before running `npm ci`:

🔧 Proposed fix
```diff
 # Build stage - install ALL dependencies and build
 FROM base AS build
 ENV HUSKY=0
+# Install build tools needed for native dependencies
+RUN apt-get update && apt-get install -y build-essential && rm -rf /var/lib/apt/lists/*
 # Copy package files first for better caching
 COPY package*.json ./
 RUN --mount=type=cache,target=/root/.npm \
     npm ci
```
🤖 Fix all issues with AI agents
In `@package.json`:
- Line 78: The package.json dependency for "redis-memory-server" is set to an
invalid version ^0.15.0; update the entry for "redis-memory-server" to the
latest available 0.14.1 (e.g., change "redis-memory-server": "^0.15.0" to
"redis-memory-server": "^0.14.1") so npm/yarn can resolve it while keeping
"ioredis" ^5.9.2 unchanged.
In `@src/client/LobbySocket.ts`:
- Line 1: The start() method in LobbySocket currently calls
getServerConfigFromClient and lets errors propagate, preventing lobby
connections when /api/env fails; update start() (and similar calls around the
40-45 region) to catch errors from getServerConfigFromClient, log the error via
the existing logger, and fall back to a sensible default config/path (e.g.,
default server path or empty config) so lobby updates still start; ensure the
catch only supplies the fallback and does not suppress other unrelated errors.
- Around line 12-15: getRandomWorkerPath can produce "/wNaN" when numWorkers is
0 or invalid; update the function (getRandomWorkerPath) to validate numWorkers
(e.g., if not a finite positive integer) and use a safe fallback index (such as
0) or normalize with Math.max(1, Math.floor(...)) so the returned path is always
a valid "/w{n}" string; ensure the same guard/normalization is applied to any
other callers that compute workerIndex in this module.
In `@src/server/RedisClient.ts`:
- Around line 39-60: The current production branch sets redisReady =
Promise.resolve() immediately, which can let callers proceed before the Redis
client actually connects; change this to set redisReady to a Promise that
resolves when the Redis client's "connect" event fires and rejects (or rejects
and logs) on an "error" (and optionally "end") event so callers await a real
connection; update the block where redis is created (the Redis(...) constructor
and the redis.on handlers) to create this promise, attach resolve in
redis.on("connect") and reject in redis.on("error"), and ensure existing
log.info/log.error calls remain but no longer prematurely mark readiness.
In `@src/server/Worker.ts`:
- Line 33: The WebSocket upgrade handler currently only checks
pathname.endsWith("/lobbies") and thus skips validating the /w{workerId} prefix
that the HTTP middleware enforces; update the upgrade handling logic (where
pathname is checked, near the code that uses WorkerLobbyService) to parse and
strip a leading /w{workerId} segment and verify it equals this.id (e.g., compare
`/w${this.id}`), returning/ignoring the upgrade if it mismatches, so WebSocket
routing matches the HTTP middleware behavior.
🧹 Nitpick comments (7)
src/server/RedisClient.ts (3)
154-161: Side effect in read operation: `getAllLobbyInfo` deletes stale entries.

This function both reads and deletes data. This can cause issues:
- Unexpected behavior for callers expecting a pure read
- Multiple concurrent calls may cause race conditions
- Harder to reason about and test
Consider separating cleanup into a dedicated function or polling loop.
♻️ Suggested approach
```diff
 export async function getAllLobbyInfo(): Promise<LobbyInfo[]> {
   await redisReady;
   const allInfo = await redis.hgetall(LOBBY_INFO_KEY);
   const lobbies: LobbyInfo[] = [];
   const now = Date.now();
+  const staleKeys: string[] = [];

   for (const [gameID, value] of Object.entries(allInfo)) {
     try {
       const info = JSON.parse(value) as LobbyInfo;
       // Skip stale entries (older than 5 seconds)
       if (now - info.updatedAt > 5000) {
-        await redis.hdel(LOBBY_INFO_KEY, gameID);
+        staleKeys.push(gameID);
         continue;
       }
       lobbies.push(info);
     } catch (e) {
       log.error(`Failed to parse lobby info ${gameID}:`, e);
+      staleKeys.push(gameID);
     }
   }

+  // Clean up stale entries in background (fire-and-forget)
+  if (staleKeys.length > 0) {
+    redis.hdel(LOBBY_INFO_KEY, ...staleKeys).catch((err) => {
+      log.error("Failed to clean up stale lobby entries:", err);
+    });
+  }
+
   return lobbies;
 }
```
12-38: Consider extracting shared Redis configuration to reduce duplication.

The Redis client options and event handlers are duplicated between dev and prod modes. Extract common logic:
♻️ Example refactor
```ts
function createRedisClient(host: string, port: number): Redis {
  const client = new Redis({
    host,
    port,
    maxRetriesPerRequest: 3,
    retryStrategy(times) {
      return Math.min(times * 50, 2000);
    },
  });
  client.on("error", (err) => {
    log.error("Redis connection error:", err);
  });
  client.on("connect", () => {
    log.info(`Connected to Redis at ${host}:${port}`);
  });
  return client;
}
```
130-136: Use typed `GameConfig` instead of `unknown` for `gameConfig`.

The `LobbyInfo` interface uses `gameConfig?: unknown`, but `GameConfig` is already imported and used elsewhere. For type safety:

```diff
 export interface LobbyInfo {
   gameID: string;
   numClients: number;
   msUntilStart?: number;
-  gameConfig?: unknown;
+  gameConfig?: GameConfig;
   updatedAt: number;
 }
```

src/server/Master.ts (1)
176-188: Consider cleanup on schedule failure.

If `addPendingGame` succeeds but `addPublicLobbyID` fails, you'll have an orphan pending game in Redis. Consider cleanup:

♻️ Suggested fix

```diff
 async function schedulePublicGame() {
   const gameID = generateID();
   try {
     const gameConfig = await playlist.gameConfig();
     await addPendingGame(gameID, gameConfig);
-    await addPublicLobbyID(gameID);
-    log.info(`Scheduled public game ${gameID} via Redis`);
+    try {
+      await addPublicLobbyID(gameID);
+      log.info(`Scheduled public game ${gameID} via Redis`);
+    } catch (error) {
+      // Rollback pending game if public lobby ID fails
+      await removePendingGame(gameID).catch(() => {});
+      throw error;
+    }
   } catch (error) {
     log.error(`Failed to schedule public game ${gameID}:`, error);
     throw error;
   }
 }
```

You'll need to import `removePendingGame` from `./RedisClient`.

src/server/WorkerLobbyService.ts (3)
52-65: URL path matching may be too broad.

The check `pathname.endsWith("/lobbies")` would match paths like `/foo/lobbies` or `/api/lobbies`. If those paths should go to `gameWss`, this could cause issues.

♻️ Suggested fix for stricter matching

```diff
 private setupUpgradeHandler() {
   this.server.on("upgrade", (request, socket, head) => {
     const pathname = request.url ?? "";
-    if (pathname === "/lobbies" || pathname.endsWith("/lobbies")) {
+    // Match exactly "/lobbies" or "/<worker>/lobbies" pattern
+    if (pathname === "/lobbies" || /^\/\d+\/lobbies$/.test(pathname)) {
       this.lobbiesWss.handleUpgrade(request, socket, head, (ws) => {
         this.lobbiesWss.emit("connection", ws, request);
       });
     } else {
```
144-151: Extract magic number and improve type handling.

The 250 ms threshold appears in multiple places. Also, casting `info.gameConfig` to a partial object is fragile.

♻️ Suggested improvements

```ts
// At module level or in a constants file
const GAME_START_THRESHOLD_MS = 250;

// In the filter:
if (
  info.msUntilStart !== undefined &&
  info.msUntilStart <= GAME_START_THRESHOLD_MS
)
  return false;

// For gameConfig, consider using the GameConfig type with optional fields
// or use a type guard function
```
182-186: Duplicate cleanup logic for lobby removal.

Lobby cleanup happens in two places:

- Lines 182-186: when `game === null`
- Lines 199-202: when `msUntilStart <= 250`

Consider extracting to a helper:

♻️ Example helper

```ts
private async cleanupLobby(gameID: string): Promise<void> {
  await removeLobbyInfo(gameID);
  await removePublicLobbyID(gameID);
}
```

Also applies to: 199-202
```ts
function getRandomWorkerPath(numWorkers: number): string {
  const workerIndex = Math.floor(Math.random() * numWorkers);
  return `/w${workerIndex}`;
}
```
Guard against numWorkers <= 0 to avoid /wNaN.
If numWorkers() is 0 or invalid, the path becomes /wNaN and both WS + HTTP fail. Add a safe fallback.
✅ Suggested fix
```diff
 function getRandomWorkerPath(numWorkers: number): string {
-  const workerIndex = Math.floor(Math.random() * numWorkers);
+  const safeCount =
+    Number.isFinite(numWorkers) && numWorkers > 0 ? Math.floor(numWorkers) : 1;
+  const workerIndex = Math.floor(Math.random() * safeCount);
   return `/w${workerIndex}`;
 }
```

Also applies to: 23-23
🤖 Prompt for AI Agents
In `@src/client/LobbySocket.ts` around lines 12 - 15, getRandomWorkerPath can
produce "/wNaN" when numWorkers is 0 or invalid; update the function
(getRandomWorkerPath) to validate numWorkers (e.g., if not a finite positive
integer) and use a safe fallback index (such as 0) or normalize with Math.max(1,
Math.floor(...)) so the returned path is always a valid "/w{n}" string; ensure
the same guard/normalization is applied to any other callers that compute
workerIndex in this module.
src/server/RedisClient.ts (Outdated)
```ts
} else {
  // In production, connect to real Redis
  redis = new Redis({
    host: "127.0.0.1",
    port: 6379,
    maxRetriesPerRequest: 3,
    retryStrategy(times) {
      const delay = Math.min(times * 50, 2000);
      return delay;
    },
  });

  redis.on("error", (err) => {
    log.error("Redis connection error:", err);
  });

  redis.on("connect", () => {
    log.info("Connected to Redis");
  });

  redisReady = Promise.resolve();
}
```
Production Redis: redisReady resolves before connection is established
In production mode, redisReady = Promise.resolve() is set immediately, but the Redis connection happens asynchronously. If any Redis operation is called before the connect event fires, it may fail or behave unexpectedly.
Consider waiting for the actual connection:
🔧 Proposed fix
```diff
 } else {
   // In production, connect to real Redis
-  redis = new Redis({
+  const prodRedis = new Redis({
     host: "127.0.0.1",
     port: 6379,
     maxRetriesPerRequest: 3,
     retryStrategy(times) {
       const delay = Math.min(times * 50, 2000);
       return delay;
     },
   });
+  redis = prodRedis;

-  redis.on("error", (err) => {
+  prodRedis.on("error", (err) => {
     log.error("Redis connection error:", err);
   });

-  redis.on("connect", () => {
+  prodRedis.on("connect", () => {
     log.info("Connected to Redis");
   });

-  redisReady = Promise.resolve();
+  redisReady = new Promise<void>((resolve) => {
+    prodRedis.once("ready", () => {
+      resolve();
+    });
+  });
 }
```

🤖 Prompt for AI Agents
In `@src/server/RedisClient.ts` around lines 39 - 60, The current production
branch sets redisReady = Promise.resolve() immediately, which can let callers
proceed before the Redis client actually connects; change this to set redisReady
to a Promise that resolves when the Redis client's "connect" event fires and
rejects (or rejects and logs) on an "error" (and optionally "end") event so
callers await a real connection; update the block where redis is created (the
Redis(...) constructor and the redis.on handlers) to create this promise, attach
resolve in redis.on("connect") and reject in redis.on("error"), and ensure
existing log.info/log.error calls remain but no longer prematurely mark
readiness.
Force-push: eed0c4f → 72b7f28
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/server/Master.ts (1)
144-148: Missing Redis connection info in worker restart.

Restarted workers don't receive the `REDIS_HOST` and `REDIS_PORT` environment variables. In dev mode with `redis-memory-server` (dynamic port), restarted workers will fail to connect to Redis.

🐛 Suggested fix

```diff
 // Restart the worker with the same ID
 const newWorker = cluster.fork({
   WORKER_ID: workerId,
   ADMIN_TOKEN,
   INSTANCE_ID,
+  REDIS_HOST: process.env.REDIS_HOST,
+  REDIS_PORT: process.env.REDIS_PORT,
 });
```
🤖 Fix all issues with AI agents
In `@src/server/RedisClient.ts`:
- Around line 46-70: redisReady is being resolved immediately for workers
instead of waiting for the actual Redis connection; change the worker branch in
RedisClient.ts to create redisReady as a Promise that resolves when the Redis
client emits a successful connection/ready event (use redis.once("connect" or
"ready") to resolve) and rejects on "error" so callers wait for a real
connection; ensure you remove the immediate Promise.resolve() and tie resolution
to the same event handlers used for logging (redis.on/once) to avoid race
conditions.
In `@src/server/WorkerLobbyService.ts`:
- Around line 52-65: The upgrade handler in setupUpgradeHandler currently routes
solely on the "/lobbies" suffix and ignores a leading worker prefix (e.g.
"/w2"), allowing cross-worker connections; update the server.on("upgrade") logic
in setupUpgradeHandler to parse and validate an optional "/w<N>" prefix from
request.url (extract N with a regex like ^/w(\d+)(/|$)), compare it to this
worker's id (use the existing worker id field, e.g. this.workerId), and if the
prefix exists and does not match return an HTTP 404/close the socket
(destroy/end) instead of proceeding; if it matches or no prefix is present,
strip the prefix before checking for "/lobbies" and continue using
lobbiesWss.handleUpgrade or gameWss.handleUpgrade as before.
♻️ Duplicate comments (7)
src/server/RedisClient.ts (4)
162-168: Use `GameConfig` instead of `unknown` for type safety.

`GameConfig` is already imported at line 3. Using `unknown` forces unsafe type assertions at call sites.

♻️ Suggested fix

```diff
 export interface LobbyInfo {
   gameID: string;
   numClients: number;
   msUntilStart?: number;
-  gameConfig?: unknown;
+  gameConfig?: GameConfig;
   updatedAt: number;
 }
```
71-92: Production: `redisReady` resolves before connection is established.

In production mode, `redisReady = Promise.resolve()` completes immediately, but the Redis connection is async. Operations called before the `connect` event may fail.

🛠️ Suggested fix

```diff
 redis.on("connect", () => {
   log.info("Connected to Redis");
 });

-redisReady = Promise.resolve();
+redisReady = new Promise<void>((resolve) => {
+  redis.once("ready", () => {
+    resolve();
+  });
+});
 }
```
123-130: Clean up corrupted pending-game records on parse failure.

If JSON parsing fails, the bad entry stays in Redis forever and keeps logging errors. Delete it after logging.
🛠️ Suggested fix
```diff
 try {
   games.push(JSON.parse(value) as PendingGame);
 } catch (e) {
   log.error(`Failed to parse pending game ${gameID}:`, e);
+  await redis.hdel(PENDING_GAMES_KEY, gameID);
 }
```
195-197: Clean up corrupted lobby-info records on parse failure.

Stale entries get cleaned up (lines 190-192), but parse failures leave bad JSON in Redis forever.

🛠️ Suggested fix
🛠️ Suggested fix
```diff
 } catch (e) {
   log.error(`Failed to parse lobby info ${gameID}:`, e);
+  await redis.hdel(LOBBY_INFO_KEY, gameID);
 }
```

src/client/LobbySocket.ts (2)
12-15: Guard against invalid `numWorkers` to avoid `/wNaN` paths.

If `numWorkers()` returns 0, NaN, or a negative number, `Math.floor(Math.random() * numWorkers)` produces NaN or invalid indices. Add a fallback.

🛠️ Suggested fix

```diff
 function getRandomWorkerPath(numWorkers: number): string {
+  if (!Number.isFinite(numWorkers) || numWorkers < 1) {
+    console.warn("Invalid numWorkers, defaulting to /w0");
+    return "/w0";
+  }
   const workerIndex = Math.floor(Math.random() * numWorkers);
   return `/w${workerIndex}`;
 }
```
40-45: Handle config fetch errors so lobby updates still start.

If `/api/env` fails, `start()` throws before `connectWebSocket()` runs, leaving the lobby completely dead. Wrap in try/catch and fall back to a default worker path.

🛠️ Suggested fix

```diff
 async start() {
   this.wsConnectionAttempts = 0;
-  // Get config to determine number of workers, then pick a random one
-  const config = await getServerConfigFromClient();
-  this.workerPath = getRandomWorkerPath(config.numWorkers());
+  try {
+    // Get config to determine number of workers, then pick a random one
+    const config = await getServerConfigFromClient();
+    this.workerPath = getRandomWorkerPath(config.numWorkers());
+  } catch (error) {
+    console.error("Failed to load server config, defaulting to /w0:", error);
+    this.workerPath = "/w0";
+  }
   this.connectWebSocket();
 }
```

src/server/WorkerLobbyService.ts (1)
172-208: Full lobbies are not removed from the public lobby set.

The filter in `startLobbyPolling` hides full lobbies from clients, but `startGameInfoPolling` only removes lobbies when `msUntilStart <= 250`. Full lobbies remain in Redis, which can cause the master scheduler to think public lobbies exist when none are joinable.

This was flagged in a previous review and appears unaddressed.

Suggested fix

```diff
 const info = game.gameInfo();
+const numClients = info.clients?.length ?? 0;
+const maxPlayers = (info.gameConfig as { maxPlayers?: number })?.maxPlayers;
+const isFull = maxPlayers !== undefined && numClients >= maxPlayers;
+
 await setLobbyInfo({
   gameID: info.gameID,
-  numClients: info.clients?.length ?? 0,
+  numClients,
   msUntilStart: info.msUntilStart,
   gameConfig: info.gameConfig,
   updatedAt: Date.now(),
 });

 // Remove from public lobbies if game started or is full
-if (info.msUntilStart !== undefined && info.msUntilStart <= 250) {
+if (
+  (info.msUntilStart !== undefined && info.msUntilStart <= 250) ||
+  isFull
+) {
   await removePublicLobbyID(gameID);
   await removeLobbyInfo(gameID);
 }
```
🧹 Nitpick comments (3)
src/server/WorkerLobbyService.ts (1)
136-170: Type assertion may hide mismatches.

Line 161 uses `as GameInfo`, which bypasses type checking. If `GameInfo` has required fields not present in the mapped object, runtime errors may occur silently.

Safer approach

```diff
 .map(
   (info) =>
     ({
       gameID: info.gameID,
       numClients: info.numClients,
       gameConfig: info.gameConfig,
       msUntilStart: info.msUntilStart,
-    }) as GameInfo,
+    }) satisfies Partial<GameInfo>,
 );
```

Or define a proper mapping function that returns `GameInfo` with all required fields.

src/server/Worker.ts (2)
77-82: Positional arguments with an `undefined` placeholder reduce clarity.

Using `undefined` as a placeholder for the `refreshInterval` parameter makes the call harder to read. Named parameters via an options object would be clearer, but this is minor.

Consider an options object (future improvement)

If `PrivilegeRefresher` is refactored later, using an options object would improve readability:

```ts
// Future pattern:
new PrivilegeRefresher({
  endpoint: config.jwtIssuer() + "/cosmetics.json",
  log,
  env: config.env(),
});
```
556-558: Suppressing errors in Dev may hide issues during development.

Silencing matchmaking poll errors in the Dev environment means developers won't see network failures or misconfigurations. Consider logging at a lower level (debug/warn) instead of suppressing entirely.

Alternative: log at debug level in Dev

```diff
 } catch (error) {
-  if (config.env() !== GameEnv.Dev) {
-    log.error(`Error polling matchmaking:`, error);
-  }
+  const level = config.env() === GameEnv.Dev ? "debug" : "error";
+  log[level](`Error polling matchmaking:`, error);
 }
```
src/server/RedisClient.ts (Outdated)
```ts
} else {
  // Worker: connect to Redis using env vars set by master
  const host = process.env.REDIS_HOST ?? "127.0.0.1";
  const port = parseInt(process.env.REDIS_PORT ?? "6379");

  redis = new Redis({
    host,
    port,
    maxRetriesPerRequest: 3,
    retryStrategy(times) {
      const delay = Math.min(times * 50, 2000);
      return delay;
    },
  });

  redis.on("error", (err) => {
    log.error("Redis connection error:", err);
  });

  redis.on("connect", () => {
    log.info(`Worker connected to Redis memory server at ${host}:${port}`);
  });

  redisReady = Promise.resolve();
}
```
Dev worker: redisReady also resolves before connection.
Same issue as production - the worker's redisReady resolves immediately at line 69 while the actual Redis connection happens asynchronously.
🛠️ Suggested fix
redis.on("connect", () => {
log.info(`Worker connected to Redis memory server at ${host}:${port}`);
});
- redisReady = Promise.resolve();
+ redisReady = new Promise<void>((resolve) => {
+ redis.once("ready", () => {
+ resolve();
+ });
+ });
}📝 Committable suggestion
🤖 Prompt for AI Agents
In `@src/server/RedisClient.ts` around lines 46 - 70, redisReady is being resolved
immediately for workers instead of waiting for the actual Redis connection;
change the worker branch in RedisClient.ts to create redisReady as a Promise
that resolves when the Redis client emits a successful connection/ready event
(use redis.once("connect" or "ready") to resolve) and rejects on "error" so
callers wait for a real connection; ensure you remove the immediate
Promise.resolve() and tie resolution to the same event handlers used for logging
(redis.on/once) to avoid race conditions.
Force-push: 72b7f28 → fbfd09c
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/server/Master.ts (1)
144-148: Restarted workers miss Redis connection info.

When a worker crashes and restarts, the `cluster.fork()` call does not pass `REDIS_HOST` and `REDIS_PORT`. The new worker will fall back to defaults (127.0.0.1:6379), which breaks in dev mode where the Redis memory server runs on a different port.

🐛 Suggested fix

```diff
 // Restart the worker with the same ID
 const newWorker = cluster.fork({
   WORKER_ID: workerId,
   ADMIN_TOKEN,
   INSTANCE_ID,
+  REDIS_HOST: redisInfo.host,
+  REDIS_PORT: String(redisInfo.port),
 });
```

Note: You need to capture `redisInfo` in a scope accessible to the `exit` handler, or re-read from `process.env`.
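One way to satisfy that note: a minimal sketch, assuming the master starts `redis-memory-server` itself and keeps the connection info in module scope. The `startDevRedis` helper and the simplified fork env are illustrative only; the real handler also re-passes `ADMIN_TOKEN`, `INSTANCE_ID`, and its own worker-ID bookkeeping.

```ts
import cluster from "node:cluster";
import { RedisMemoryServer } from "redis-memory-server";

// Captured once at master startup so the cluster "exit" handler can reuse it.
let redisInfo: { host: string; port: number } | undefined;

export async function startDevRedis(): Promise<void> {
  const server = new RedisMemoryServer();
  redisInfo = { host: await server.getHost(), port: await server.getPort() };
}

cluster.on("exit", (worker) => {
  // Restarted workers receive the same Redis connection info as the originals.
  cluster.fork({
    WORKER_ID: String(worker.id),
    REDIS_HOST: redisInfo?.host ?? process.env.REDIS_HOST,
    REDIS_PORT: redisInfo ? String(redisInfo.port) : process.env.REDIS_PORT,
  });
});
```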
♻️ Duplicate comments (7)
src/server/RedisClient.ts (5)
162-168: Use `GameConfig` instead of `unknown` for `gameConfig`.

`GameConfig` is already imported at line 3. Using `unknown` forces type assertions at call sites and hides type errors.

♻️ Suggested fix

```diff
 export interface LobbyInfo {
   gameID: string;
   numClients: number;
   msUntilStart?: number;
-  gameConfig?: unknown;
+  gameConfig?: GameConfig;
   updatedAt: number;
 }
```
46-70: Dev worker: `redisReady` resolves before connection is established.

At line 69, `redisReady = Promise.resolve()` resolves immediately, but the Redis connection happens asynchronously. Any Redis operation called before the `connect` event fires may fail.

🔧 Suggested fix

```diff
 redis.on("connect", () => {
   log.info(`Worker connected to Redis memory server at ${host}:${port}`);
 });

-redisReady = Promise.resolve();
+redisReady = new Promise<void>((resolve) => {
+  redis.once("ready", () => {
+    resolve();
+  });
+});
 }
```
71-92: Production Redis: `redisReady` resolves before connection is established.

Same issue as the dev worker. Line 91 resolves immediately, but `redis.on("connect")` fires later. Early Redis calls may fail or behave unexpectedly.

🔧 Suggested fix

```diff
 redis.on("connect", () => {
   log.info("Connected to Redis");
 });

-redisReady = Promise.resolve();
+redisReady = new Promise<void>((resolve) => {
+  redis.once("ready", () => {
+    resolve();
+  });
+});
 }
```
123-131: Clean up corrupted pending-game records on parse failure.

If JSON parsing fails, the bad entry stays in Redis forever and keeps logging errors on every poll. Delete it after logging.
🐛 Suggested fix
```diff
 try {
   games.push(JSON.parse(value) as PendingGame);
 } catch (e) {
   log.error(`Failed to parse pending game ${gameID}:`, e);
+  await redis.hdel(PENDING_GAMES_KEY, gameID);
 }
```
195-197: Clean up corrupted lobby-info records on parse failure.

Same issue as pending games. Bad JSON stays in Redis and logs repeatedly. Delete it on failure.
🐛 Suggested fix
```diff
 } catch (e) {
   log.error(`Failed to parse lobby info ${gameID}:`, e);
+  await redis.hdel(LOBBY_INFO_KEY, gameID);
 }
```

src/server/WorkerLobbyService.ts (2)
52-65: WebSocket upgrade handler lacks worker prefix validation.

The upgrade handler routes based on the `/lobbies` path but does not validate the `/wX` worker prefix. HTTP middleware in Worker.ts validates the worker ID and returns 404 on mismatch, but WebSocket upgrades bypass this. A client connecting to `/w2/lobbies` will be accepted even if this is worker 0.

🔧 Suggested fix

```diff
 private setupUpgradeHandler() {
   this.server.on("upgrade", (request, socket, head) => {
     const pathname = request.url ?? "";
+
+    // Validate worker prefix (matches HTTP middleware behavior)
+    const match = pathname.match(/^\/w(\d+)(\/.*)?$/);
+    if (match && parseInt(match[1]) !== this.workerId) {
+      socket.destroy();
+      return;
+    }
+
     if (pathname === "/lobbies" || pathname.endsWith("/lobbies")) {
       this.lobbiesWss.handleUpgrade(request, socket, head, (ws) => {
         this.lobbiesWss.emit("connection", ws, request);
       });
     } else {
       this.gameWss.handleUpgrade(request, socket, head, (ws) => {
         this.gameWss.emit("connection", ws, request);
       });
     }
   });
 }
```
188-202: Full lobbies not removed from Redis, causing scheduler stall.

When a lobby reaches max capacity, it's filtered out for clients in `startLobbyPolling` (lines 147-151), but its ID stays in `public_lobby_ids`. The master checks `getPublicLobbyIDs()` and sees a lobby exists, so it won't schedule new games. Players see no joinable lobbies.

🔧 Suggested fix

```diff
 const info = game.gameInfo();
+const numClients = info.clients?.length ?? 0;
+const maxPlayers = (info.gameConfig as { maxPlayers?: number })?.maxPlayers;
+const isFull = maxPlayers !== undefined && numClients >= maxPlayers;
+
 await setLobbyInfo({
   gameID: info.gameID,
-  numClients: info.clients?.length ?? 0,
+  numClients,
   msUntilStart: info.msUntilStart,
   gameConfig: info.gameConfig,
   updatedAt: Date.now(),
 });

 // Remove from public lobbies if game started or is full
-if (info.msUntilStart !== undefined && info.msUntilStart <= 250) {
+if (
+  (info.msUntilStart !== undefined && info.msUntilStart <= 250) ||
+  isFull
+) {
   await removePublicLobbyID(gameID);
   await removeLobbyInfo(gameID);
 }
```
🧹 Nitpick comments (2)
Dockerfile (1)
46-46: Adding redis-server supports the new distributed lobby system.

This is fine for development and single-container deployments. For production at scale, consider using a managed Redis service or a separate Redis container for better reliability and monitoring.
src/server/WorkerLobbyService.ts (1)
143-152: Type cast needed due to `unknown` in LobbyInfo.

The cast `info.gameConfig as { maxPlayers?: number }` at line 146 is a workaround for `LobbyInfo.gameConfig` being typed as `unknown`. Once that interface uses `GameConfig`, this cast can be removed for better type safety.
Force-push: fbfd09c → 4c50428
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/client/LobbySocket.test.ts (1)
52-111: Fix async start and timer advance to stop CI failures.

`start()` is async, so the WebSocket may not exist yet, and polling timers may not fire without async timer advance.

🔧 Suggested test fix

```diff
-  it("delivers lobby updates from websocket messages", () => {
+  it("delivers lobby updates from websocket messages", async () => {
     const updates: LobbyUpdateData[] = [];
     const socket = new PublicLobbySocket((data) => updates.push(data));
-    socket.start();
+    await socket.start();
+    await Promise.resolve();
     const ws = MockWebSocket.instances.at(-1);
+    expect(ws).toBeTruthy();
     expect(ws?.url).toContain("/lobbies");
@@
-  it("falls back to HTTP polling after max websocket attempts", async () => {
+  it("falls back to HTTP polling after max websocket attempts", async () => {
     vi.useFakeTimers();
@@
-    socket.start();
+    await socket.start();
     const ws = MockWebSocket.instances.at(-1);
     ws?.dispatchEvent(new CloseEvent("close"));
     await Promise.resolve();
     expect(fetchMock).toHaveBeenCalledTimes(1);
-    vi.advanceTimersByTime(60);
+    await vi.advanceTimersByTimeAsync(60);
     await Promise.resolve();
     expect(fetchMock).toHaveBeenCalledTimes(2);
```
🤖 Fix all issues with AI agents
In `@src/client/LobbySocket.ts`:
- Around line 181-190: The HTTP fallback in fetchLobbiesHTTP is invalid because
/api/public_lobbies doesn't exist; either remove the entire fetch fallback in
fetchLobbiesHTTP (delete the fetch to `${this.workerPath}/api/public_lobbies`,
the response.ok check, JSON parsing, and the onLobbiesUpdate call) and rely on
the existing WebSocket /lobbies flow (invoke or await whatever WebSocket
request/handler currently populates lobbies), or implement the missing server
HTTP endpoint and correct the URL to include a leading slash
(`/${this.workerPath}/api/public_lobbies`) so the fetch works; reference:
fetchLobbiesHTTP, workerPath, onLobbiesUpdate, and the WebSocket /lobbies
endpoint. (A sketch of the corrected fallback follows.)
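If the route is implemented rather than the fallback removed, here is a minimal sketch of the normalized fetch. It assumes the server really exposes `GET /w{N}/api/public_lobbies` (per the prompt above, it currently does not) and extracts the handler as a standalone function purely for illustration; the class method would use `this.workerPath` and `this.onLobbiesUpdate` instead.

```ts
type LobbiesHandler = (lobbies: unknown) => void;

// Hypothetical standalone version of fetchLobbiesHTTP.
async function fetchLobbiesHTTP(
  workerPath: string,
  onLobbiesUpdate: LobbiesHandler,
): Promise<void> {
  // Normalize so both "w0" and "/w0" yield "/w0/api/public_lobbies".
  const prefix = workerPath.startsWith("/") ? workerPath : `/${workerPath}`;
  try {
    const response = await fetch(`${prefix}/api/public_lobbies`);
    if (!response.ok) return; // keep the current lobby list on failure
    onLobbiesUpdate(await response.json());
  } catch (error) {
    console.error("HTTP lobby poll failed:", error);
  }
}
```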
In `@src/client/PublicLobby.ts`:
- Around line 51-63: handleLobbiesUpdate computes serverTimeOffset from
this.lobbies[0].serverTime which can be undefined and produce NaN; fix by
locating a lobby with a valid numeric serverTime (e.g., find first l where
typeof l.serverTime === 'number' && isFinite(l.serverTime)) and only compute
this.serverTimeOffset = found.serverTime - Date.now(), otherwise default
this.serverTimeOffset to 0; also ensure when computing startsAt you consistently
use a numeric fallback (e.g., Date.now()) before subtracting
this.serverTimeOffset so lobbyIDToStart.set(l.gameID, (startsAt ?? Date.now()) -
this.serverTimeOffset) always uses finite numbers. (A sketch follows.)
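A minimal sketch of that guarded computation; the `GameInfoLike` shape is an assumed stand-in for the real `GameInfo` type.

```ts
// Assumed shape; the real GameInfo lives in src/core/Schemas.ts.
interface GameInfoLike {
  gameID: string;
  serverTime?: number;
  startsAt?: number;
}

function computeServerTimeOffset(lobbies: GameInfoLike[]): number {
  // Use the first lobby carrying a finite serverTime; otherwise assume no skew.
  const sample = lobbies.find(
    (l) => typeof l.serverTime === "number" && Number.isFinite(l.serverTime),
  );
  return sample?.serverTime !== undefined ? sample.serverTime - Date.now() : 0;
}

function localStartTimestamp(l: GameInfoLike, offset: number): number {
  // Fall back to "now" before subtracting, so the result is always finite.
  return (l.startsAt ?? Date.now()) - offset;
}
```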
In `@src/server/MasterLobbyService.ts`:
- Around line 63-72: handleWorkerReady currently calls startPolling every time
the last worker becomes ready, which can start duplicate schedulers after worker
restarts; add a one‑time guard boolean (e.g., this.schedulingStarted) on the
MasterLobbyService class, initialize it false, check it in handleWorkerReady
before calling startPolling and set it true immediately after starting polling
so subsequent calls to handleWorkerReady (or worker restarts) do not start
additional loops; reference the existing methods/fields: handleWorkerReady,
readyWorkers, startPolling, tick, and config.numWorkers(). (A minimal sketch follows this list.)
- Around line 74-78: The getAllLobbies method currently uses an implicit-return
arrow with an assignment inside forEach; change it to compute a single timestamp
(e.g., const now = Date.now()) and use a block-bodied forEach to set each
GameInfo.serverTime to that extracted now value so all lobbies share the same
timestamp and the update uses a more idiomatic block statement; adjust
references to this.workerLobbies, lobbies, and serverTime inside getAllLobbies
accordingly.
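A minimal sketch of the one-time guard from the first MasterLobbyService prompt above. The class shape, `numWorkers` plumbing, and polling interval are assumptions; only the `schedulingStarted` flag is the point.

```ts
class MasterLobbyServiceSketch {
  private readonly readyWorkers = new Set<number>();
  private schedulingStarted = false; // one-time guard

  constructor(private readonly numWorkers: number) {}

  handleWorkerReady(workerId: number): void {
    this.readyWorkers.add(workerId);
    // Even if a restarted worker re-reports ready later, only the first
    // "all workers ready" transition starts the scheduling loop.
    if (this.schedulingStarted) return;
    if (this.readyWorkers.size >= this.numWorkers) {
      this.schedulingStarted = true;
      this.startPolling();
    }
  }

  private startPolling(): void {
    setInterval(() => this.tick(), 1_000);
  }

  private tick(): void {
    // lobby aggregation / maybeScheduleLobby work elided
  }
}
```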
src/server/MasterLobbyService.ts (Outdated)
```ts
private getAllLobbies(): GameInfo[] {
  const lobbies = Array.from(this.workerLobbies.values()).flat();
  lobbies.forEach((g) => (g.serverTime = Date.now()));
  return lobbies;
}
```
🧩 Analysis chain
🏁 Script executed:
# Find the file and check its content
fd MasterLobbyService.ts --type fRepository: openfrontio/OpenFrontIO
Length of output: 99
🏁 Script executed:
# If found, examine lines around 74-78
rg -n "getAllLobbies" src/server/MasterLobbyService.ts -A 5 -B 2Repository: openfrontio/OpenFrontIO
Length of output: 536
🏁 Script executed:
# Check for Biome configuration
fd -e "biome.json" -e "biome.jsonc" -e ".biomeignore"Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Search for Biome rules or config references
rg "biome" --type json --type toml --type jsoncRepository: openfrontio/OpenFrontIO
Length of output: 96
🏁 Script executed:
# Check if there are any ESLint or Biome rule references
rg "forEach" src/server/MasterLobbyService.ts -B 3 -A 3Repository: openfrontio/OpenFrontIO
Length of output: 257
🏁 Script executed:
# Search for linting config in common locations
ls -la | head -20
Repository: openfrontio/OpenFrontIO
Length of output: 1252
🏁 Script executed:
# Check package.json for linting setup
rg '"biome"' --type json -A 5 -B 5
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Look for any ESLint or linting configuration
fd -e "eslint*" -e "tsconfig*" -e "prettier*"
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Read package.json to see what linting tools are used
cat package.json | head -80
Repository: openfrontio/OpenFrontIO
Length of output: 2997
🏁 Script executed:
# Search for Biome in package.json more carefully
cat package.json | grep -i biome
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Find ESLint configuration
fd -e "eslint.config*" -e ".eslintrc*"
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Check for ESLint config in package.json or root
cat package.json | grep -A 20 '"eslintConfig"'
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Look at the entire package.json to see if there's an eslintConfig section
wc -l package.json
Repository: openfrontio/OpenFrontIO
Length of output: 83
🏁 Script executed:
# Read the full package.json to find ESLint config
tail -100 package.json
Repository: openfrontio/OpenFrontIO
Length of output: 3062
🏁 Script executed:
# Search for ESLint config file - may be in different location
find . -maxdepth 2 -name "eslint.config.*" -o -name ".eslintrc*" 2>/dev/null
Repository: openfrontio/OpenFrontIO
Length of output: 85
🏁 Script executed:
# Check if there's an eslint config file by looking at common patterns
ls -la | grep eslint
Repository: openfrontio/OpenFrontIO
Length of output: 135
🏁 Script executed:
# The config might be js, let's search for it
rg "eslint" . --type js --max-count 3 -l
Repository: openfrontio/OpenFrontIO
Length of output: 85
🏁 Script executed:
# Read the ESLint config
cat eslint.config.js
Repository: openfrontio/OpenFrontIO
Length of output: 1754
Use a block statement and extract the timestamp for cleaner, more idiomatic TypeScript.
The arrow function with implicit return of an assignment expression is less conventional than a block statement. Extracting Date.now() ensures all lobbies receive the same timestamp value.
🧹 Suggested change
  private getAllLobbies(): GameInfo[] {
    const lobbies = Array.from(this.workerLobbies.values()).flat();
-   lobbies.forEach((g) => (g.serverTime = Date.now()));
+   const now = Date.now();
+   lobbies.forEach((g) => {
+     g.serverTime = now;
+   });
    return lobbies;
  }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
  private getAllLobbies(): GameInfo[] {
    const lobbies = Array.from(this.workerLobbies.values()).flat();
    lobbies.forEach((g) => (g.serverTime = Date.now()));
    return lobbies;
  }
After:
  private getAllLobbies(): GameInfo[] {
    const lobbies = Array.from(this.workerLobbies.values()).flat();
    const now = Date.now();
    lobbies.forEach((g) => {
      g.serverTime = now;
    });
    return lobbies;
  }
🧰 Tools
🪛 Biome (2.3.13)
[error] 76-76: This callback passed to forEach() iterable method should not return a value.
Either remove this return or remove the returned value.
(lint/suspicious/useIterableCallbackReturn)
🤖 Prompt for AI Agents
In `@src/server/MasterLobbyService.ts` around lines 74 - 78, The getAllLobbies
method currently uses an implicit-return arrow with an assignment inside
forEach; change it to compute a single timestamp (e.g., const now = Date.now())
and use a block-bodied forEach to set each GameInfo.serverTime to that extracted
now value so all lobbies share the same timestamp and the update uses a more
idiomatic block statement; adjust references to this.workerLobbies, lobbies, and
serverTime inside getAllLobbies accordingly.
4c50428 to
3451eb8
Compare
There was a problem hiding this comment.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/client/LobbySocket.test.ts (1)
50-79: ⚠️ Potential issue | 🟡 Minor
Await start() and send a PublicGames payload.
Line 54 calls an async method without await, so MockWebSocket.instances.at(-1) can be undefined, which matches the CI failure at Line 56. Also, the message payload should match PublicGames (serverTime + games), not { type, data }.
🛠️ Proposed fix
- it("delivers lobby updates from websocket messages", () => {
+ it("delivers lobby updates from websocket messages", async () => {
    const updates: PublicGames[] = [];
    const socket = new PublicLobbySocket((data) => updates.push(data));
-   socket.start();
+   await socket.start();
    const ws = MockWebSocket.instances.at(-1);
    expect(ws?.url).toContain("/lobbies");
    ws?.dispatchEvent(
      new MessageEvent("message", {
        data: JSON.stringify({
-         type: "lobbies_update",
-         data: [
+         serverTime: Date.now(),
+         games: [
            {
              gameID: "g1",
              numClients: 1,
-             serverTime: Date.now(),
              gameConfig: {
                maxPlayers: 2,
                gameMode: 0,
                gameMap: "Earth",
              },
            },
          ],
        }),
      }),
    );
🤖 Fix all issues with AI agents
In `@src/server/WorkerLobbyService.ts`:
- Around line 71-85: The mapping in sendMyLobbiesToMaster defaults gi.startsAt
to 0 which becomes epoch time; instead build each PublicGameInfo without forcing
startsAt to 0: only include startsAt when gi.startsAt is defined (e.g.,
conditionally add the property when gi.startsAt != null) so missing values are
omitted, and if PublicGameInfo currently requires startsAt, update its type to
make startsAt optional or allow undefined; locate sendMyLobbiesToMaster,
gm.publicLobbies(), gameInfo(), and the PublicGameInfo/WorkerLobbyList types to
apply the change and ensure the produced object still satisfies the type.
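A sketch of the conditional-property approach (the gameID/numClients/gameConfig field names are assumptions; the real shape lives in the PublicGameInfo type):

  // Omit startsAt entirely instead of defaulting it to 0 (epoch time).
  const lobbies = gm.publicLobbies().map((gi) => ({
    gameID: gi.gameID,
    numClients: gi.numClients,
    gameConfig: gi.gameConfig,
    // The spread adds startsAt only when a real value exists.
    ...(gi.startsAt != null ? { startsAt: gi.startsAt } : {}),
  }));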
In `@tests/client/LobbySocket.test.ts`:
- Around line 87-90: The fetchMock in LobbySocket.test.ts is returning an array
(json: async () => []) which does not match the PublicGames schema; update the
mockResolvedValue for fetchMock so the json() returns a valid PublicGames-shaped
object (e.g., the expected top-level fields and an empty games list) so the
LobbySocket polling/fallback code receives the correct structure; look for the
fetchMock variable in the test and change its mockResolvedValue to return the
proper PublicGames payload.
🧹 Nitpick comments (4)
src/server/Master.ts (1)
107-114: Consider guarding against malformed WORKER_ID from environment.
If WORKER_ID in the environment is somehow malformed or non-numeric, parseInt(workerId) returns NaN. This could cause unexpected behavior in lobbyService.removeWorker() and lobbyService.registerWorker().
🛡️ Optional guard for parseInt
  const workerIdNum = parseInt(workerId);
+ if (Number.isNaN(workerIdNum)) {
+   log.error(`Invalid WORKER_ID value: ${workerId}`);
+   return;
+ }
  lobbyService.removeWorker(workerIdNum);
src/client/LobbySocket.ts (2)
17-18: Dead code: fallbackPollInterval is never set.
The fallbackPollInterval field and stopFallbackPolling() method remain, but HTTP fallback logic was removed. This is now dead code. Consider removing or keeping only if you plan to restore fallback polling.
🧹 Suggested cleanup
  export class PublicLobbySocket {
    private ws: WebSocket | null = null;
    private wsReconnectTimeout: number | null = null;
-   private fallbackPollInterval: number | null = null;
    private wsConnectionAttempts = 0;
    private wsAttemptCounted = false;
    private workerPath: string = "";
    private readonly reconnectDelay: number;
    private readonly maxWsAttempts: number;
-   private readonly pollIntervalMs: number;
    ...
-   private stopFallbackPolling() {
-     if (this.fallbackPollInterval !== null) {
-       clearInterval(this.fallbackPollInterval);
-       this.fallbackPollInterval = null;
-     }
-   }
Also remove the call to this.stopFallbackPolling() in handleOpen() (line 79) and the pollIntervalMs option handling.
Also applies to: 152-157
109-113: Max WebSocket attempts reached: no user feedback in handleClose.
When max attempts are reached in handleClose(), only a console error is logged. In handleConnectError() (line 127), an alert is shown. Consider consistent user feedback in both paths.
🔔 Suggested consistency fix
  if (this.wsConnectionAttempts >= this.maxWsAttempts) {
    console.error("Max WebSocket attempts reached");
+   alert("error connecting to game service");
  } else {
    this.scheduleReconnect();
  }
src/client/PublicLobby.ts (1)
55-57: Redundant null check for this.publicGames.
Line 52 assigns publicGames to this.publicGames, so the check on line 55 is always true. This is harmless but slightly confusing. If you apply the guard above, this simplifies naturally.
tests/client/LobbySocket.test.ts
Outdated
  const fetchMock = vi.fn().mockResolvedValue({
    ok: true,
-   json: async () => ({ lobbies: [] }),
+   json: async () => [],
  });
There was a problem hiding this comment.
HTTP polling mock should return PublicGames shape.
Returning [] won’t match the schema and can make the fallback poll fail silently.
🛠️ Proposed fix
- const fetchMock = vi.fn().mockResolvedValue({
-   ok: true,
-   json: async () => [],
- });
+ const fetchMock = vi.fn().mockResolvedValue({
+   ok: true,
+   json: async () => ({ serverTime: Date.now(), games: [] }),
+ });
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
  const fetchMock = vi.fn().mockResolvedValue({
    ok: true,
-   json: async () => ({ lobbies: [] }),
+   json: async () => [],
  });
After:
  const fetchMock = vi.fn().mockResolvedValue({
    ok: true,
    json: async () => ({ serverTime: Date.now(), games: [] }),
  });
🤖 Prompt for AI Agents
In `@tests/client/LobbySocket.test.ts` around lines 87 - 90, The fetchMock in
LobbySocket.test.ts is returning an array (json: async () => []) which does not
match the PublicGames schema; update the mockResolvedValue for fetchMock so the
json() returns a valid PublicGames-shaped object (e.g., the expected top-level
fields and an empty games list) so the LobbySocket polling/fallback code
receives the correct structure; look for the fetchMock variable in the test and
change its mockResolvedValue to return the proper PublicGames payload.
3451eb8 to
ea6ed33
Compare
ea6ed33 to
98c2899
Compare
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/PublicLobby.ts (1)
15-17: ⚠️ Potential issue | 🔴 Critical
Fix: GameInfo is not imported - use PublicGameInfo.
The pipeline fails because GameInfo is not defined. This interface should use the imported PublicGameInfo type.
🐛 Fix the type reference
  export interface ShowPublicLobbyModalEvent {
-   lobby: GameInfo;
+   lobby: PublicGameInfo;
  }
🧹 Nitpick comments (2)
src/client/JoinLobbyModal.ts (1)
45-45: Unused state variable playerCount.
This @state() property is declared but never updated after initialization. Line 97 computes playerCount as a local constant from this.players?.length. Remove this dead state to avoid confusion.
🧹 Suggested removal
  @state() private players: ClientInfo[] = [];
- @state() private playerCount: number = 0;
  @state() private gameConfig: GameConfig | null = null;
src/client/LobbySocket.ts (1)
126-131: Replace alert() with a non-blocking message.
Using alert() blocks the main thread and provides poor UX. Consider using the same show-message custom event pattern used elsewhere in the codebase.
🔧 Suggested change
  if (this.wsConnectionAttempts >= this.maxWsAttempts) {
-   alert("error connecting to game service");
+   window.dispatchEvent(
+     new CustomEvent("show-message", {
+       detail: {
+         message: "Error connecting to game service",
+         color: "red",
+         duration: 5000,
+       },
+     }),
+   );
  } else {
98c2899 to
7fda52d
Compare
7fda52d to
c061353
Compare
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/server/Worker.ts (1)
77-97: ⚠️ Potential issue | 🟡 Minor
Normalize worker-stripped URL when only a query is present.
For a request like /w1?x=1, req.url becomes ?x=1 (missing leading /), which can break Express routing.
Suggested fix
  const actualPath = match[2] || "/";
+ const normalizedPath = actualPath.startsWith("/")
+   ? actualPath
+   : `/${actualPath}`;
  ...
- req.url = actualPath;
+ req.url = normalizedPath;
src/server/GameServer.ts (1)
84-92: ⚠️ Potential issue | 🟠 Major
Guard against missing startsAt for public games.
phase() now uses this.startsAt!; if a public game is created without it, comparisons become NaN and phase logic can flip to Active too early. Add a fallback or fail fast.
Suggested fallback (keeps old behavior)
  constructor(
    public readonly id: string,
    readonly log_: Logger,
    public readonly createdAt: number,
    private config: ServerConfig,
    public gameConfig: GameConfig,
    private lobbyCreatorID?: string,
    private startsAt?: number,
  ) {
    this.log = log_.child({ gameID: id });
+   if (this.gameConfig.gameType === GameType.Public && this.startsAt === undefined) {
+     this.startsAt = this.createdAt + this.config.gameCreationRate();
+   }
  }
Also applies to: 782-792
src/client/PublicLobby.ts (1)
45-64: ⚠️ Potential issue | 🟡 Minor
Potential null dereference after null check.
Line 49 checks if (this.publicGames) but line 52 accesses this.publicGames.games outside that block. Since this.publicGames is assigned on line 46, this works, but the logic would be clearer if line 52 also used the local variable or was inside the check.
Suggested fix for clarity
  private handleLobbiesUpdate(publicGames: PublicGames) {
    this.publicGames = publicGames;
    // Calculate offset between server time and client time
-   if (this.publicGames) {
-     this.serverTimeOffset = this.publicGames.serverTime - Date.now();
-   }
-   this.publicGames.games.forEach((l) => {
+   this.serverTimeOffset = publicGames.serverTime - Date.now();
+   publicGames.games.forEach((l) => {
      if (!this.lobbyIDToStart.has(l.gameID)) {
🧹 Nitpick comments (4)
src/client/LobbySocket.ts (1)
109-114: No fallback after max WebSocket attempts.
After 3 failed WebSocket attempts, the client just logs an error and stops trying. Consider whether a longer retry interval or user notification would be better UX.
src/server/MasterLobbyService.ts (1)
15-19: Unused interface MasterLobbyServiceOptions.
This interface is defined but not used by the constructor. Either use it as the constructor parameter type or remove it.
Option 1: Use the interface
- constructor(
-   private config: ServerConfig,
-   private playlist: MapPlaylist,
-   private log: winston.Logger,
- ) {}
+ constructor(options: MasterLobbyServiceOptions) {
+   this.config = options.config;
+   this.playlist = options.playlist;
+   this.log = options.log;
+ }
Option 2: Remove unused interface
-export interface MasterLobbyServiceOptions {
-  config: ServerConfig;
-  playlist: MapPlaylist;
-  log: typeof logger;
-}
src/client/JoinLobbyModal.ts (1)
45-45: Unused state property playerCount.
Line 97 derives player count from this.players?.length ?? 0 at render time. The playerCount state property on line 45 appears unused. Consider removing it.
Suggested removal
  @state() private players: ClientInfo[] = [];
- @state() private playerCount: number = 0;
  @state() private gameConfig: GameConfig | null = null;
src/server/IPCBridgeSchema.ts (1)
8-15: Type exports reference schemas defined later.
TypeScript resolves type-position references regardless of declaration order, so this compiles. However, placing types after their schema definitions would be more readable.
Alternative organization
+// --- Worker Messages ---
+
+const WorkerLobbyListSchema = z.object({
+  type: z.literal("lobbyList"),
+  lobbies: z.array(PublicGameInfoSchema),
+});
+
+const WorkerReadySchema = z.object({
+  type: z.literal("workerReady"),
+  workerId: z.number(),
+});
+
+export const WorkerMessageSchema = z.discriminatedUnion("type", [
+  WorkerLobbyListSchema,
+  WorkerReadySchema,
+]);
+
  export type WorkerLobbyList = z.infer<typeof WorkerLobbyListSchema>;
  export type WorkerReady = z.infer<typeof WorkerReadySchema>;
+export type WorkerMessage = z.infer<typeof WorkerMessageSchema>;
+
+// ... same for master messages
c061353 to
ccdb044
Compare
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/JoinLobbyModal.ts (1)
535-550: ⚠️ Potential issue | 🟡 Minor
Consider using serverTime offset for accurate countdown.
PublicLobby computes serverTimeOffset = serverTime - Date.now() to sync clocks, but JoinLobbyModal.updateFromLobby() uses lobby.startsAt directly without offset adjustment. Since GameInfo now includes serverTime, the same offset calculation should apply here.
🛠️ Suggested fix
  private updateFromLobby(lobby: GameInfo) {
    this.players = lobby.clients ?? [];
-   this.lobbyStartAt = lobby.startsAt ?? null;
+   if (lobby.startsAt !== undefined) {
+     const serverTimeOffset = lobby.serverTime - Date.now();
+     this.lobbyStartAt = lobby.startsAt - serverTimeOffset;
+   } else {
+     this.lobbyStartAt = null;
+   }
    this.syncCountdownTimer();
🧹 Nitpick comments (2)
src/server/Worker.ts (1)
51-56: WorkerLobbyService integration looks correct.
The noServer: true option for WebSocketServer combined with delegating upgrade handling to lobbyService is the right pattern. The service encapsulates lobby WebSocket concerns cleanly.
One issue: WorkerLobbyService does not receive workerId in its constructor, but it needs this to validate worker prefixes on WebSocket upgrades (as noted in the review of that file).
Suggested change
- const lobbyService = new WorkerLobbyService(server, wss, gm, log);
+ const lobbyService = new WorkerLobbyService(server, wss, gm, log, workerId);
And update WorkerLobbyService constructor accordingly.
src/server/MasterLobbyService.ts (1)
56-60: Consider: removeWorker could leave stale lobby data temporarily.
When a worker crashes, removeWorker clears its lobbies. However, if broadcastLobbies or maybeScheduleLobby is running concurrently, they may see inconsistent state briefly. In practice this is low risk since:
- Broadcasts are frequent (250ms) so stale data clears quickly
- Scheduling only creates new lobbies, doesn't depend on exact counts
Not blocking, just noting for awareness.
ccdb044 to
e20afc6
Compare
e20afc6 to
ec67b6d
Compare
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/server/GameServer.ts (1)
84-92: ⚠️ Potential issue | 🟠 Major
Guard public games when startsAt is missing.
If a public game is created without startsAt (e.g., via the HTTP create_game path), phase logic uses undefined and the game can jump to Active immediately; lobby broadcasts can also send invalid data. Please enforce startsAt for public games or fall back to a safe value.
🛠️ Suggested guard
  // Public Games
+ if (this.startsAt === undefined) {
+   this.log.warn("Public game missing startsAt", { gameID: this.id });
+   return GamePhase.Lobby;
+ }
- const lessThanLifetime = Date.now() < this.startsAt!;
+ const lessThanLifetime = now < this.startsAt;
@@
- const warmupOver = now > this.startsAt! + 30 * 1000;
+ const warmupOver = now > this.startsAt + 30 * 1000;
Also applies to: 782-792, 812-814
src/core/Schemas.ts (1)
1-3: ⚠️ Potential issue | 🟡 Minor
Use correct import attribute syntax to fix JSON module inconsistency.
Align JSON imports with the codebase pattern used in PlayerPanel.ts and other files. Use with { type: "json" } instead of plain imports to ensure consistent module handling across the codebase.
🛠️ Suggested fix
-import countries from "resources/countries.json";
+import countries from "resources/countries.json" with { type: "json" };
-import quickChatData from "resources/QuickChat.json";
+import quickChatData from "resources/QuickChat.json" with { type: "json" };
src/client/JoinLobbyModal.ts (1)
28-31: ⚠️ Potential issue | 🔴 Critical
Change to type-only import for JoinLobbyEvent.
JoinLobbyEvent is an interface exported from Main.ts but only used in type positions (line 339 uses as JoinLobbyEvent). Import it as a type to avoid unused runtime code:
Suggested fix
-import { JoinLobbyEvent } from "./Main";
+import type { JoinLobbyEvent } from "./Main";
Also applies to: 332-343
🧹 Nitpick comments (1)
src/server/MasterLobbyService.ts (1)
97-131: The polling loop already handles async errors; consider optional try/catch for method-specific logging.
The startPolling helper in PollingLoop.ts already catches rejected promises with .catch() and logs them (lines 16-18), so there is no risk of unhandled rejections. However, wrapping maybeScheduleLobby() in try/catch could still improve debugging by logging errors with more specific context about what failed during scheduling, rather than only the generic polling-level error. This is optional and helpful for clearer error messages, not required to prevent crashes.
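If that extra context is wanted, a sketch of the wrapped scheduling step (tick is named in this PR's service, but its full body and the startPolling signature are assumptions):

  private async tick(): Promise<void> {
    try {
      await this.maybeScheduleLobby();
    } catch (err) {
      // Scheduling-specific context on top of the generic polling-level logging.
      this.log.error("lobby scheduling failed", { err });
    }
  }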
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/JoinLobbyModal.ts (1)
331-342: ⚠️ Potential issue | 🔴 Critical
Fix missing JoinLobbyEvent type import (CI failure).
The file uses JoinLobbyEvent but no longer imports it, which matches the CI error. Add the import so the build passes.
🛠️ Suggested fix
  import {
    ClientInfo,
    GAME_ID_REGEX,
    GameConfig,
    GameInfo,
    GameRecordSchema,
+   JoinLobbyEvent,
    LobbyInfoEvent,
  } from "../core/Schemas";
🤖 Fix all issues with AI agents
In `@src/server/GameServer.ts`:
- Around line 784-792: The phase() method uses the optional property startsAt
with non-null assertions (this.startsAt!) which can be undefined for public
games created via the REST API; update phase() to guard against undefined
startsAt before using it (check this.startsAt !== undefined or typeof
this.startsAt === "number") and handle the absent case (treat as no scheduled start: skip
lessThanLifetime/warmup logic or return Lobby/appropriate default), or
alternatively enforce startsAt at creation by validating in the constructor or
the REST creation path (Worker.ts / MasterLobbyService) to throw/require
startsAt for GameType.Public; update references in phase() (the lessThanLifetime
and warmupOver calculations) to use the guarded value.
🧹 Nitpick comments (2)
src/client/LobbySocket.ts (1)
104-108: Consider retry with different worker on max attempts.
When max attempts are reached, the code just logs an error. Since workers are chosen randomly, trying a different worker might succeed if the original one is down.
Optional: retry with different worker
  private handleClose() {
    console.log("WebSocket disconnected, attempting to reconnect...");
    if (!this.wsAttemptCounted) {
      this.wsAttemptCounted = true;
      this.wsConnectionAttempts++;
    }
    if (this.wsConnectionAttempts >= this.maxWsAttempts) {
-     console.error("Max WebSocket attempts reached");
+     console.error("Max WebSocket attempts reached, trying different worker");
+     this.wsConnectionAttempts = 0;
+     this.start(); // Pick new random worker and retry
    } else {
      this.scheduleReconnect();
    }
  }
src/client/PublicLobby.ts (1)
45-63: Remove stale lobby start entries when lobbies disappear.
lobbyIDToStart never drops old game IDs, so long sessions can grow the map without bound. Prune entries that are no longer in the current lobby list.
♻️ Possible cleanup
  private handleLobbiesUpdate(publicGames: PublicGames) {
    this.publicGames = publicGames;
    // Calculate offset between server time and client time
    if (this.publicGames) {
      this.serverTimeOffset = this.publicGames.serverTime - Date.now();
    }
+   const activeIds = new Set(this.publicGames.games.map((g) => g.gameID));
+   for (const id of this.lobbyIDToStart.keys()) {
+     if (!activeIds.has(id)) this.lobbyIDToStart.delete(id);
+   }
    this.publicGames.games.forEach((l) => {
      if (!this.lobbyIDToStart.has(l.gameID)) {
        // Convert server's startsAt to client time by subtracting offset
        const startsAt = l.startsAt ?? Date.now();
        this.lobbyIDToStart.set(l.gameID, startsAt - this.serverTimeOffset);
      }
  const lessThanLifetime = Date.now() < this.startsAt!;
  const notEnoughPlayers =
    this.gameConfig.gameType === GameType.Public &&
    this.gameConfig.maxPlayers &&
    this.activeClients.length < this.gameConfig.maxPlayers;
  if (lessThanLifetime && notEnoughPlayers) {
    return GamePhase.Lobby;
  }
- const warmupOver =
-   now > this.createdAt + this.config.gameCreationRate() + 30 * 1000;
+ const warmupOver = now > this.startsAt! + 30 * 1000;
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the GameServer.ts file structure around the constructor and startsAt usage
fd "GameServer.ts" --type f
Repository: openfrontio/OpenFrontIO
Length of output: 91
🏁 Script executed:
# Check the constructor and initialization of startsAt
rg -n "private startsAt" src/server/GameServer.ts -A 2 -B 2
Repository: openfrontio/OpenFrontIO
Length of output: 233
🏁 Script executed:
# Look for where startsAt is assigned/set
rg -n "this.startsAt\s*=" src/server/GameServer.ts -B 3 -A 3
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Check the createGame method and how public games are initialized
rg -n "createGame" src/server/GameServer.ts -A 10
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Search for GameServer instantiation and startsAt parameter
rg -n "new GameServer" src/ -A 5
Repository: openfrontio/OpenFrontIO
Length of output: 344
🏁 Script executed:
# Get more context around the GameServer instantiation in GameManager.ts
sed -n '63,85p' src/server/GameManager.ts
Repository: openfrontio/OpenFrontIO
Length of output: 670
🏁 Script executed:
# Look at the full GameServer constructor to understand all parameters
sed -n '80,100p' src/server/GameServer.ts
Repository: openfrontio/OpenFrontIO
Length of output: 692
🏁 Script executed:
# Check all places where startsAt is used with ! operator
rg -n "startsAt!" src/server/GameServer.ts -B 1 -A 1
Repository: openfrontio/OpenFrontIO
Length of output: 295
🏁 Script executed:
# Look at the phase() method more carefully to understand the context
sed -n '780,820p' src/server/GameServer.ts
Repository: openfrontio/OpenFrontIO
Length of output: 1070
🏁 Script executed:
# Check if startsAt is set anywhere after construction
rg -n "startsAt" src/server/GameServer.ts | head -20
Repository: openfrontio/OpenFrontIO
Length of output: 258
🏁 Script executed:
# Look for any method that might set startsAt or handle public game initialization
rg -n "public.*game\|private.*game\|startsAt" src/server/GameServer.ts -A 2 | head -40
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Check GameManager to see if there's different handling for public vs private games
sed -n '50,100p' src/server/GameManager.ts
Repository: openfrontio/OpenFrontIO
Length of output: 1218
🏁 Script executed:
# Check if there's a separate factory or builder for public games
rg -n "GameType.Public" src/server/ -B 3 -A 5
Repository: openfrontio/OpenFrontIO
Length of output: 4517
🏁 Script executed:
# Check where MapPlaylist creates games to see if startsAt is provided for public games
rg -n "this.gameManager.createGame\|gameManager.createGame" src/server/ -B 2 -A 5
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Check the Worker.ts file where public games are created (line 149 mentioned GameType.Public)
sed -n '140,170p' src/server/Worker.ts
Repository: openfrontio/OpenFrontIO
Length of output: 1124
🏁 Script executed:
# Search for all callers of createGame to understand the pattern
rg -n "\.createGame\(" src/server/ -B 1 -A 3
Repository: openfrontio/OpenFrontIO
Length of output: 1276
🏁 Script executed:
# Check if startsAt is assigned anywhere after construction
rg -n "this\.startsAt\s*=" src/server/GameServer.ts
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Look at the full context of the phase() method to understand the flow
sed -n '760,800p' src/server/GameServer.ts
Repository: openfrontio/OpenFrontIO
Length of output: 1170
🏁 Script executed:
# Check the git history to see what changed regarding startsAt
git log --oneline -20 -- src/server/GameServer.ts
Repository: openfrontio/OpenFrontIO
Length of output: 90
🏁 Script executed:
# Look at how the code was before the startsAt field was added
git show HEAD:src/server/GameServer.ts | sed -n '780,800p'
Repository: openfrontio/OpenFrontIO
Length of output: 625
🏁 Script executed:
# Check all parameters passed to createGame for public games specifically
rg -n "gameType.*Public\|GameType.Public" src/server/ -B 5 -A 5 | grep -A 5 "createGame"
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Let me check the complete createGame method signature and all its call sites more carefully
ast-grep --pattern 'createGame(
$$$
) {
$$$
}'
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Check if startsAt is supposed to be required for public games by looking at related code
rg -n "startsAt" src/server/ --type ts -B 2 -A 2 | head -50
Repository: openfrontio/OpenFrontIO
Length of output: 2643
🏁 Script executed:
# Check MasterLobbyService to understand public game creation better
sed -n '90,130p' src/server/MasterLobbyService.ts
Repository: openfrontio/OpenFrontIO
Length of output: 1033
🏁 Script executed:
# Verify if all public games go through MasterLobbyService
rg -n "PublicGameInfo\|MasterCreateGame" src/server/ -B 2 -A 2
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Check if there are any direct private game creations with startsAt
rg -n "createGame.*startsAt\|GameType.Private" src/server/ -B 2 -A 2 | head -30
Repository: openfrontio/OpenFrontIO
Length of output: 49
🏁 Script executed:
# Check the context around line 528 in Worker.ts to see what type of game is created
sed -n '520,540p' src/server/Worker.ts
Repository: openfrontio/OpenFrontIO
Length of output: 293
🏁 Script executed:
# Check the playlist.get1v1Config() to confirm it creates public games
rg -n "get1v1Config\|get1v1" src/server/ -A 5
Repository: openfrontio/OpenFrontIO
Length of output: 293
Non-null assertion on optional startsAt breaks phase logic at runtime for public games created via the REST API.
The constructor accepts startsAt as optional, but phase() uses this.startsAt! for public game logic (lines 784, 792). While public games scheduled via MasterLobbyService always provide startsAt, public games created directly through the REST API endpoint (Worker.ts) do not pass this parameter, leaving it undefined. The comparisons in phase() then evaluate against undefined (NaN) and misbehave when phase() executes.
Ensure startsAt is required for public games at creation time, or guard the field before using it.
🤖 Prompt for AI Agents
In `@src/server/GameServer.ts` around lines 784 - 792, The phase() method uses the
optional property startsAt with non-null assertions (this.startsAt!) which can
be undefined for public games created via the REST API; update phase() to guard
against undefined startsAt before using it (check this.startsAt !== undefined or
typeof this.startsAt === "number") and handle the absent case (treat as no
scheduled start: skip lessThanLifetime/warmup logic or return Lobby/appropriate
default), or alternatively enforce startsAt at creation by validating in the
constructor or the REST creation path (Worker.ts / MasterLobbyService) to
throw/require startsAt for GameType.Public; update references in phase() (the
lessThanLifetime and warmupOver calculations) to use the guarded value.
ec67b6d to
d3f724b
Compare
d3f724b to
b57d927
Compare
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/server/MasterLobbyService.ts`:
- Around line 75-79: The current getAllLobbies method in MasterLobbyService
collects lobbies from workerLobbies and sorts them with the newest/most-future
starts first, making index 0 the farthest start; change the comparator to sort
ascending so the soonest-starting lobby is first (i.e., reverse the subtraction
order in the sort comparator used in getAllLobbies over the PublicGameInfo list, and
handle possible undefined startsAt safely if needed).
b57d927 to
1267dfe
Compare
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/server/MasterLobbyService.ts`:
- Around line 75-80: The sort comparator in getAllLobbies currently subtracts
a.startsAt from itself (a.startsAt! - a.startsAt) so sorting is a no-op; update
the comparator to subtract b.startsAt instead (use a.startsAt! - b.startsAt!) so
the Array.from(this.workerLobbies.values()).flat().sort(...) call on
PublicGameInfo entries orders by startsAt correctly; ensure you reference the
startsAt property on both operands in the comparator.
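Combining both prompts, a corrected comparator might look like this (the ?? now fallback is an assumption for lobbies that have no startsAt yet):

  private getAllLobbies(): PublicGameInfo[] {
    const now = Date.now();
    return Array.from(this.workerLobbies.values())
      .flat()
      // Ascending: soonest-starting lobby first; undefined treated as "now".
      .sort((a, b) => (a.startsAt ?? now) - (b.startsAt ?? now));
  }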
1267dfe to
910e456
Compare
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/JoinLobbyModal.ts (1)
331-342: ⚠️ Potential issue | 🟡 Minor
Add source: "public" to the joinPublicLobby event dispatcher.
The JoinLobbyEvent type is already imported from ./Main. However, when dispatching the join-lobby event for public lobbies, add source: "public" to the event detail. This tells Main.ts to skip URL updates, which prevents unnecessary cache-busting of lobby preview metadata.
🛠️ Suggested fix
  private joinPublicLobby(lobbyId: string) {
    // Dispatch join-lobby event to actually connect to the lobby
    this.dispatchEvent(
      new CustomEvent("join-lobby", {
        detail: {
          gameID: lobbyId,
          clientID: this.currentClientID,
+         source: "public",
        } as JoinLobbyEvent,
🤖 Fix all issues with AI agents
In `@src/server/Master.ts`:
- Around line 108-115: The code currently calls parseInt on workerId and passes
the result to lobbyService.removeWorker without checking for NaN; update the
block around workerId/workerIdNum in Master.ts to validate parseInt returned a
valid number (use Number.isNaN or isNaN) after const workerIdNum =
parseInt(workerId) and if it is NaN log a clear error referencing the original
workerId and return early instead of calling lobbyService.removeWorker; apply
the same validation pattern where parseInt(workerId) is used for registerWorker
to avoid calling registerWorker/removeWorker with NaN.
  if (workerId === undefined) {
    log.error(`worker crashed could not find id`);
    return;
  }

  const workerIdNum = parseInt(workerId);
  lobbyService.removeWorker(workerIdNum);
There was a problem hiding this comment.
Validate workerId before parsing.
parseInt can return NaN; then removeWorker/registerWorker runs with NaN and skips cleanup.
🛠️ Suggested guard
- const workerIdNum = parseInt(workerId);
+ const workerIdNum = Number.parseInt(workerId, 10);
+ if (!Number.isFinite(workerIdNum)) {
+ log.error(`worker crashed with invalid id: ${workerId}`);
+ return;
+ }
  lobbyService.removeWorker(workerIdNum);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
  if (workerId === undefined) {
    log.error(`worker crashed could not find id`);
    return;
  }
  const workerIdNum = parseInt(workerId);
  lobbyService.removeWorker(workerIdNum);
After:
  if (workerId === undefined) {
    log.error(`worker crashed could not find id`);
    return;
  }
  const workerIdNum = Number.parseInt(workerId, 10);
  if (!Number.isFinite(workerIdNum)) {
    log.error(`worker crashed with invalid id: ${workerId}`);
    return;
  }
  lobbyService.removeWorker(workerIdNum);
🤖 Prompt for AI Agents
In `@src/server/Master.ts` around lines 108 - 115, The code currently calls
parseInt on workerId and passes the result to lobbyService.removeWorker without
checking for NaN; update the block around workerId/workerIdNum in Master.ts to
validate parseInt returned a valid number (use Number.isNaN or isNaN) after
const workerIdNum = parseInt(workerId) and if it is NaN log a clear error
referencing the original workerId and return early instead of calling
lobbyService.removeWorker; apply the same validation pattern where
parseInt(workerId) is used for registerWorker to avoid calling
registerWorker/removeWorker with NaN.
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/JoinLobbyModal.ts (1)
62-74: ⚠️ Potential issue | 🟠 Major
Apply server time offset to startsAt before storing in lobbyStartAt.
Line 537 stores startsAt directly from the server without adjusting for the server-client clock difference. The GameInfo payload includes both startsAt (server time) and serverTime (sent reference), but JoinLobbyModal does not calculate or apply the offset. This causes countdown and timeout checks at lines 85-86 and 600 to drift if server and client clocks differ. Follow the pattern in PublicLobby.ts: calculate serverTimeOffset = lobby.serverTime - Date.now(), then store lobby.startsAt - serverTimeOffset (or adjust when reading). Also applies to line 73.
🤖 Fix all issues with AI agents
In `@src/server/MasterLobbyService.ts`:
- Around line 75-78: The sort and max computations assume
PublicGameInfo.startsAt exists and can produce NaN when it is missing; update getAllLobbies and the
code that computes next startsAt (the Math.max usage around lines 105-108) to
guard against undefined by treating undefined as a very small/zero value or by
filtering them out: in getAllLobbies() ensure you only sort items with a defined
startsAt (or use a fallback like 0 or Number.MIN_SAFE_INTEGER in the
comparator), and when calling Math.max over lobbies map only their startsAt
values that are numbers (or provide a default numeric fallback) so the result
cannot be NaN and newly created lobbies receive a valid numeric startsAt.
  private getAllLobbies(): PublicGameInfo[] {
    const lobbies = Array.from(this.workerLobbies.values())
      .flat()
      .sort((a, b) => a.startsAt! - b.startsAt);
There was a problem hiding this comment.
Guard startsAt to avoid NaN scheduling.
Line 78 and Line 106 assume startsAt is always set. If any lobby has startsAt undefined, Math.max and the sort can produce NaN, and new lobbies get a NaN startsAt. Please add a safe fallback.
🛠️ Suggested fix
private getAllLobbies(): PublicGameInfo[] {
- const lobbies = Array.from(this.workerLobbies.values())
+ const now = Date.now();
+ const lobbies = Array.from(this.workerLobbies.values())
.flat()
- .sort((a, b) => a.startsAt! - b.startsAt);
+ .sort((a, b) => (a.startsAt ?? now) - (b.startsAt ?? now));
return lobbies;
}
@@
- const lastStart = lobbies.reduce(
- (max, pb) => Math.max(max, pb.startsAt),
- Date.now(),
- );
+ const lastStart = lobbies.reduce(
+ (max, pb) => Math.max(max, pb.startsAt ?? max),
+ Date.now(),
+ );Also applies to: 105-108
🤖 Prompt for AI Agents
In `@src/server/MasterLobbyService.ts` around lines 75 - 78, The sort and max
computations assume PublicGameInfo.startsAt exists and can produce NaN when it is missing; update
getAllLobbies and the code that computes next startsAt (the Math.max usage
around lines 105-108) to guard against undefined by treating undefined as a very
small/zero value or by filtering them out: in getAllLobbies() ensure you only
sort items with a defined startsAt (or use a fallback like 0 or
Number.MIN_SAFE_INTEGER in the comparator), and when calling Math.max over
lobbies map only their startsAt values that are numbers (or provide a default
numeric fallback) so the result cannot be NaN and newly created lobbies receive
a valid numeric startsAt.
Description:
Currently only the master process sends public lobby updates to clients. This is not scalable since it could overload the master process.
In this PR, the master uses IPC to send public lobby info to all workers. Then clients connect to a random worker to get public lobby updates via websocket. This way clients never connect directly to the master websocket.
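A rough sketch of the master-side fan-out under this design (the publicLobbiesUpdate type name is hypothetical; the real message shapes live in IPCBridgeSchema.ts):

  import cluster from "node:cluster";

  // Every 100ms, push the aggregated lobby list to every worker over IPC;
  // workers then relay it to their connected lobby WebSocket clients.
  setInterval(() => {
    const lobbies = getAllLobbies(); // aggregated per-worker lobby lists
    for (const worker of Object.values(cluster.workers ?? {})) {
      worker?.send({ type: "publicLobbiesUpdate", lobbies });
    }
  }, 100);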
The flow looks like this:
Every 100ms:
Please complete the following:
Please put your Discord username so you can be contacted if a bug or regression is found:
evan