Conversation
zerebos left a comment:
This is missing some necessary sanity checking; take a look at the detectspam event: it should check whether the message is in a guild, whether the author is a bot, channel permissions, etc. This also isn't toggleable; it feels like it should be collapsed into detect spam.
There are also other code-level issues, but I can fix those after merging.
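For reference, a rough sketch of those guard clauses, assuming discord.js v14 (the helper name `shouldScan` and the Manage Messages requirement are assumptions, not from this PR):

```js
const {PermissionFlagsBits} = require('discord.js');

// Hypothetical guard clauses mirroring the sanity checks described above.
function shouldScan(message) {
    if (!message.inGuild()) return false;   // ignore DMs
    if (message.author.bot) return false;   // ignore bot messages
    const me = message.guild.members.me;
    const perms = message.channel.permissionsFor(me);
    // Deleting spam requires Manage Messages in the channel.
    return Boolean(perms?.has(PermissionFlagsBits.ManageMessages));
}
```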
Pull request overview
This PR refactors the spam detection system by consolidating multiple regex patterns and their validation logic into a unified data-driven approach using a pattern configuration array.
Changes:
- Refactored spam detection from separate regex variables to a centralized `phishingPatterns` array with configurable predicates
- Updated embed logging to support multiple reasons instead of a single reason
- Removed error handling for message deletion (commented out old code but removed the try-catch from the new implementation)
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 10 comments.
```js
const hosts = links.map(match => {
    const url = match[0];
    const fullUrl = url.startsWith('http') ? url : `https://${url}`;
    return URL.parse(fullUrl)?.host;
}).filter(Boolean);
```
Missing error handling for URL parsing. `new URL()` throws on malformed input, and the static `URL.parse()` (which returns `null` instead) is only available on Node.js 22.1+. Guard the URL parsing so the entire spam detection cannot fail on a malformed URL.
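A minimal hardening sketch along those lines, keeping the PR's mapping but guarding the parse with a try-catch (illustrative only, not the PR's code):

```js
const hosts = links.map(match => {
    const url = match[0];
    const fullUrl = url.startsWith('http') ? url : `https://${url}`;
    try {
        return new URL(fullUrl).host; // throws TypeError on malformed input
    }
    catch {
        return null; // skip the bad match instead of aborting detection
    }
}).filter(Boolean);
```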
```js
{
    regex: /(?:http[s]?:\/\/.)?(?:www\.)?[-a-zA-Z0-9@%._+~#=]{2,256}\.[a-z]{2,6}\b[-a-zA-Z0-9@:%_+.~#?&\/=]*/ig,
    whitelist: [],
    predicate: (links, self) => links.length == self.maxCount, // this should probably be more than 4 later on.
```
The equality check should use strict equality. Replace `==` with `===` to avoid type coercion issues and follow JavaScript best practices.
Suggested change:
```diff
-predicate: (links, self) => links.length == self.maxCount, // this should probably be more than 4 later on.
+predicate: (links, self) => links.length === self.maxCount, // this should probably be more than 4 later on.
```
```js
        const fullUrl = url.startsWith('http') ? url : `https://${url}`;
        return URL.parse(fullUrl)?.host;
    }).filter(Boolean);
    return hosts.some(host => !self.whitelist.includes(host));
```
The whitelist check compares against `host`, which may include a port number, so legitimate URLs with an explicit port could be flagged. Compare against `hostname` instead, or normalize the value before comparing.
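For illustration, the difference between the two properties in the WHATWG URL API (example URL invented):

```js
const u = new URL('https://discord.com:8443/invite/abc');
console.log(u.host);     // 'discord.com:8443' (includes the port)
console.log(u.hostname); // 'discord.com' (port stripped)
```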
```js
const phishingPatterns = [
    {
        regex: /([a-zA-Z-\\.]+)?d[il][il]?scorr?(cl|[ldb])([a-zA-Z-\\.]+)?\.(com|net|app|gift|ru|uk)/ig,
        whitelist: ['discord.com', 'discordapp.com'],
```
The pattern will incorrectly match `betterdiscord.app`, which was explicitly whitelisted in the old code. Add `betterdiscord.app` to the whitelist array to maintain the same behavior as the previous implementation.
Suggested change:
```diff
-whitelist: ['discord.com', 'discordapp.com'],
+whitelist: ['discord.com', 'discordapp.com', 'betterdiscord.app'],
```
```js
    whitelist: [],
    predicate: (links, self) => links.length == self.maxCount, // this should probably be more than 4 later on.
```
This generic URL regex pattern is too broad and will match many legitimate URLs. It will trigger on any message containing 4+ URLs regardless of their legitimacy. Consider adding common legitimate domains to the whitelist or refining this pattern to avoid false positives.
Suggested change:
```diff
-whitelist: [],
-predicate: (links, self) => links.length == self.maxCount, // this should probably be more than 4 later on.
+whitelist: [
+    'discord.com',
+    'discordapp.com',
+    'steamcommunity.com',
+    'github.com',
+    'gitlab.com',
+    'bitbucket.org',
+    'google.com',
+    'youtube.com',
+    'youtu.be',
+    'twitch.tv',
+    'twitter.com',
+    'x.com',
+    'reddit.com'
+],
+predicate: (links, self) => {
+    const suspiciousLinks = links.filter(match => {
+        const url = match[0].toLowerCase();
+        return !self.whitelist.some(domain => url.includes(domain));
+    });
+    return suspiciousLinks.length === self.maxCount; // this should probably be more than 4 later on.
+},
```
```js
const phishingPatterns = [
    {
        regex: /([a-zA-Z-\\.]+)?d[il][il]?scorr?(cl|[ldb])([a-zA-Z-\\.]+)?\.(com|net|app|gift|ru|uk)/ig,
        whitelist: ['discord.com', 'discordapp.com'],
        predicate: (links, self) => {
            const hosts = links.map(match => {
                const url = match[0];
                const fullUrl = url.startsWith('http') ? url : `https://${url}`;
                return URL.parse(fullUrl)?.host;
            }).filter(Boolean);
            return hosts.some(host => !self.whitelist.includes(host));
        },
        reason: 'Fake Discord Domain'
    },
    {
        regex: /str?e[ea]?mcomm?m?un[un]?[un]?[tl]?[il][tl]?ty\.(com|net|ru|us)/ig,
        whitelist: ['steamcommunity.com'],
        predicate: (links, self) => {
            const hosts = links.map(match => {
                const url = match[0];
                const fullUrl = url.startsWith('http') ? url : `https://${url}`;
                return URL.parse(fullUrl)?.host;
            }).filter(Boolean);
            return hosts.some(host => !self.whitelist.includes(host));
        },
```
This predicate function is duplicated in the second pattern (lines 30-37). Consider extracting this into a shared helper function to reduce code duplication and improve maintainability.
Suggested change:
```diff
-const phishingPatterns = [
-    {
-        regex: /([a-zA-Z-\\.]+)?d[il][il]?scorr?(cl|[ldb])([a-zA-Z-\\.]+)?\.(com|net|app|gift|ru|uk)/ig,
-        whitelist: ['discord.com', 'discordapp.com'],
-        predicate: (links, self) => {
-            const hosts = links.map(match => {
-                const url = match[0];
-                const fullUrl = url.startsWith('http') ? url : `https://${url}`;
-                return URL.parse(fullUrl)?.host;
-            }).filter(Boolean);
-            return hosts.some(host => !self.whitelist.includes(host));
-        },
-        reason: 'Fake Discord Domain'
-    },
-    {
-        regex: /str?e[ea]?mcomm?m?un[un]?[un]?[tl]?[il][tl]?ty\.(com|net|ru|us)/ig,
-        whitelist: ['steamcommunity.com'],
-        predicate: (links, self) => {
-            const hosts = links.map(match => {
-                const url = match[0];
-                const fullUrl = url.startsWith('http') ? url : `https://${url}`;
-                return URL.parse(fullUrl)?.host;
-            }).filter(Boolean);
-            return hosts.some(host => !self.whitelist.includes(host));
-        },
+function hasNonWhitelistedHost(links: RegExpMatchArray[], whitelist: string[]): boolean {
+    const hosts = links.map(match => {
+        const url = match[0];
+        const fullUrl = url.startsWith('http') ? url : `https://${url}`;
+        return URL.parse(fullUrl)?.host;
+    }).filter(Boolean);
+    return hosts.some(host => !whitelist.includes(host as string));
+}
+
+const phishingPatterns = [
+    {
+        regex: /([a-zA-Z-\\.]+)?d[il][il]?scorr?(cl|[ldb])([a-zA-Z-\\.]+)?\.(com|net|app|gift|ru|uk)/ig,
+        whitelist: ['discord.com', 'discordapp.com'],
+        predicate: (links, self) => hasNonWhitelistedHost(links, self.whitelist),
+        reason: 'Fake Discord Domain'
+    },
+    {
+        regex: /str?e[ea]?mcomm?m?un[un]?[un]?[tl]?[il][tl]?ty\.(com|net|ru|us)/ig,
+        whitelist: ['steamcommunity.com'],
+        predicate: (links, self) => hasNonWhitelistedHost(links, self.whitelist),
```
```js
    whitelist: [],
    predicate: (links, self) => links.length == self.maxCount, // this should probably be more than 4 later on.
    reason: 'Potential Scam Message',
    maxCount: 4
```
The magic number 4 is used for `maxCount` without explanation. Consider making this a named constant at the top of the file with a descriptive name like `MAX_URLS_BEFORE_SPAM_CHECK`, or add a comment explaining why 4 URLs is the threshold.
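A sketch of that suggestion (the constant name is the reviewer's proposal, not existing code):

```js
// Messages containing this many generic URLs are treated as potential spam.
const MAX_URLS_BEFORE_SPAM_CHECK = 4;

// ...used in the pattern entry:
// maxCount: MAX_URLS_BEFORE_SPAM_CHECK
```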
```js
const hosts = links.map(match => {
    const url = match[0];
    const fullUrl = url.startsWith('http') ? url : `https://${url}`;
    return URL.parse(fullUrl)?.host;
```
A note on `URL.parse`: the legacy `url.parse()` from Node's `url` module is deprecated in favor of the WHATWG URL API, but the static `URL.parse()` method used here is part of that WHATWG API. It was added in Node.js 22.1 and returns `null` for invalid input rather than throwing; on older runtimes it is undefined. Either require Node.js 22.1+ or replace it with `new URL(fullUrl).hostname` wrapped in a try-catch.
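A version-tolerant sketch that feature-detects the static method (an assumed helper, not part of the PR):

```js
// Prefer the non-throwing static URL.parse() (Node.js >= 22.1); otherwise
// fall back to the throwing constructor wrapped in a try-catch.
const parseUrl = typeof URL.parse === 'function'
    ? s => URL.parse(s)
    : s => { try { return new URL(s); } catch { return null; } };

console.log(parseUrl('https://steamcommunity.com/id/x')?.hostname); // 'steamcommunity.com'
console.log(parseUrl('http://')); // null on both paths
```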
This is a low-level check; please double-check it on your local machine when testing.
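For that local check, a small smoke test might look like this (it assumes `phishingPatterns` is exported from the module under review and that the runtime is Node.js 22.1+, since the predicates call the static `URL.parse()`; the module path and sample strings are invented):

```js
const {phishingPatterns} = require('./spamDetection'); // assumed export/path

const samples = [
    'free nitro at discorcl.gift/claim',                     // typosquatted Discord domain
    'rare skins on steamcommunlty.ru/trade',                 // typosquatted Steam domain
    'see https://discord.com and https://steamcommunity.com' // legitimate, should pass
];

for (const text of samples) {
    for (const pattern of phishingPatterns) {
        const links = [...text.matchAll(pattern.regex)];
        if (links.length && pattern.predicate(links, pattern)) {
            console.log(`flagged "${text}" as: ${pattern.reason}`);
        }
    }
}
```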