I can't believe this is still a thing, but if your risk model is noticeably impacted by the adversarial capability of _writing an email in the English language_, then I'm pretty sure your threat model is already broken.
To prove the point that users will continue to click links, regardless of how obvious it is that they shouldn't, I worked last month with the person in charge of the monthly phishing training at $dayjob. Historically, they have used the hated ruses like fake gift cards, and I wanted to get away from that, especially during the holidays. We ended up using something to the effect of the following:
---
Hello <first name>,
Happy Holidays. This is the monthly phishing test. Yes, really. It's not a trick. Use the <phishing reporting function> to report this as phishing. If you do not know how to use <phishing reporting function>, feel free to ask a colleague. If you still have questions, search for <phishing reporting function> on <internal docs site>.
Do not click the following link, as it is there for metrics and clicking it will cause you to be assigned phishing awareness training: <phishing training 'malicious' link>
Sincerely,
IT Security Team
---
I don't know how well it was received by users, but I do know that we still had more clicks than in two other months of 2023, despite users being explicitly told not to click the link. Users will always click links with their link-clicking machines. Relying on their discretion is either ignorant or, I expect in some cases, malicious, in that there will always be a scapegoat to blame for the inevitable breach.
This got more responses than I'm used to, which is brilliant, but I don't think I can respond to them all. And based on some of the responses, I don't think I was entirely clear, so here's a bit of a follow-up:
It's possible there is a baseline of clicks recorded by previews, scanners, and users attempting to be careful in how they approach the link (e.g., `curl | less`). However, this is an enterprise product that has been in use for a while, including by this org, and if it were assigning training to users who didn't click, I would think it would have been addressed by now. I don't know for sure, though, since I don't run that software.
Several people mentioned potential reasons for users clicking: they're curious, they don't care about the org, they're trying to get a new laptop, the training makes for an easy workload for part of a day, etc. The thing is, I don't care. At all. My point was to prove that links will continue to get clicked, regardless of how well users are trained or informed. Intent and blame are meaningless here. What matters is that systems are built with that expectation in mind from the start. And while basic user training is beneficial, it provides no security benefit beyond checking a compliance checkbox.
As far as metrics relative to the other months of "training" in 2023 go, the number of views was roughly the same as in other months, the number of reported emails was above average but not as high as in some months with attempted ruses, and the number of clicks was higher than in two of the other months. Read into that what you will, but my only takeaway is that links get clicked.
I also didn't mention that a big part of why I approached the phishing trainer when I did is the human element. The end of the year, with the holidays and layoffs all over the place, is stressful enough on its own. Creating false hope for something like a bonus or gift in the name of security or training is an idea that needs to die. Users, otherwise known as the people who actually keep the org running, are already stressed. Don't make things worse.
If your org uses a third-party solution for phishing training, it is likely that all of the test emails contain a specific header. Mail filtering is generally configured so that messages carrying this header bypass rules and make it to all inboxes as intended. The header is also often used to prevent rewriting of the URLs in links if your org runs a system that does so (Proofpoint, Barracuda, etc.).
As an employee, if you don't want to bother with the regular phishing training, look at the message details and see if you can find the header used to bypass protections in your org. Some of the common ones are:
- `X-Phishtest`
- `X-ThreatSim-Header`
- `X-ThreatSim-ID`
- `X-PhishMeTracking`
- `X-PhishMe`
Then in your mail client, set up a rule to take whatever action you wish. You can create an alert, move the message to a specific folder, or even execute a program or script if IT hasn't disabled that function.
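If you'd rather script the check than click through a mail client's rule editor, the detection itself is trivial. A minimal Python sketch, using the header names from the list above (the sample message and function name are my own illustration, not anything from a real product):

```python
import email
from email import policy

# Headers commonly injected by third-party phishing-training platforms.
# This list is illustrative; inspect your own org's test messages for the
# actual header names in use.
PHISH_TEST_HEADERS = {
    "X-Phishtest",
    "X-ThreatSim-Header",
    "X-ThreatSim-ID",
    "X-PhishMeTracking",
    "X-PhishMe",
}

def is_phishing_test(raw_message: str) -> bool:
    """Return True if the message carries a known phishing-test header."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    # Header lookups on Message objects are case-insensitive.
    return any(header in msg for header in PHISH_TEST_HEADERS)

sample = "X-PhishMe: campaign-1234\nSubject: Urgent\n\nClick here!"
print(is_phishing_test(sample))  # True
```

From there, whatever action you take (alerting, moving to a folder) is up to your mail setup; the point is only that one header lookup is all it takes to spot the test.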
I fully support those of you of a chaotic persuasion who take the URLs from your org's phishing messages and fully enumerate the unique-identifier section. Just brute force it and see if everyone gets assigned phishing training.
It used to be that, as an attacker, you could include all of those headers and likely bypass filters, because the org had set a basic allow rule for one of them for phishing training. However, more orgs have finally either moved to a third-party mail service that usually does a better job of filtering, or they are getting around to properly configuring SPF, DKIM, and DMARC with strict rules that specify which sending domains are allowed to use the headers mentioned above. YMMV, of course.
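For reference, the strict setup described above boils down to a couple of DNS TXT records. A rough sketch, with `example.com` and the provider include as stand-ins for your own domain and mail service:

```
; SPF: only the listed sources may send mail as example.com;
; everything else hard-fails ("-all").
example.com.        IN TXT "v=spf1 mx include:_spf.mailprovider.example -all"

; DMARC: reject mail that fails SPF/DKIM alignment and send
; aggregate reports to the address of your choosing.
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The allow rule for the phishing-training header then gets scoped to the vendor's specific sending domains rather than to any message that happens to carry the header.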