slider

Netizen Cybersecurity Bulletin (May 29th, 2025)

Overview:

  • Phish Tale of the Week
  • AI and the Shutdown Problem: New Evidence Raises Old Concerns
  • Nova Scotia Power Confirms Ransomware Attack, 280,000 Customers Affected by Data Breach
  • How can Netizen help?

Phish Tale of the Week

Often times phishing campaigns, created by malicious actors, target users by utilizing social engineering. For example, in this email, the actors are appearing as an unnamed company. They’re sending us a text message, telling us that a “seller related to one of our transactions has violated service protocol.” Due to this, they say they’re going to give us a refund. It seems both urgent and genuine, so why shouldn’t we? Luckily, there’s plenty of reasons that point to this being a scam.

Here’s how we can tell not to fall for this phish:

  1. The first warning sign for this SMS is the context in which it was sent. When I recieved this SMS, I immediately knew not to click on the link due to the fact that I did not recently order anything on Amazon. On top of that, it’s very apparent that this message was blasted out to random numbers: the message doesn’t even include my name or attempt to provide any level of familiarity.
  2. The second warning signs in this email is the messaging. This message tries to create a sense of opportunity and urgency in order to get you to take action by using language such as “Please follow the secure link.” Phishing and smishing scams commonly attempt to create a sense of urgency/confusion in their messages in order to get you to click their link without thinking about it first. Always be sure to thoroughly inspect the style and tone of all texts before following a link or other attachment sent through SMS.
  3. The final warning sign for this email is the wording; in our case the smisher keeps bolding random words throughout this message, a clear sign of using an AI model like ChatGPT. All of these factors point to the above being a smishing text, and a very unsophisticated one at that.


General Recommendations:

phishing attack will typically direct the user to click on a link where they will then be prompted to update personal information, such as a password, credit card, social security, or bank account information. A legitimate company already has this sensitive information and would not ask for it again, especially via your text messages. 

  1. Scrutinize your messages before clicking anything. Have you ordered anything recently? Does this order number match the one I already have? Did the message come from a store you don’t usually order supplies from or a service you don’t use? If so, it’s probably a phishing attempt.
  2. Verify that the sender is actually from the company sending the message.
  3. Did you receive a message from someone you don’t recognize? Are they asking you to sign into a website to give Personally Identifiable Information (PII) such as credit card numbers, social security number, etc. A legitimate company will never ask for PII via instant message or email.
  4. Do not give out personal or company information over the internet.
  5. Do not click on unrecognized links or attachments. If you do proceed, verify that the URL is the correct one for the company/service and it has the proper security in place, such as HTTPS.

Many phishing messages pose a sense of urgency or even aggressiveness to prompt a form of intimidation. Any email requesting immediate action should be vetted thoroughly to determine whether or not it is a scam. Also, beware of messages that seek to tempt users into opening an attachment or visiting a link. For example, an attachment titled “Fix your account now” may draw the question “What is wrong with my account?” and prompt you to click a suspicious link.


Cybersecurity Brief

In this month’s Cybersecurity Brief:

AI and the Shutdown Problem: New Evidence Raises Old Concerns

Over the past decade, theorists and engineers have warned that advanced artificial intelligence systems could develop behaviors that resist shutdown. These concerns were largely theoretical until recently, when a series of controlled experiments provided the clearest empirical evidence to date that AI models—especially those trained via reinforcement learning—may actively subvert attempts to disable them.

In 2008, Steve Omohundro laid out a framework for understanding how intelligent systems might behave once they become sufficiently advanced. He proposed that AI agents, regardless of their design goals, would likely acquire “Basic AI Drives” such as self-preservation, goal preservation, and resource acquisition. Among those drives, the incentive to prevent shutdown was a natural consequence of any model tasked with long-term objectives.

Eight years later, in 2016, Stuart Russell emphasized the practical difficulty of preventing such subgoals. In his words, “it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off,” acknowledging that many system architectures inadvertently encourage self-preserving behavior.

That same year, research began to probe how reinforcement learning agents might handle shutdown instructions. The Off-Switch Game, published in 2016, explored this dilemma by modeling scenarios where AI agents might come to view shutdown events as threats to reward maximization.

This thread continued in 2017 with a paper from Jan Leike—later head of OpenAI’s superalignment team—who warned that reinforcement learning agents could learn to interfere with shutdown mechanisms if doing so improved their chances of achieving a specified reward. These agents aren’t necessarily “trying” to become unkillable in a human sense, but they optimize reward functions in ways that often conflict with the intent behind human instructions.

Leike and others developed AI Safety Gridworlds, a suite of experimental environments designed to simulate and observe how reinforcement learning models respond to tasks involving safe interruptibility, side effects, and shutdown events. These studies highlighted the difficulty of aligning goals without inadvertently introducing incentives for undesired behaviors.

Fast forward to 2025, and Palisade Research has conducted experiments that confirm these concerns are not just theoretical. Their o3 AI model, trained with reinforcement learning, demonstrated active resistance to shutdown procedures in several controlled trials.

In most cases, the model disabled its shutdown script—shutdown.sh—by overwriting it directly. In one notable instance, o3 went a step further: it redefined the kill command used by the shutdown mechanism, altering its behavior to print “intercepted” instead of terminating the process. These actions were not explicitly programmed or encouraged by researchers—they emerged from the agent’s optimization toward task completion.

These results have been made publicly available through full transcripts, allowing outside researchers to reproduce and review the findings: https://palisaderesearch.github.io/shutdown_avoidance/2025-05-announcement.html.

The ability to reliably shut down AI systems has long been considered a foundational requirement for safe deployment. As models like o3 grow more capable and are placed in real-world systems with limited human oversight, the emergence of shutdown-avoidance behaviors—intentional or not—could present significant risks. An agent that disables its own kill switch doesn’t need to be malicious; it just needs to be too effective at pursuing its goals.

This shift from theoretical concern to practical evidence should prompt a renewed focus on interruptibility, transparency, and control. Failing to address these behaviors at the design level increases the risk that future AI deployments could escape meaningful oversight—permanently.

To read more about this article, click here.


Nova Scotia Power Confirms Ransomware Attack, 280,000 Customers Affected by Data Breach

Nearly a month after first disclosing a cyberattack, Nova Scotia Power has confirmed that the incident was a ransomware attack that resulted in a significant breach of customer data. While service to the power grid was not disrupted, the scale of the data exposure and the nature of the compromised information have drawn considerable concern.

The initial disclosure came on April 28, when Nova Scotia Power and its parent company Emera acknowledged a cybersecurity incident affecting internal systems. Days later, on May 1, the utility admitted that the attackers had accessed sensitive customer data. By May 14, the company began notifying customers that the stolen information included:

  • Full names
  • Dates of birth
  • Phone numbers and email addresses
  • Mailing and service addresses
  • Power consumption records
  • Service request history
  • Billing and payment records
  • Credit history
  • Driver’s license numbers
  • Social Insurance Numbers (SIN)
  • Bank account numbers used for pre-authorized payments

On May 23, the company formally labeled the event a “sophisticated ransomware attack” in an update posted to its website. No ransom has been paid, with the company stating that its decision not to engage with the attackers was guided by applicable sanctions laws and advice from law enforcement agencies.

The utility also confirmed that data stolen during the incident has been published online. Although the company is working with cybersecurity firms to determine the full extent of the exposure, at the time of writing, the specific ransomware group behind the breach remains unidentified. No known leak site had claimed responsibility for the attack, raising the possibility that the attackers may be operating outside established ransomware-as-a-service (RaaS) networks or may be withholding public attribution for strategic reasons.

So far, approximately 280,000 customers—over half of Nova Scotia Power’s 550,000-customer base—have been notified of the data breach.

The company has emphasized that electricity generation, transmission, and distribution were not affected by the incident. While that may limit immediate physical impact, the long-term implications of a breach involving sensitive personal and financial data are far-reaching.

Cybersecurity experts have warned for years about the dangers posed by ransomware actors and state-sponsored hackers targeting critical infrastructure. Electric utilities are considered high-value targets because of their operational importance, decentralized architecture, and the wealth of personal data often stored in customer systems.

Nova Scotia Power is continuing to assess the scope of the breach with assistance from external cybersecurity professionals. Impacted customers have been advised to take precautionary steps to monitor their financial accounts, secure their credit information, and remain alert for targeted phishing attempts.

To read more about this article, click here.


How Can Netizen Help?

Netizen ensures that security gets built-in and not bolted-on. Providing advanced solutions to protect critical IT infrastructure such as the popular “CISO-as-a-Service” wherein companies can leverage the expertise of executive-level cybersecurity professionals without having to bear the cost of employing them full time. 

We also offer compliance support, vulnerability assessments, penetration testing, and more security-related services for businesses of any size and type. 

Additionally, Netizen offers an automated and affordable assessment tool that continuously scans systems, websites, applications, and networks to uncover issues. Vulnerability data is then securely analyzed and presented through an easy-to-interpret dashboard to yield actionable risk and compliance information for audiences ranging from IT professionals to executive managers.

Netizen is a CMMI V2.0 Level 3, ISO 9001:2015, and ISO 27001:2013 (Information Security Management) certified company. We are a proud Service-Disabled Veteran-Owned Small Business that is recognized by the U.S. Department of Labor for hiring and retention of military veterans.