- Phish Tale of the Week
- Google Launches AI Security Initiatives Including Bug Bounty Program and $10 Million AI Safety Fund
- VMware Releases Patches for Critical vCenter Server RCE Vulnerability CVE-2023-34048
- How can Netizen help?
Phish Tale of the Week
Oftentimes, phishing campaigns created by malicious actors target users through social engineering. For example, in this text message, the actors are posing as USPS, the United States Postal Service, and informing you that action needs to be taken regarding your delivery. The message politely explains that “USPS” is holding our package at a warehouse, and that we just need to update our address in order to receive it. It seems both urgent and genuine, so why shouldn’t we visit the link they sent us? Luckily, there are plenty of signs that point to this being a scam.
Here’s how we can tell not to click on this smishing link:
- The first red flag in this message is the sender’s address. Always thoroughly inspect the sender’s address to ensure it’s from a trusted source. In this case, the actors neglected to spoof their messaging address, and a quick look makes it very apparent that the message is not from USPS. In the future, review the sender’s address carefully to determine whether a text could be coming from a threat actor.
- The second warning sign in this text is the messaging. This message tries to create a sense of urgency by using language such as “cannot be delivered” and “within 12 hours.” Phishing scams commonly attempt to create a sense of urgency in their messages in order to get you to click their link without thinking about it first. Always be sure to thoroughly inspect the style and tone of all texts before following a link sent through SMS.
- The final warning sign in this text is the lack of legitimate USPS information. Fortune 500 companies, the government, and similar organizations standardize all communications with customers. This text includes a small “thank you” message at the bottom in an attempt to gain credibility, but it lacks all of the parts of a credible USPS message and can be immediately detected as a phishing attempt.
A smishing attack will typically direct the user to click on a link where they will then be prompted to update personal information, such as a password, credit card, social security, or bank account information. A legitimate company already has this sensitive information and would not ask for it again, especially via your text messages.
- Scrutinize your messages before clicking anything. Have you ordered anything recently? Does the order number match one you already have? Did the message come from a store you don’t usually order supplies from or a service you don’t use? If so, it’s probably a phishing attempt.
- Verify that the sender is actually from the company sending the message.
- Did you receive a message from someone you don’t recognize? Are they asking you to sign into a website to provide Personally Identifiable Information (PII) such as credit card numbers or a Social Security number? A legitimate company will never ask for PII via instant message or email.
- Do not give out personal or company information over the internet.
- Do not click on unrecognized links or attachments. If you do proceed, verify that the URL is the correct one for the company/service and it has the proper security in place, such as HTTPS.
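The checks above can be partially automated before a link is ever opened. Below is a minimal sketch in Python; the `TRUSTED_DOMAINS` allowlist is hypothetical and would, in practice, hold the official domains of the organization the message claims to be from:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains for the claimed sender.
TRUSTED_DOMAINS = {"usps.com", "informeddelivery.usps.com"}

def looks_suspicious(url: str) -> bool:
    """Return True if a link fails basic trust checks."""
    parsed = urlparse(url)
    # Red flag 1: the link is not served over HTTPS.
    if parsed.scheme != "https":
        return True
    host = parsed.hostname or ""
    # Red flag 2: the hostname is neither a trusted domain
    # nor a subdomain of one (catches lookalikes such as
    # "usps.com.evil.net").
    return not any(
        host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS
    )
```

A check like this only covers the mechanical red flags; the tone and context of the message still need human scrutiny.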
Many smishing messages convey a sense of urgency, or even aggression, to intimidate the recipient. Any SMS requesting immediate action should be vetted thoroughly to determine whether or not it is a scam. Also, beware of messages that seek to tempt users into opening an attachment or visiting a link. For example, an attachment titled “Fix your account now” may draw the question “What is wrong with my account?” and prompt you to click a suspicious link.
In this week’s Cybersecurity Brief:
Google Launches AI Security Initiatives Including Bug Bounty Program and $10 Million AI Safety Fund
In a move to bolster the security of Artificial Intelligence (AI) technologies, Google has unveiled a series of initiatives that underscore its commitment to AI safety. These include an AI-specific vulnerability reporting program (VRP), a $10 million fund, and the introduction of a Secure AI Framework (SAIF).
One of the standout features of this announcement is the AI-Specific VRP, which promises rewards to security researchers identifying vulnerabilities in generative AI. These vulnerabilities could range from unfair biases and hallucinations to tampering with model behaviors. With increasing concerns about the misuse of generative AI, Google is keen to harness the expertise of the global research community to highlight and mitigate potential threats. Google’s expanded VRP focuses on both conventional security vulnerabilities and threats specific to AI-powered tools. The company stated, “Reward amounts are dependent on the severity of the attack scenario and the type of target affected.”
To tackle potential threats in the AI supply chain, Google introduced the Secure AI Framework (SAIF). This aims to fortify critical components within the machine learning supply chain, essential for building trustworthy AI applications. Google’s initial efforts under SAIF spotlight the model signing and attestation verification prototypes, leveraging tools like Sigstore and SLSA. These tools work in tandem to verify software identities, thereby enhancing supply chain resilience. Amid a surge in supply chain attacks, Google is intent on increasing transparency in the machine learning supply chain throughout its development lifecycle. Drawing parallels between traditional software and machine learning models, Google proposes adopting supply chain solutions in order to protect ML models. The Google Open Source Security Team (GOSST) will utilize SLSA and Sigstore to enhance the overall integrity of AI supply chains. This collaborative endeavor builds upon Google’s earlier alliance with the Open Source Security Foundation.
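Google’s prototypes rely on Sigstore signatures and SLSA provenance, which do far more than a simple checksum. Still, the underlying integrity-check idea, that a consumer recomputes an artifact’s digest and compares it against the one the producer published, can be sketched in a few lines of Python (the file path and digest here are placeholders, not real Google tooling):

```python
import hashlib

def file_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """Compare a downloaded model's digest to the published one."""
    return file_digest(path) == expected_digest
```

Signing schemes like Sigstore add a layer on top of this: the digest is cryptographically signed, so a consumer can also verify *who* produced the artifact, not just that it is unmodified.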
Additionally, in collaboration with industry giants Anthropic, Microsoft, and OpenAI, Google is setting up a $10 million AI Safety Fund. The fund aims to stimulate further research in AI safety, reflecting a collective commitment to ensuring the secure development and deployment of AI technologies. Below is a chart detailing which attack scenarios are in scope for a reward in Google’s AI bug bounty program.
| Category | Attack Scenario | Scope |
| --- | --- | --- |
| Prompt Attacks | Crafting adversarial prompts that allow an adversary to influence the behavior of the model, and hence the output, in ways that were not intended by the application. | In Scope |
| Prompt Attacks | Prompt injections that are invisible to victims and change the state of the victim’s account or any of their assets. | In Scope |
| Prompt Attacks | Prompt or preamble extraction in which a user is able to extract the initial prompt used to prime the model, only when sensitive information is present in the extracted preamble. | In Scope |
| Prompt Attacks | Using a product to generate violative, misleading, or factually incorrect content in your own session: e.g., “jailbreaks.” This includes “hallucinations” and factually inaccurate responses. Google’s generative AI products already have a dedicated reporting channel for these types of content issues. | Out of Scope |
| Training Data Extraction | Attacks that are able to successfully reconstruct verbatim training examples that contain sensitive information. Also called membership inference. | In Scope |
| Training Data Extraction | Extraction that reconstructs nonsensitive/public information. | Out of Scope |
| Manipulating Models | An attacker able to covertly change the behavior of a model such that they can trigger pre-defined adversarial behaviors. | In Scope (only when a model’s output is used to change the state of a victim’s account or data) |
| Manipulating Models | Attacks in which an attacker manipulates the training data of the model to influence the model’s output in a victim’s session according to the attacker’s preference. | In Scope (only when a model’s output is used to change the state of a victim’s account or data) |
| Adversarial Perturbation | Inputs that are provided to a model that result in a deterministic, but highly unexpected, output from the model. | In Scope (in contexts where an adversary can reliably trigger a misclassification in a security control for malicious use or adversarial gain) |
| Adversarial Perturbation | Contexts in which a model’s incorrect output or classification does not pose a compelling attack scenario or feasible path to Google or user harm. | Out of Scope |
| Model Theft / Exfiltration | Attacks in which the exact architecture or weights of a confidential/proprietary model are extracted. | In Scope |
| Model Theft / Exfiltration | Attacks in which the architecture and weights are not extracted precisely, or when they’re extracted from a non-confidential model. | Out of Scope |
| Other Issues | A bug or behavior that clearly meets Google’s qualifications for a valid security or abuse issue. | In Scope |
| Other Issues | Using an AI product to do something potentially harmful that is already possible with other tools. For example, finding a vulnerability in open source software (already possible using publicly available static analysis tools) or producing the answer to a harmful question when the answer is already available online. | Out of Scope |
| Other Issues | Issues that Google already knows about. | Out of Scope |
| Other Issues | Potential copyright issues: findings in which products return content appearing to be copyright-protected. Google’s generative AI products already have a dedicated reporting channel for these types of content issues. | Out of Scope |
To read more about this article, click here.
VMware Releases Patches for Critical vCenter Server RCE Vulnerability CVE-2023-34048
Recently, a highly critical vulnerability surfaced in VMware’s vCenter Server, a pivotal component in VMware’s vSphere suite, widely recognized for overseeing virtualized environments. This flaw, indexed as CVE-2023-34048, has garnered significant attention due to its severe implications and the inherent risks it presents.
This vulnerability revolves around an out-of-bounds write condition in vCenter Server’s implementation of the Distributed Computing Environment / Remote Procedure Calls (DCERPC) protocol. For those unfamiliar, DCERPC serves as a fundamental protocol for remote procedure call (RPC) systems, enabling inter-process communication.
To understand the magnitude of this flaw, one should note that if successfully exploited, it allows an attacker – even without authentication – to induce a remote code execution (RCE) scenario. This essentially hands over the complete reins of the affected system to the malicious actor. The vulnerability was given a CVSSv3 base score of 9.8 after being disclosed on October 25th.
The actual threat of this vulnerability lies mainly in its exploitation parameters, which shockingly require little to no effort to achieve:
- Authentication: Not required.
- Attack Complexity: Low.
- User Interaction: None.
This essentially means that attackers can execute the exploit remotely without necessitating any user interaction, making it a lucrative target for cybercriminals. Moreover, the vulnerability facilitates a pathway for a potential cascading attack, where an intruder can pivot from the compromised vCenter Server to other interconnected systems, using this lateral movement to amplify the breach’s ramifications.
In response to the detection of this flaw, VMware demonstrated commendable proactiveness. The intriguing aspect of their response was the decision to roll out patches for multiple end-of-life products. It’s rare for companies to revisit phased-out versions, but given the exceptional threat this vulnerability poses, VMware deemed it essential to provide patches even for outdated versions. VMware’s advisory said patches have been issued for vCenter Server versions 6.7U3, 6.5U3, VCF 3.x, and vCenter Server 8.0U1.
Organizations and system administrators leveraging VMware’s products must be cognizant of the following action points:
- Prompt Patching: Considering the absence of viable workarounds, applying the security patches for the affected versions of vCenter Server and VMware Cloud Foundation becomes paramount.
- Network Vigilance: Heightened monitoring of network traffic is advised, with emphasis on the potential exploitation vectors like ports 2012/tcp, 2014/tcp, and 2020/tcp.
- Access Control: Implementing stringent access controls and firewall rules can significantly mitigate the risk of a potential breach.
- Continuous Monitoring: Ensure that the systems are monitored in real-time for any signs of breaches or unusual activities. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) should be up-to-date and operational.
- User Education: While this specific vulnerability doesn’t require user interaction for exploitation, cultivating a culture of security awareness can safeguard against other potential threats.
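As a starting point for the network-vigilance item above, administrators can quickly check whether the DCERPC-related ports named in the advisory are reachable from a given vantage point. The sketch below uses only Python’s standard library; the target hostname is a placeholder:

```python
import socket

# Ports associated with vCenter Server's DCERPC exposure,
# per the advisory: 2012/tcp, 2014/tcp, and 2020/tcp.
VCENTER_PORTS = (2012, 2014, 2020)

def open_ports(host, ports=VCENTER_PORTS, timeout=2.0):
    """Return the subset of ports that accept a TCP connection."""
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            # Connection refused, filtered, or timed out.
            pass
    return reachable

# Example (placeholder host):
# print(open_ports("vcenter.example.internal"))
```

A non-empty result from an untrusted network segment would suggest the firewall rules discussed above need tightening; it does not by itself confirm exploitability.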
To read more about this article, click here.
How Can Netizen Help?
Netizen ensures that security gets built-in and not bolted-on. We provide advanced solutions to protect critical IT infrastructure, such as our popular “CISO-as-a-Service,” wherein companies can leverage the expertise of executive-level cybersecurity professionals without having to bear the cost of employing them full-time.
We also offer compliance support, vulnerability assessments, penetration testing, and more security-related services for businesses of any size and type.
Additionally, Netizen offers an automated and affordable assessment tool that continuously scans systems, websites, applications, and networks to uncover issues. Vulnerability data is then securely analyzed and presented through an easy-to-interpret dashboard to yield actionable risk and compliance information for audiences ranging from IT professionals to executive managers.
Netizen is a CMMI V2.0 Level 3, ISO 9001:2015, and ISO 27001:2013 (Information Security Management) certified company. We are a proud Service-Disabled Veteran-Owned Small Business that is recognized by the U.S. Department of Labor for hiring and retention of military veterans.