The Most Significant AI-related Risks in 2024
https://www.bugcrowd.com/blog/the-most-significant-ai-related-risks-in-2024/ | January 10, 2024

This blog was originally posted on Medium.com by Matt Held, Technical Customer Success Manager at Bugcrowd. 

AI changes the threat landscape, by a lot.

In my daily life as a Cybersecurity Solutions Architect, I see countless vulnerabilities coming into bug bounties, vulnerability disclosure programs, pen tests, and every other form of intake for security vulnerabilities. AI has made it easier for hackers to find and report these things, but these are not the risks that break the major bones in our very digital society. It's (mostly) not about vulnerabilities in applications or services; it's about losing the human factor, losing trust in digital media, and potentially losing any privacy. Here is my forecast of the most prevalent and concerning challenges arising from artificial intelligence in 2024, and it's not all about cybersecurity.

1. Deepfakes and their Scams

Many of us have seen the Tom Hanks deepfake where his AI-clone promotes a dental health plan. Or the TikTok video falsely presenting MrBeast offering new iPhones for only $2.

Source

Deepfakes are now as easy to produce as ordering a logo online or getting takeout. Plenty of sellers on popular gig-economy platforms will produce realistic-enough deepfakes of anyone (including yourself, or any other person you have a picture of, a voice sample of, or both) for a few bucks.

Armed with that, a threat actor can make any person, including celebrities, appear to do anything, from promoting fake crypto sites to altering history books. This is extremely critical at a time when we are rewriting history books for better accuracy, and not only from the winning party's point of view.

Exhibit A: The image below took one minute to produce.

Image composed by author with Midjourney.

Deepfake usage more than doubled in 2023 compared to 2022. And according to a 2023 report by Onfido, a leading ID verification company, there was a staggering 3,000% increase in deepfake fraud attempts this year as well.

With image generators and refined models, one can already create realistic verification images for any type of online verification, all while keeping the original identifiers of that person intact (looks, biometrics, lighting, and more) and without looking generic.

Image created by Reddit user _harsh_ with Stable Diffusion| Source

Unsurprisingly, the adult entertainment industry has rapidly embraced emerging technologies, leading to a troubling surge in the creation of non-consensual explicit content. Whatever you feed an AI, it will create, much to the suffering of people who never consented to having their face and body turned into those of an adult film actor or actress. Tragically, this misuse extends to the creation of illegal and morally reprehensible material involving minors (CSAM). The ease with which AI can be used for such purposes highlights a significant and distressing issue with the unregulated use of image generators.

On the more economic side of things, AI influencers are already replacing human ones. No more influencer farms (a shame, I know), as they might be too costly to operate when you can achieve the same outcome, and more, with a few servers full of powerful GPUs.

While this might not seem as concerning at first glance, a non-person with millions of followers stating fake news or promoting crypto scams and multi-level marketing schemes is just a post away.

While the use of convincing deepfakes in financial scams is still uncommon to date, it will surge drastically as trained models evolve, and it will quickly become more difficult to distinguish what is real from what is not.

2. Automation of Scams (Deep Scams)

Let’s talk more about scams because there are so many variants of them. And so many flavors of how to emotionally manipulate people into giving away their hard-earned money.

Romance scams, fake online shops, property scams, pig butchering scams; the list goes on and on, and every single one of them will be subject to mass automation with AI.

Romance Scams

What looks like realistic photographic evidence, a spontaneous snapshot taken with a smartphone, turns out to be just another generated fake image.

Image created by Reddit user KudzuEye | Source

Tinder Swindler-style romance scams are skyrocketing, and we can expect even more profiles of potential online romances to consist of fully generated images and text messages.

Fake Online Shops

“Have you heard about the new Apple watch with holo-display? Yeah, pretty nice, and only $499 here on appIe.shop”

(Notice the capital "I" in place of the lowercase "l" in "apple.")
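
As a rough, hypothetical illustration (not from the original post), here is a minimal Python sketch of how a defender might flag homoglyph lookalikes like the "appIe.shop" example; the confusables table and brand list are assumptions made up for the sketch.

```python
# Minimal sketch: flag domains that only look like a trusted brand once
# common homoglyphs are normalized. The CONFUSABLES map and BRANDS list
# are illustrative assumptions, not an exhaustive ruleset.
CONFUSABLES = {
    "I": "l",   # capital i vs lowercase L (the "appIe.shop" trick)
    "0": "o",   # zero vs letter o
    "1": "l",   # one vs lowercase L
    "rn": "m",  # r+n vs m
}

BRANDS = {"apple", "paypal", "amazon"}

def normalize(label: str) -> str:
    """Collapse common look-alike characters, then lower-case the label."""
    out = label
    for fake, real in CONFUSABLES.items():
        out = out.replace(fake, real)
    return out.lower()

def looks_like_brand(domain: str) -> bool:
    """True if the first label impersonates a known brand after normalization."""
    label = domain.split(".")[0]
    return normalize(label) in BRANDS and label.lower() not in BRANDS

if __name__ == "__main__":
    for d in ["appIe.shop", "apple.com", "paypa1.net"]:
        print(d, "->", "suspicious" if looks_like_brand(d) else "ok")
```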

AI makes it easy to clone legitimate online shops, create ads for fake products, spam social media and paid advertising services, and rake in profits.

Image composed by author with Dall-E 3.

A 135% increase in fake online shops in October 2023 alone speaks volumes about how effortless it has become to create such sites.

Property (AirBnB, Booking-sites) scams

Looking for a nice getaway? Oh wow, right at the beach, and look, so cheap at the exact dates we’re looking for — let’s book quickly before it is gone!

AirBnB scams like this one and others are on the rise. When you arrive at your destination, the property does not look like the images advertised, or doesn't exist at all. The methods are easy to replicate over and over again, and platforms are not catching up fast enough in identifying and removing those listings. Sometimes they don't even care.

Image composed by author with Dall-E

And it's not all about creating realistic-looking visuals; text-based attacks benefit too.

Let's go back to the age-old "Nigerian prince" or 419 scam. According to research by Ronnie Tokazowski, the new-ish way these scammers make bank is using BEC (Business Email Compromise) against companies or individuals.

For context: this usually involves using a lookalike domain (my-startup[dot]com instead of mystartup[dot]com) and the same email alias as someone high up in the company, asking another person in the company to transfer money or gift cards on their behalf.
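
To make the lookalike-domain trick concrete, here is a minimal, hypothetical sketch of the kind of check a mail gateway could run; the reference domain and distance threshold are assumptions for illustration, not part of the research cited above.

```python
# Minimal sketch: flag sender domains within a small edit distance of the
# company's real domain (e.g. "my-startup.com" vs "mystartup.com").
# The reference domain and threshold are assumptions for illustration.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, real_domain: str = "mystartup.com") -> bool:
    """Close to, but not exactly, the real domain -> likely impersonation."""
    d = edit_distance(sender_domain.lower(), real_domain)
    return 0 < d <= 2

if __name__ == "__main__":
    print(is_lookalike("my-startup.com"))  # True: one inserted hyphen
    print(is_lookalike("mystartup.com"))   # False: the real domain
```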

This is a pretty low-level point of entry, and with the help of Large Language Models (LLMs) it can be scaled to an effectively infinite number of messages that all differ in language, wording, emotion, and urgency.

More sophisticated threat actors could try matching the style and wording of the person being impersonated, and automate the domain registration, email alias creation, social media profile creation, mobile number registration, and so on. Even the initial recon can be done by AI just by feeding it the company's social profiles, "About Us" pages, marketing emails, gathered invoices, and much more, until the model becomes a fully automated scam machine.

The inner workings of these so-called "pig butchering" scams are sinister and deeply disturbing. Reports suggest these operations are not only involved in regular and extreme criminal activities, but are also linked to severe acts of violence, including human sacrifices and armed conflicts.

In conclusion, the staggering efficiency that AI brings to any of these scams, likely multiplying them a thousandfold, only intensifies the situation for our society.

3. Privacy Erosion

George Orwell's seminal dystopian novel "1984" was published more than seven decades ago, marking a milestone in literary history. The popular Netflix series "Black Mirror" was released less than a decade ago. And it really doesn't matter which one we use to compare against the current state of AI surveillance in the world, all under the cloak of public security. We have arrived at the crossroads and have already started to take a step forward. In 2019, the Carnegie Endowment reported a big surge in mass surveillance via AI technology.

Image courtesy of carnegieendowment.org | Source

If you would like to see an interactive world map of AI Global Surveillance (AIGS) data, you can go here.

By 2025, global digital data is projected to expand to over 180 zettabytes (180 billion terabytes). This immense amount of collected data, particularly in the realms of mass surveillance and profiling of internet users and people just walking down the street, needs to be put into context for whatever analysts want to use it for (for good, I assume, of course). AI technologies are set to play a crucial role in efficiently gathering and analyzing vast quantities of data. It is true: a trained AI model is faster, cheaper, and often better than a human at:

  • Data collection (AI web scrapers automatically collect data from multiple online sources, handling unstructured data more effectively than traditional tools. They can also analyze thousands of video feeds at once.)
  • Data mapping (With the right training, data mapping via machine learning can be a very fast process, and relationships between entities become apparent very rapidly.)
  • Data quality (Collection and mapping are just the beginning; identifying errors and inconsistencies in large datasets is crucial. AI tools help by detecting anomalies and estimating missing values accurately and far more efficiently than any human; see the sketch after this list.)
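
As a minimal sketch of the data quality point, assuming a made-up toy dataset and using off-the-shelf scikit-learn components, this is roughly how anomaly detection and missing-value estimation can be automated:

```python
# Minimal sketch of the "data quality" point above: flag anomalous rows and
# fill missing values automatically. The toy dataset is an assumption;
# scikit-learn's IsolationForest and SimpleImputer do the heavy lifting.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.impute import SimpleImputer

# Toy "collected" records: (events per day, average session minutes)
X = np.array([
    [10, 5.0],
    [12, 4.5],
    [11, np.nan],   # missing value to be estimated
    [9,  5.2],
    [400, 90.0],    # obvious outlier
])

# 1. Estimate missing values from the rest of the column
X_filled = SimpleImputer(strategy="mean").fit_transform(X)

# 2. Flag anomalies (-1 = outlier, 1 = normal)
labels = IsolationForest(contamination=0.2, random_state=0).fit_predict(X_filled)

for row, label in zip(X_filled, labels):
    print(row, "outlier" if label == -1 else "normal")
```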

But what are we talking about here? Tracking the movements of terrorists and people who are dangers to society? Or are we just tracking everyone, simply because we can?
Already the latter.

Everyone in society plays their part too. Alongside the issue of external mass surveillance, there is a growing trend of individuals voluntarily sharing extensive details of their lives on the internet. Historically, this information has been accessible to governments, data brokers, and advertising agencies.

Or people with the right capabilities knowing where to look. "Fun" fact: this video is 10 years old.

However, with the advent of AI, the process of profiling, tracking, and predicting the behavior of anyone who actively shares things online has become significantly more efficient and sophisticated. George Orwell might have been astonished to learn that it’s not governments, but consumers themselves who are introducing surveillance devices into their homes. People are willingly purchasing these devices from tech companies, effectively paying to have their privacy compromised.

“Alexa, are you listening?”

Imagine funneling an array of all this data into a trained model that profiles a person based on purchase history, physical movements, social media posts, sleep patterns, intimate behaviors, and more. This level of analysis could potentially reveal insights about a person that they might not even recognize in themselves. Now amplify this process to encompass a mountain of data on countless individuals, monitored through AI surveillance, and you could map out entire societies. What was once possible only for governments or Fortune 500 companies becomes available to the average user.

The concept of the "Glass Human," an individual whose life is completely transparent, accessible, and easily analyzed in great detail, is closer to reality than ever before.

4. Automation of Malware and Cyberattacks

Every malware creator's wet dream is self-replicating, auto-infecting, undetectable malicious code. While the automation part came true a long time ago, undetectability is now a reality as well.

It's not difficult to imagine that such technology may already be deployed at scale by ransomware groups and Advanced Persistent Threats (APTs), with evasion techniques that make recognizing and detecting harmful code an impossible task for traditional defense methods. In other words, for a few months now we have had code that replicates itself into a different form, with different evasion techniques, on every machine it infects.

Personally, I believe that by the time I finish writing this article, the above will already be deployed and working in the wild.

In the realm of cyberattacks, sophistication is increasing rapidly with the help of LLMs, and the possibility of launching multiple attack styles all at once to overwhelm defense teams is no longer a far-fetched scenario.

Instead of tediously hunting for a single vulnerability to gain access to a company's application, or chaining multiple together, threat actors can launch simultaneous attacks to see which one works best and leverage the human factor to gain access or control.

Picture a multifaceted cyber-attack scenario:
A company’s marketing website is hit by a Distributed Denial of Service (DDoS) attack. Simultaneously, engineers working to fix this issue face Business Email Compromise (BEC) attacks. While they grapple with these challenges, other employees are targeted with phishing attempts through LinkedIn. Adding to the chaos, these employees also receive a barrage of Multi-Factor Authentication (MFA) spamming requests. These requests are an attempt to exploit their credentials, which may have been compromised in data breaches or obtained through credential stuffing attacks.

All these attacks can be coordinated simply by inputting endpoints, breach data, and contact profiles into a trained AI model.

This strategy becomes particularly alarming when it targets vital software supply chains, essential companies, or critical infrastructure.
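
Flipping to the defensive side for a moment: here is a hypothetical, minimal sketch of correlating alerts from different channels, so that a coordinated burst like the scenario above stands out. The event format, window size, and threshold are assumptions made for illustration.

```python
# Minimal sketch: treat alerts from different channels that land inside a
# short window as one possible coordinated campaign. Event format, window
# size, and the "3 distinct channels" threshold are illustrative assumptions.
from datetime import datetime, timedelta

events = [
    ("ddos",      datetime(2024, 1, 10, 9, 0)),
    ("bec_email", datetime(2024, 1, 10, 9, 4)),
    ("phishing",  datetime(2024, 1, 10, 9, 7)),
    ("mfa_spam",  datetime(2024, 1, 10, 9, 9)),
    ("phishing",  datetime(2024, 1, 12, 15, 0)),  # unrelated, days later
]

WINDOW = timedelta(minutes=15)
MIN_DISTINCT_CHANNELS = 3

def coordinated_bursts(events):
    """Yield window start times where enough distinct alert channels overlap."""
    events = sorted(events, key=lambda e: e[1])
    for i, (_, start) in enumerate(events):
        channels = {c for c, t in events[i:] if t - start <= WINDOW}
        if len(channels) >= MIN_DISTINCT_CHANNELS:
            yield start, channels

for start, channels in coordinated_bursts(events):
    print(f"possible coordinated attack starting {start}: {sorted(channels)}")
```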

5. Less Secure Apps

Speaking from years of experience, I can attest to a common trait among us developers: a tendency toward seeking efficiency, often perceived as laziness *coughs*. It comes as no surprise that the introduction of AI and copilot apps made tedious tasks a thing of the past. However, over-reliance on these tools without fully understanding their outputs, or how they fit into the broader context and data flow, can lead to serious issues.

A recent study revealed that users of AI assistants tended to produce less secure code than those who didn't use them. Notably, those with access to AI tools were also more prone to overestimating the security of their code, i.e., blindly copying and pasting everything into their codebase.
The study also highlighted a different but crucial finding: users who were more skeptical of AI and actively engaged with it, by modifying their prompts or adjusting settings, generally produced code with fewer security vulnerabilities. This underscores the importance of a balanced approach to using AI in coding, blending reliance with critical engagement. At the same time, tools such as GitHub Copilot and Facebook InCoder tend to instill a false sense of confidence among developers regarding the robustness of their code.
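
To make the study's finding concrete, here is a hypothetical example of the pattern it describes (not code from the study itself): a query built with string interpolation, the kind an assistant might suggest and a developer might blindly accept, next to the parameterized version a skeptical reviewer would insist on.

```python
# Hypothetical illustration of the pattern the study describes, not code from
# the study itself: the first query trusts user input, the second does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name: str):
    # Typical "looks fine, ship it" suggestion: string interpolation makes
    # this query injectable (try name = "' OR '1'='1").
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name: str):
    # Parameterized query: the driver escapes the value, no injection.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("' OR '1'='1"))  # returns every row
print(find_user_secure("' OR '1'='1"))    # returns nothing
```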

But what does that mean in an economy where "move fast and break things" is still a common mantra among startups and established companies alike?
Exactly: we will end up with less secure products riddled with security issues and privacy flaws, and an overall lack of care for users' data.

In Summary

… we need to balance the scales!

We are at a crossroads in technological development, where the creations we bring to life must not only understand humanity but also align with its long-term interests and ethics (we also need better ethics to begin with). The stakes are incredibly high, as the potential benefits of these advancements could be the greatest we’ve ever seen, yet the risks are equally monumental.

Key to navigating this minefield is the preservation of privacy, which, with the right choices made today, can transition from being a historical concept to a fundamental, technologically and lawfully enshrined right.

To achieve this, there is an urgent need for global collaboration among governments as well as economic and technological drivers. We need to form a comprehensive and inclusive council dedicated to overseeing AI ethics, laws, and regulations. This council must be diverse, representing various societal and cultural backgrounds, and must have no financial interest, to ensure that the development and use of AI is not dominated by any single culture or economic power, thus maintaining a balanced and equitable approach to technological advancement.

Do you have thoughts or questions about this article? Reach out to Matt!

Defining and Prioritizing AI Vulnerabilities for Security Testing
https://www.bugcrowd.com/blog/defining-and-prioritizing-ai-vulnerabilities-for-security-testing/ | December 19, 2023

At Bugcrowd, we believe that the human ingenuity unleashed by crowdsourced security is the best tool available for meeting AI security goals in a scalable, impactful, and economically sensitive way.

Just a few weeks ago, we announced incremental updates to the Vulnerability Rating Taxonomy (VRT), an ongoing effort to define and prioritize vulnerabilities in a standard, community-driven way so that hackers and customers alike can participate in the process. Since 2016, the Bugcrowd Platform has incorporated the VRT alongside a CVSS conversion tool. This integration has enabled us to validate and triage hundreds of thousands of submissions from the crowd at scale. Our platform’s rigorous validation process ensures that these submissions are consistently recognized as valid vulnerabilities by program owners.

The VRT is designed to constantly evolve in order to mirror the current threat environment, and thus helps bug bounty program owners create economic incentive models that engage and motivate hackers to spend their valuable time looking for the right things with the expectation of a fair reward. Now, with the mainstreaming of generative AI and the appearance of government regulation including Executive Order 14110 and the EU Artificial Intelligence Act, it's time for the VRT to take another evolutionary step to account for vulns in AI systems, particularly in Large Language Models (LLMs).

With these AI security-related updates to the VRT (and more to come) and our experience working with AI leaders like OpenAI, Anthropic, Google, the U.S. Department of Defense’s Chief Digital and Artificial Intelligence Office, and the Office of the National Cyber Director, the Bugcrowd Platform is positioned as the leading option for meeting that goal.

Bringing AI security into the ecosystem

Although AI systems can have well-known vulnerabilities that Bugcrowd sees in common web applications (such as IDOR and Broken Access Control vulns), AI technologies like LLMs also introduce unprecedented security challenges that our industry is only beginning to understand and document, just as we had to contend with new classes of vulnerabilities introduced by mobile technology, cloud computing, and APIs.

For that reason, our announcement today of VRT version 1.12 is a milestone in the crowdsourced cybersecurity industry: For the first time, customers and hackers will have a shared understanding of how the most likely emerging LLM-related vulnerabilities are defined and should be prioritized for reward and remediation. With this information, hackers can focus on hunting for specific vulns and creating targeted POCs, and program owners with LLM-related assets can design scope and rewards that produce the best outcomes.

In the interest of alignment with industry-standard definitions, the updates below overlap with the OWASP Top 10 for Large Language Model Applications. Special thanks to Ads Dawson, a senior security engineer for LLM platform provider Cohere and a core team member of the OWASP LLM Top 10 project, for inspiring these updates and his contributions to VRT v1.12!

What’s inside VRT v1.12?

Update to existing category:

  • Varies: Application Level DoS > Excessive Resource Consumption – Injection (Prompt)
    In the context of LLMs, application-level DoS attacks take the form of engineered prompts that can crash the client or otherwise make it unusable by others. When the LLM is integrated with other systems, the damage can also spread beyond the application. (A minimal guardrail sketch follows below.)
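
As a minimal guardrail sketch for this category (an illustration, not part of the VRT itself), capping prompt size and per-client request rate before a prompt ever reaches the model blunts the most obvious resource-consumption attacks; the limits and the call_model stand-in are assumptions.

```python
# Minimal guardrail sketch for the DoS case above: cap prompt size and
# per-client request rate before a prompt ever reaches the model.
# Limits, client IDs, and call_model() are illustrative assumptions.
import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 4_000
MAX_REQUESTS_PER_MINUTE = 10
_history = defaultdict(deque)  # client_id -> recent request timestamps

def guarded_completion(client_id: str, prompt: str, call_model):
    """Reject oversized prompts and bursty clients before invoking the LLM."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt too large")

    now = time.monotonic()
    recent = _history[client_id]
    while recent and now - recent[0] > 60:
        recent.popleft()
    if len(recent) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded")
    recent.append(now)

    return call_model(prompt)

# Example usage with a stand-in for the real model call:
print(guarded_completion("client-1", "Summarize this ticket.", lambda p: f"[model output for: {p}]"))
```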

New “AI Application Security” category and “Large Language Model (LLM) Security” subcategory:

  • P1: AI Application Security > Large Language Model (LLM) Security > Prompt Injection
    In “Prompt Injection”, an attacker manipulates the prompt in a way that causes the LLM to behave maliciously – such as by jailbreaking via “Do Anything Now” (DAN), developer mode, and roleplaying.
  • P1: AI Application Security > Large Language Model (LLM) Security > LLM Output Handling
    As LLMs become more common, there is a risk that LLM output will be accepted unconditionally by other applications. This can introduce vulnerabilities that invite Cross-Site Scripting (XSS) attacks, privilege escalation, and more (see the output-handling sketch after this list).
  • P1: AI Application Security > Large Language Model (LLM) Security > Training Data Poisoning
    In Data (or Model) Poisoning attacks, a threat actor gets their input or prompts to influence the model for nefarious purposes. Model Skewing, in which the attacker attempts to pollute training data to confuse the model about what is "good" or "bad," is among the most common Data Poisoning techniques.
  • P2: AI Application Security > Large Language Model (LLM) Security > Excessive Agency/Permission Manipulation
    Excessive Agency flaws are ones in which an LLM has more functionality, permissions, or autonomy than is intended, enabling an attacker to manipulate it to reveal sensitive data (including its own source code) or do other unexpected tasks. When the model is integrated with other systems, those flaws can have particularly dangerous consequences such as privilege escalation. (Note: Data leakage, which we expect to become a common LLM vulnerability in itself, is accounted for in the VRT’s existing “Disclosure of Secrets” category.)
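
To illustrate the LLM Output Handling item above with a minimal, hypothetical sketch (the model_reply string is made up), the core defense is simply never to drop raw model output into HTML:

```python
# Minimal sketch of the "LLM Output Handling" item above: never insert raw
# model output into HTML. The fake model_reply string is an assumption.
import html

model_reply = 'Here is your summary <img src=x onerror="alert(document.cookie)">'

def render_comment(text: str) -> str:
    """Escape model output before it reaches the browser."""
    return f"<div class='llm-reply'>{html.escape(text)}</div>"

print(render_comment(model_reply))
# The injected <img> tag is rendered as inert text instead of executing.
```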

According to Ads Dawson, “The main intention of this VRT update is to capture the correlation between LLM/AI-based vulnerabilities inline and application-based taxonomies – because one associated risk within each realm can trigger a downstream exploit, and vice versa. This not only opens up a new form of offensive security research and red teaming to program participants, but helps companies increase their scope to include these additional attack vectors inline with security researcher testing, receiving submissions, and introducing mitigations to secure their applications. I am looking forward to seeing how this VRT release will influence researchers and companies looking to fortify their defenses against these newly introduced attack concepts.”

Contributions needed!

This update represents our first step in recognizing these attack vectors within the VRT, but it is far from the last; the VRT and these categories will evolve over time as hackers, Bugcrowd application security engineers, and customers actively participate in the process. If you would like to contribute to the VRT, Issues and Pull Requests are most welcome!
