Thought Leadership Archives | Bugcrowd
https://www.bugcrowd.com/blog/category/thought-leadership/

What is Offensive Security?
https://www.bugcrowd.com/blog/what-is-offensive-security/
Thu, 25 Jan 2024 14:00:47 +0000

The post What is Offensive Security? appeared first on Bugcrowd.

When you hear about offensive work in security, it may conjure up images of malware, malicious actors, and mischief. But offensive security is also an important component in protecting your digital assets by proactively putting your security controls to the test. In a world of rapidly evolving landscapes and threats, offensive security provides a practical way to test new concepts and ideas in a safe setting, gathering data on vulnerabilities and weaknesses that can improve your defenses and demonstrate your security posture. 

Offensive Security Defined 

In simple terms, offensive security involves testing an organization’s defenses by conducting simulated attacks to identify any weaknesses that can be exploited. The goal of offensive security is to discover vulnerabilities before malicious actors can exploit them, and to make necessary adjustments to improve security. It’s a proactive approach to cybersecurity that complements defensive measures like firewalls, antivirus software, and intrusion detection systems.

Offensive Security vs Defensive Security 

Defensive security is a reactive approach that focuses on securing an organization against potential threats. It often relies on securing the perimeter and applying established best practice in tactical areas like security hygiene, data handling, or access controls, as well as strategic considerations such as defense in depth or zero trust.

Offensive security is a proactive approach that turns theory into practice. It means looking at security as a problem to be solved, rather than an abstraction. The goal of offensive security is to actively identify and fix vulnerabilities before they can be exploited, often by applying creativity to an organization’s specific assets, practices, and subjective posture.

Both approaches are essential for a comprehensive cybersecurity strategy. However, the main difference is that defensive security is focused on preventing attacks, while offensive security is focused on finding and fixing vulnerabilities.

Types of Offensive Security

There are a number of approaches to offensive security. One common theme: while all of them are supported by automation, human hackers play a crucial role.

  • Penetration Testing—Penetration testing, or pen testing, simulates an attack on an organization’s systems and networks, where experts find vulnerabilities by testing, often in line with defined methodologies. You also have the option of more sophisticated pen testing delivered as penetration testing as a service (PTaaS).
  • Red Teaming—Red teaming began as a military exercise in the 1960s, where a group representing the Soviet Union would act as the “red team” in simulations against the US “blue team.” Similar to pen testing, red teams act as attackers to discover and exploit security vulnerabilities and weaknesses. Where pen testing typically takes place for a set time and draws more from methodologies and meeting compliance standards, red teaming is more flexible and tests processes and people as well as technology.
  • Blue Teaming—Blue teams are the counterpoints to red teams, seeking to foil attacks during security exercises by playing defense. They focus on modeling threats and preventing incidents, as well as responding to incidents and countering attacks by the red team.
  • Purple Teaming—Purple teams combine red teaming and blue teaming by breaking down silos and emphasizing communication while addressing security challenges. They focus on enabling collaboration between the two groups and synthesizing their skills and experiences. For more on applying the latest team colors to cybersecurity, take a look at our blog on the topic.
  • Social Engineering—Social engineering involves manipulating individuals or groups within an organization to gain access to sensitive information. It can target the whole range of human emotions and biases, and tactics might include sending texts or emails claiming to be from a trustworthy entity to acquire sensitive data (phishing), or even using recordings of crying babies to emotionally manipulate employees into sharing sensitive data.
  • Managed Bug Bounty Programs—Bug bounty programs are initiatives that incentivize hackers to test digital assets and find vulnerabilities in exchange for financial rewards. By making these rewards proportionate to the criticality of the bugs submitted, they offer clear ROI to buyers while tapping into hackers’ offensive impulses in order to improve security.
  • Vulnerability Scanning and Management—Vulnerability scanning involves using automated tools to scan an organization’s systems and networks to find vulnerabilities. While these scans are a useful component in the offensive security toolkit, they need further expert input in order to interpret results and ultimately resolve vulnerabilities.
  • Attack Surface Management—Attack Surface Management (ASM) is the process of defining and cataloging an organization’s entire IT footprint, then rapidly identifying and prioritizing risks deriving from these assets. Because most organizations’ asset footprints are constantly growing, shadow IT often grows with them; ASM pairs advanced scanning software with recon experts to find every asset and deal with its associated risks.

 

What are the Benefits of Offensive Security? 

  • Separates theory from practice

Traditional security focuses on best practice and builds defenses based on presumptions and expectations of how malicious actors would behave. Investing in offensive security is an opportunity to put these theories to the test and see how defensive measures hold up against active tests. It’s a way of stress testing any assumptions baked into your security, and potentially finding blind spots or gaps.

  • Draws from established methodologies as well as the latest techniques

Offensive security covers the application of tried and tested methodologies, particularly in pen testing, as well as tapping into the latest innovations from emerging technologies and associated techniques. Bringing this range of knowledge to bear amounts to a comprehensive test of your security that can provide clear recommendations on handling vulnerabilities, as well as a confident assessment of your security posture.

  • Ensures compliance in industries that require testing

Certain industries, such as financial services and defense, have high standards of regulation, which includes security. Companies operating in these sectors often need to demonstrate the use of offensive security such as pen tests to meet these requirements and assure regulators that security standards are being met, avoiding associated penalties.

  • Provides rapid feedback on security posture and ROI

Security can be hard to define and benchmark, with Knightian uncertainty common and some risks difficult to quantify. Offensive security can offer quick feedback through testing, as well as providing clear ROI from spending on “pay for results” investments such as bug bounty programs.

  • Creates a strong security brand by publicly following best practice

Some great minds have considered how to define security: it is a process rather than a destination, an emergent property rather than a characteristic. But we also believe that high-quality security means engaging with the security community, which includes these thinkers as well as hackers, testers, developers, and more. Investing in offensive security is a way to engage directly with members of this community, and also a way to build a brand for taking security seriously. Like any brand, this strengthens your relationships with stakeholders, including regulators, employees, prospective hires, and customers.

Offensive Security Frameworks 

Offensive security frameworks are methodologies that security professionals use to understand the tactics, techniques, and procedures (TTPs) of cyber adversaries. These frameworks provide a structured approach to identifying vulnerabilities, simulating real-world attacks, and developing strategies to mitigate potential threats. All of these frameworks provide valuable insights into attacker behavior, and they should be used together for the most comprehensive understanding of offensive security.

Three of the most widely recognized offensive security frameworks are the MITRE ATT&CK, Lockheed Martin Cyber Kill Chain, and the Mandiant Attack Lifecycle.

  • MITRE ATT&CK—A globally accessible knowledge base of adversary tactics and techniques based on real-world observations. This framework describes the actions that an attacker may take after gaining access to a system or network, and is divided into a series of matrices focusing on different environments. MITRE regularly updates the framework with new findings from cybersecurity research.
  • Mandiant Attack Lifecycle—Also known as the Cyber Attack Lifecycle, this framework lays out the stages of an attack from the adversary’s perspective. By understanding each stage, defenders can identify weak points in their security posture and implement the necessary controls to prevent or disrupt attacks.
  • Cyber Kill Chain—Developed by Lockheed Martin, this framework also provides a seven-part blueprint for the stages of an attack. Understanding the sequence of events in an attack helps organizations implement appropriate countermeasures at each stage and ensures a comprehensive approach to security.

Offensive Security Tools

There are too many tools to cover in one post, but the below list includes some of the trusty hacker aids that are commonly used. It’s worth noting that many are open source, and ingenuity is more important to hackers than proprietary investments. 

  • Sliver—Somewhere between a tool and a framework, Sliver is a highly configurable, open-source approach for post-exploitation use. Designed by Bishop Fox, it offers red teams capabilities such as defense evasion and privilege escalation.
  • Metasploit—This robust open-source platform is beloved by hackers and used for developing, testing and executing exploit code against remote machines. Its modular architecture allows users to create custom modules, and its influence on offensive security even led one blogger to coin HD Moore’s Law that “Casual Attacker power grows at the rate of Metasploit”.
  • Burp Suite—A comprehensive testing tool that enables testers to identify vulnerabilities in web applications. Burp Suite has a wide range of features and is considered to have a user-friendly interface.
  • Nmap—This open-source utility for network discovery is used for port scanning and network exploration. As well as having a flexible feature set that serves network administrators and red teams alike, it is also Hollywood’s favorite security tool, with cameos in The Matrix Reloaded, Ocean’s 8, and Die Hard 4.
  • Sn1per—An open-source tool for automating vulnerability scanning and penetration testing. This can automate fingerprinting, Google hacking and even brute-forcing.
  • Cobalt Strike—Emulates tactics and techniques associated with quiet, long-term threat actors embedded in a network. This provides a range of capabilities including reconnaissance, delivering payloads and establishing command and control channels.
  • ZAP—This open-source web application scanner was developed by the Open Web Application Security Project (OWASP) and is the world’s most widely-used security scanner. It acts as a proxy, capturing data in motion and determining how the application responds to possibly malicious requests.
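To make the scanning concept concrete: at its simplest, a port scanner attempts a TCP connection to each port and records which ones accept. The sketch below is a toy Python illustration of that basic “connect scan” idea, not how Nmap itself works (Nmap adds raw SYN scans, timing templates, service fingerprinting, and much more), and it should only ever be pointed at hosts you are authorized to test:

```python
import socket

def tcp_connect_scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`.

    This is the naive "connect" scan: one full TCP handshake per port.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `tcp_connect_scan("127.0.0.1", [22, 80, 443])` would report which of those ports are listening locally. Real tools parallelize this and avoid completing the handshake to stay quieter.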

TL;DR–Offensive Security

It is hard to know how good a new car is until you have taken it for a drive. Offensive security is the practical, hands-on approach to ensuring that the steps you are taking to protect your organization are paying off, and to finding any gaps or oversights across your estate. It tests your assets, your tools, your processes, and even your people. While it is necessary for compliance in some sectors, the main benefit is practical: knowing how your defenses stack up against attackers.

Investing in offensive security is a way of getting skin in the game and having an accurate assessment of security posture. The best way to start is by investing in crowdsourced security testing: this allows you to access offensive security while only paying for results. To see how Bugcrowd can help, take a 5-minute tour of the platform. 

Inside the Platform: Bugcrowd’s Vulnerability Trends Report
https://www.bugcrowd.com/blog/inside-the-platform-bugcrowds-vulnerability-trends-report/
Wed, 24 Jan 2024 13:50:53 +0000

The post Inside the Platform: Bugcrowd’s Vulnerability Trends Report appeared first on Bugcrowd.

We’re three weeks into January, which means we’ve hit the time of the year when New Year’s resolutions have inevitably been forgotten, Dry January has been abandoned, and we’re all just trying our best to get through the rest of winter in one piece. But what if we told you that we have a surprise that might make your January a little less dreary and might even help you achieve your New Year’s cybersecurity resolutions? 

We’re absolutely ecstatic to release our flagship annual report: Inside the Platform: Bugcrowd’s Vulnerability Trends Report. You may remember this piece by its previous name: Priority One.

What is Inside the Platform?

Inside the Platform is a magazine-style piece that features an analysis of all the crowdsourced security vulnerability submissions handled through the Bugcrowd Platform in 2023. The report leverages these data to offer trends and insights for CISOs and security leaders. 

Specifically, the report looks at vulnerability submission data from every possible angle to attempt to predict the future of cybersecurity. In writing this report, we examined overall submissions, critical submissions, payout data, notable targets, VRT categories, and public vs. private programs. We also broke down the data into six key industry categories. Using this analysis, we forecasted trends and made recommendations on what levers to pull in a crowdsourced security program to achieve success. 

The report also includes qualitative interviews with Bugcrowd customers, thought pieces on the value of an open scope program and how different hacker roles contribute to crowdsourced security, social media spotlights, legal work being done to make hacking safer, and more. 

Key takeaways from Inside the Platform

The 12 articles that make up Inside the Platform are jam-packed with data, but here are five highlights:

  1. Higher Rewards—The most successful programs were those that offered higher rewards (e.g., $10,000 or more for P1 vulnerabilities).
  2. Open Scope—Programs with open scopes saw 10x more P1 vulnerability submissions than those with limited scopes. 
  3. Vulnerability Submissions by Industry—The government sector experienced a 151% increase in vulnerability submissions and a 58% increase in the number of P1s rewarded in 2023 compared to 2022. 
  4. P1 Payouts by Industry—The financial services industry and government sector offered the highest median payouts for P1 vulnerabilities ($10,000 and $5,000, respectively). 
  5. AI—A new AI-related category was added to Bugcrowd’s Vulnerability Rating Taxonomy (VRT). This addition reflects the profound influence that AI has had and will have on the threat environment and the ways that hackers, customers, and the Bugcrowd triage team view certain vulnerability classes and their relative impacts. 

Where to find more information

The report is live! Keep an eye on our social media for breakdowns of the report from experts at Bugcrowd, plus a webinar later next month. 

 

The Most Significant AI-related Risks in 2024
https://www.bugcrowd.com/blog/the-most-significant-ai-related-risks-in-2024/
Wed, 10 Jan 2024 13:47:03 +0000

The post The Most Significant AI-related Risks in 2024 appeared first on Bugcrowd.

This blog was originally posted on Medium.com by Matt Held, Technical Customer Success Manager at Bugcrowd. 

AI changes the threat landscape, by a lot.

In my daily life as a Cybersecurity Solutions Architect, I see countless vulnerabilities coming into bug bounties, vulnerability disclosure programs, pen tests, and every other form of intake for security vulnerabilities. AI has made it easier for hackers to find and report these things, but these are not the risks that break the major bones of our very digital society. It’s (mostly) not about vulnerabilities in applications or services; it’s about losing the human factor, losing trust in digital media, and potentially losing any privacy at all. Here is my forecast of the most prevalent and concerning challenges arising from artificial intelligence in 2024, and it’s not all about cybersecurity.

1. Deepfakes and their Scams

Many of us have seen the Tom Hanks deepfake where his AI-clone promotes a dental health plan. Or the TikTok video falsely presenting MrBeast offering new iPhones for only $2.


Deepfakes are now as easy to produce as ordering a logo online or getting takeout. Plenty of sellers on popular gig-economy platforms will produce convincing deepfakes of anyone (including yourself, or any other person whose picture or voice you have) for a few bucks.

Armed with that, a threat actor can make any person, including celebrities, appear to do anything. This ranges from promoting fake crypto sites to altering history books. That is extremely critical, especially at a time when we are rewriting history books for better accuracy, and not only from the winning party’s point of view.

Exhibit A: the image below took one minute to produce.

Image composed by author with Midjourney.

Deepfake usage more than doubled in 2023 compared to 2022. And according to a 2023 report by Onfido, a leading ID verification company, there was a staggering 3,000% increase in deepfake fraud attempts this year as well.

With image generators and refined models, one can already create realistic verification images for any type of online verification, all while keeping the original identifiers of that person intact (looks, biometrics, lighting, and more) and without looking generic.

Image created by Reddit user _harsh_ with Stable Diffusion | Source

Unsurprisingly, the adult entertainment industry has rapidly embraced these emerging technologies, leading to a troubling surge in the creation of non-consensual explicit content. An AI will create whatever you feed it, much to the suffering of people who never consented to having their face and body turned into those of an adult film actor or actress. Tragically, this misuse extends to the creation of illegal and morally reprehensible material involving minors (CSAM). The ease with which AI can be used for such purposes highlights a significant and distressing problem with the unregulated use of image generators.

On the more economic side of things, AI influencers are already replacing human ones. No more influencer farms (a shame, I know), as they might be too costly to operate when you can achieve the same outcome, and more, with just some servers full of powerful GPUs.

While this might not seem as concerning at first glance, a non-person with millions of followers spreading fake news or promoting crypto scams and multi-level marketing schemes is just a post away.

While the use of convincing deepfakes in financial scams is still uncommon to date, it will surge drastically as trained models rapidly evolve, and it will very quickly become harder to distinguish what is real from what is not.

2. Automation of Scams (Deep Scams)

Let’s talk more about scams because there are so many variants of them. And so many flavors of how to emotionally manipulate people into giving away their hard-earned money.

Romance scams, fake online shops, property scams, pig-butchering scams: the list goes on and on, and every single one of them will be subject to mass automation with AI.

Romance Scams

What looks like photographic evidence of a spontaneous smartphone snapshot turns out to be just another fake generated image.

Image created by Reddit user KudzuEye | Source

Tinder Swindler-style romance scams are skyrocketing, and we can expect even more profiles of potential online romances to be fully generated images and text messages.

Fake Online Shops

“Have you heard about the new Apple watch with holo-display? Yeah, pretty nice, and only $499 here on appIe.shop”

(notice the capital “I” in place of the “l” in “apple”)

AI makes it easy to clone legitimate online shops, create ads for fake products, spam social media and paid advertising services, and rake in profits.

Image composed by author with Dall-E 3.

A 135% increase in fake online shops in October 2023 alone speaks volumes about how effortlessly such sites can be created.

Property (AirBnB, Booking-sites) scams

Looking for a nice getaway? Oh wow, right at the beach, and look, so cheap at the exact dates we’re looking for — let’s book quickly before it is gone!

AirBnB scams like this one and others are on the rise. When you arrive at your destination, the property does not look like the advertised images, or it doesn’t exist at all. The methods are easy to replicate over and over again, and platforms are not catching up fast enough in identifying and removing those listings. Sometimes they don’t even seem to care.

Image composed by author with Dall-E

And it’s not all about creating realistic-looking visuals; text-based attacks are scaling too.

Let’s go back to the age-old “Nigerian prince” or 419 scam. According to research done by Ronnie Tokazowski, the newish way these scammers make bank is using Business Email Compromise (BEC) against companies or individuals.

For context: this usually involves using a lookalike domain (my-startup[dot]com instead of mystartup[dot]com) and the same email alias as someone high up in the company asking another person in the company to transfer money or gift cards on their behalf.

This is a pretty low-level point of entry, and with the help of large language models (LLMs) it can be scaled to a near-infinite number of messages that all differ in language, wording, emotion, and urgency.
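On the defensive side, one cheap heuristic against the lookalike-domain trick is to collapse visually confusable characters into a canonical “skeleton” before comparing domains. The Python sketch below is a hypothetical, heavily simplified illustration; a real implementation would use a full confusables table (such as Unicode TS #39) and handle punycode:

```python
# Illustrative subset of a confusables map; real tables are far larger.
CONFUSABLES = str.maketrans({
    "I": "l",  # capital i looks like lowercase L ("appIe" vs "apple")
    "1": "l",  # digit one looks like lowercase L
    "0": "o",  # zero looks like letter o
    "|": "l",
})

def skeleton(domain: str) -> str:
    """Reduce a domain's first label to a confusable-collapsed form."""
    label = domain.split(".")[0]          # ignore the TLD in this sketch
    label = label.translate(CONFUSABLES)  # collapse single-glyph confusables
    label = label.replace("rn", "m")      # "rn" reads as "m" in many fonts
    return label.lower()

def is_lookalike(candidate: str, trusted: str) -> bool:
    """True if candidate imitates trusted without being the same label."""
    return (skeleton(candidate) == skeleton(trusted)
            and candidate.split(".")[0] != trusted.split(".")[0])
```

With this, `is_lookalike("appIe.shop", "apple.com")` is flagged while the brand’s own domain is not. It is a heuristic, not a guarantee: attackers have many more tricks (homoglyph Unicode scripts, extra hyphens, subdomain abuse) than a small map can cover.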

More sophisticated threat actors could match the style and wording of the person they are impersonating, and automate domain registration, email alias creation, social media profile creation, mobile number registration, and so on. Even the initial recon can be done by AI, just by feeding it the company’s social profiles, “About Us” pages, marketing emails, gathered invoices, and much more, until the model becomes a fully automated scam machine.

The inner workings of these so-called “pig butchering” scams are sinister and deeply disturbing. Reports suggest these operations are involved not only in regular and extreme criminal activity but are also linked to severe acts of violence, including human sacrifice and armed conflict.

In conclusion, the staggering efficiency that AI brings to any of these scams, probably multiplying them a thousandfold, only intensifies the situation for our society.

3. Privacy Erosion

George Orwell’s seminal dystopian novel “1984” was published more than seven decades ago, marking a milestone in literary history. The popular Netflix series “Black Mirror” was released less than a decade ago. And it really doesn’t matter which one we use to compare with the current state of AI surveillance in the world, all under the cloak of public security. We have arrived at the crossroads and have already started to take a step forward. In 2019, the Carnegie Endowment reported a big surge in mass surveillance via AI technology.

Image courtesy of carnegieendowment.org | Source

If you’d like to see an interactive world map of AI Global Surveillance (AIGS) data, you can go here.

By 2025, digital data is projected to expand to over 180 zettabytes (180 billion terabytes). This immense amount of collected data, particularly in the realms of mass surveillance and the profiling of internet users and people just walking down the street, needs to be put into context for whatever analysts want to use it for (for good, I assume, of course). AI technologies are set to play a crucial role in efficiently gathering and analyzing vast quantities of data. It is true that a trained AI model is faster, cheaper, and often better than a human at:

  • Data collection (AI web scrapers automatically collect data from multiple online sources, handling unstructured data more effectively than traditional tools. It can also take thousands of video feeds to analyze all at once.)
  • Data mapping (With the right training data, mapping via machine learning can be very fast, and relationships between entities become apparent rapidly)
  • Data quality (Data collection and mapping are just the beginning; identifying errors and inconsistencies in large datasets is crucial. AI tools help by detecting anomalies and estimating missing values accurately, and far more efficiently than any human)

But what are we talking about here? Tracking the movements of terrorists and people who are dangers to society? Or are we just tracking everyone, because we can?
Already the latter.

Everyone in society plays their part too. Alongside the issue of external mass surveillance, there is a growing trend of individuals voluntarily sharing extensive details of their lives on the internet. Historically, this information has been accessible to governments, data brokers, and advertising agencies.

Or to people with the right capabilities who know where to look. “Fun” fact: this video is 10 years old.

However, with the advent of AI, the process of profiling, tracking, and predicting the behavior of anyone who actively shares things online has become significantly more efficient and sophisticated. George Orwell might have been astonished to learn that it’s not governments, but consumers themselves who are introducing surveillance devices into their homes. People are willingly purchasing these devices from tech companies, effectively paying to have their privacy compromised.

“Alexa, are you listening?”

Imagine funneling an array of all this data into a trained model that profiles a person based on purchase history, physical movements, social media posts, sleep patterns, intimate behaviors, and more. This level of analysis could potentially reveal insights about a person that they might not even recognize in themselves. Now amplify this process to encompass a mountain of data on countless individuals, monitored through AI surveillance, and you could map out entire societies. What was once possible only for governments or Fortune 500 companies becomes available to the average user.

The concept of the “Glass Human,” an individual whose life is completely transparent, visible, and easily analyzed in great detail, is closer to reality than ever before.

4. Automation of Malware and Cyberattacks

Every malware creator’s wet dream is the creation of self-replicating, auto-infecting, undetectable malicious code. While the automation part came true a long time ago, the latter is now a reality as well.

It’s not difficult to imagine that such technology may already be deployed at scale by ransomware groups and Advanced Persistent Threats (APTs), with evasion techniques that make the recognition and detection of harmful code an impossible task for traditional defense methods. In other words, for a few months now we have had code that replicates itself in a different form, with different evasion techniques, on every machine it infects.

Personally, I believe that by the time I finish writing this article, the above will already be deployed and working in the wild.

In the realm of cyberattacks, sophistication is increasing rapidly with the help of LLMs, and the possibility of launching multiple attack styles all at once to overwhelm defense teams is no longer a far-fetched scenario.

Instead of tediously trying to find a single vulnerability to gain access to a company’s application, or chaining multiple together, threat actors can launch simultaneous attacks to see which one works best and leverage the human factor to gain access or control.

Picture a multifaceted cyber-attack scenario:
A company’s marketing website is hit by a Distributed Denial of Service (DDoS) attack. Simultaneously, engineers working to fix this issue face Business Email Compromise (BEC) attacks. While they grapple with these challenges, other employees are targeted with phishing attempts through LinkedIn. Adding to the chaos, these employees also receive a barrage of Multi-Factor Authentication (MFA) spamming requests. These requests are an attempt to exploit their credentials, which may have been compromised in data breaches or obtained through credential stuffing attacks.

All these attacks can be coordinated simply by inputting endpoints, breach-data and contact profiles into a trained AI model.

This strategy becomes particularly alarming when it targets vital software supply chains, essential companies, or critical infrastructure.

5. Less Secure Apps

Speaking from years of experience, I can attest to a common trait among us developers: a tendency to seek efficiency, often perceived as laziness *coughs*. It comes as no surprise that the introduction of AI and copilot apps made tedious tasks a thing of the past. However, over-reliance on these tools without fully understanding their outputs, or how they fit into the broader context and data flow, can lead to serious issues.

A recent study revealed that users utilizing AI assistants tended to produce less secure code than those who didn’t. Notably, those with access to AI tools were also more prone to overestimating the security of their code, i.e., blindly copy-pasting everything into their codebase.
The study also highlighted a different but crucial finding: users who were more skeptical of AI and actively engaged with it, by modifying their prompts or adjusting settings, generally produced code with fewer security vulnerabilities. This underscores the importance of a balanced approach to using AI in coding, blending reliance with critical engagement. At the same time, tools such as GitHub Copilot and Facebook InCoder tend to instill a false sense of confidence in developers about the robustness of their code.

But what does that mean in an economy where "move fast and break things" is still a common mantra among startups and established companies alike?
Exactly what you'd expect: we will end up with less secure products riddled with security issues and privacy flaws, and an overall lack of care for users' data.

In Summary

… we need to balance the scales!

We are at a crossroads in technological development, where the creations we bring to life must not only understand humanity but also align with its long-term interests and ethics (we also need better ethics to begin with). The stakes are incredibly high, as the potential benefits of these advancements could be the greatest we’ve ever seen, yet the risks are equally monumental.

Key to navigating this minefield is the preservation of privacy, which, with the right choices made today, can transition from being a historical concept to a fundamental, technologically and lawfully enshrined right.

To achieve this, there's an urgent need for global collaboration among governments as well as economic and technological drivers. We need to form a comprehensive and inclusive council dedicated to overseeing AI ethics, laws, and regulations. This council must be diverse, representing various societal and cultural backgrounds, and must have no financial interest, to ensure that the development and use of AI is not dominated by any single culture or economic power, thus maintaining a balanced and equitable approach to technological advancement.

Do you have thoughts or questions about this article? Reach out to Matt!

The post The Most Significant AI-related Risks in 2024 appeared first on Bugcrowd.

2024 Cybersecurity Trends and Predictions https://www.bugcrowd.com/blog/2024-cybersecurity-trends-and-predictions/ Wed, 27 Dec 2023 18:32:21 +0000 https://live-bug-crowd.pantheonsite.io/?p=11773 As 2023 draws to a close, we’re looking ahead to a new year. This past year brought on a set of cybersecurity challenges that we expect to continue in 2024. As the growing conflicts between Israel and Hamas and Russia and Ukraine continue, there will be new risks from global threat actors. This will require […]

The post 2024 Cybersecurity Trends and Predictions appeared first on Bugcrowd.

As 2023 draws to a close, we’re looking ahead to a new year. This past year brought on a set of cybersecurity challenges that we expect to continue in 2024.

As the conflicts between Israel and Hamas and between Russia and Ukraine continue, there will be new risks from global threat actors. This will require preparedness for new asymmetric threats. These expanding threats in a volatile, noisy environment will be difficult to predict. I recommend security leaders insert the crowdsourced hacker mindset into their decision-making to prepare for the chaos that comes when threat actors do try to monkey with IT systems.

We can also expect the bar to lower for attackers, largely due to the availability of generative AI tools. In the past, knowledge was a barrier to entry for attackers seeking big outcomes. Now, generative AI has given them access to many new tools and has broadened the potential threat group.

In using AI for defense, the challenge comes because prioritization is usually defined by business leaders, not by security practitioners. What we security folks feel is most urgent sometimes does not align with company priorities, which creates risk for the organization. Seen through that lens, our work around AI is to surface insights from the overall data set as it relates to risk. A vulnerability on its own is bad enough, but a vulnerability plus a real threat makes it urgent.

The Bugcrowd team compiled a few of my predictions for cybersecurity in 2024 into this handy infographic. I’d love to hear what is top of mind for you and your security team in the new year.

 

Casey Speaks: 2024 Security Trends and Predictions

Vulnerability Disclosure Policy: What is It & Why is it Important? https://www.bugcrowd.com/blog/vulnerability-disclosure-policy-what-is-it-why-is-it-important/ Fri, 15 Dec 2023 08:00:00 +0000 https://www.bugcrowd.com/vulnerability-disclosure-policy-what-is-it-why-is-it-important/ A vulnerability disclosure policy sets the rules of engagement for an ethical hacker to identify and submit information on security vulnerabilities. Vulnerability disclosure policies establish the communications framework for the report of discovered security weaknesses and vulnerabilities. This enables all parties to exchange data in a formal and consistent way and confirm receipt of the […]

The post Vulnerability Disclosure Policy: What is It & Why is it Important? appeared first on Bugcrowd.

A vulnerability disclosure policy sets the rules of engagement for an ethical hacker to identify and submit information on security vulnerabilities. It establishes the communications framework for reporting discovered security weaknesses and vulnerabilities, enabling all parties to exchange data in a formal and consistent way and to confirm receipt of the communications.

Ethical hackers can help organizations improve the security of their networks, systems, and applications. To do this, they are retained under contract for traditional outsourced penetration testing or, increasingly, through the faster-growing model of crowdsourced security testing. In many instances, ethical hackers identify vulnerabilities out of goodwill, without any expectation of remuneration.

NIST defines a vulnerability as a "weakness in an information system, system security procedure, internal control, or implementation that could be exploited or triggered by a threat source." 

To be successful, ethical hackers must take the perspective of malicious threat actors, stepping into their shoes and viewing an organization's defenses with an attacker's mindset. They actively probe cyberdefenses for vulnerabilities that would allow them to position a successful cyberattack. Every vulnerability an ethical hacker identifies reduces or eliminates the opportunity for the next real malicious threat actor.

Interaction with ethical hackers must be subject to important ground rules agreed upon between the hacker and the organization. The most important of these rules of engagement are established in a vulnerability disclosure policy.

What are the key components of a vulnerability disclosure policy?

Commitment.

The introductory section provides background on the organization and its commitment to security. It explains why the policy was created and what its goals are. It is a statement of goodwill and encouragement: reported vulnerabilities are potentially of high value, reducing risk and potentially eliminating the expense and reputational damage caused by a successful cyberattack.

Safe Harbor.

This section explicitly declares the organization's commitment not to take legal action against security research activities that represent a "good faith" effort to follow the policy. The authorization and safe harbor section clearly states that good faith efforts will not result in the initiation of legal action. The language recommended by CISA for government agency vulnerability disclosure policy authorization and safe harbor is: 

“If you make a good faith effort to comply with this policy during your security research, we will consider your research to be authorized. We will work with you to understand and resolve the issue quickly, and AGENCY NAME will not recommend or pursue legal action related to your research. Should legal action be initiated by a third party against you for activities that were conducted in accordance with this policy, we will make this authorization known.”

Important Guidelines.

Guidelines further set the boundaries of the rules of engagement for ethical hackers. They may include an explicit request to provide notification as soon as possible after the discovery of a potential security vulnerability. Commonly, exploits may be used only to confirm a vulnerability; many vulnerability disclosure policies request that discovered exploits not be used to further compromise data, establish persistence, or move laterally to other systems.

Scope.

Scope provides an explicit view of the properties and internet-connected systems covered by the policy, the products to which it may apply, and the vulnerability types that are applicable. Scope should also identify any testing methodologies that are not authorized. For example, VDPs typically do not allow denial-of-service (DoS or DDoS) attacks, or physical attacks such as attempting to access a facility. Social engineering, perhaps through phishing, is also often not authorized. Situations vary, and it is important to spell out exactly what is permissible and what is not.
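One practical way to make scope unambiguous is to express it in a machine-checkable form that tooling (or a hacker's own scripts) can consult before testing begins. The sketch below is an illustration only; the domains and wildcard patterns are hypothetical, not drawn from any real policy:

```python
from fnmatch import fnmatch

# Hypothetical scope definition; out-of-scope entries override in-scope ones.
IN_SCOPE = ["*.example.com", "api.example.org"]
OUT_OF_SCOPE = ["legacy.example.com"]  # e.g., a fragile system excluded from testing

def in_scope(host: str) -> bool:
    """Return True if a hostname is covered by the policy's scope."""
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return False
    return any(fnmatch(host, pattern) for pattern in IN_SCOPE)

print(in_scope("app.example.com"))     # True: matches *.example.com
print(in_scope("legacy.example.com"))  # False: explicitly excluded
print(in_scope("example.net"))         # False: never in scope
```

Checking exclusions first mirrors how real policies read: an explicit carve-out beats a broad wildcard.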

Process.

Process covers the mechanisms ethical hackers use to correctly report vulnerabilities. This section includes instructions on where reports should be sent, along with the information the organization requires to find and analyze the vulnerability, such as its location, its potential impact, and other technical details needed to identify and reproduce it. It should also state the timeframe for acknowledging receipt of the report.

Best practice is to allow ethical hackers the option of submitting vulnerability reports anonymously. In this case the vulnerability disclosure policy would not require the submission of identifying information. 
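The reporting requirements described above can be captured as a simple submission checklist. In this sketch the field names are illustrative rather than any standard; note that the reporter field is deliberately optional, reflecting the best practice of allowing anonymous submissions:

```python
# Hypothetical required fields for a vulnerability report.
REQUIRED_FIELDS = {"title", "location", "impact", "steps_to_reproduce"}

def validate_report(report: dict) -> list:
    """Return a sorted list of missing required fields (empty means acceptable)."""
    return sorted(REQUIRED_FIELDS - report.keys())

report = {
    "title": "Reflected XSS in search box",
    "location": "https://app.example.com/search",
    "impact": "Session token theft via crafted link",
    "steps_to_reproduce": "1. Visit /search?q=... 2. Observe script execution",
    # "reporter" is omitted: anonymous submissions should be accepted
}
print(validate_report(report))  # [] -> report is complete
```

A check like this can run at submission time, bouncing incomplete reports back with a list of what is missing rather than leaving triage teams to chase details later.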

Examples of Vulnerability Disclosure Policies

The Department of Homeland Security has published a vulnerability disclosure policy template: https://cyber.dhs.gov/bod/20-01/vdp-template/. The template spells out sections in a policy template for introduction, authorization, guidelines, test methods, scope, reporting a vulnerability, and the expectations and deliverables from both parties. 

Other examples of active vulnerability disclosure policies from government and commercial enterprises include:

U.S. Department of the Interior  

U.S. Department of the Treasury

U.S. Department of Health and Human Services

U.S. Department of Education

U.S. Department of Transportation

U.S. Department of Justice

Bank of England

Deutsche Bank

Saxo Bank

Starling Bank

Trade-offs in Disclosure Policy Definition

Responsible disclosure allows a vulnerability to be disclosed only after it has been eliminated. Developers and vendors may need considerable time to patch the vulnerability. By limiting the flow of information, it can be argued, risk is lower because fewer threat actors are aware of the vulnerability. However, all it takes is one motivated threat actor to discover the vulnerability independently; in that scenario, responsible disclosure may give knowledgeable threat actors more time to exploit the weakness and complete a successful breach. Vulnerability disclosure policies generally specify the need for responsible disclosure, which is by far preferred by the impacted organizations.

Full disclosure sits at the other end of the spectrum. If an ethical hacker has done everything possible to alert an organization to a vulnerability, without success, then full disclosure can emerge as an option of last resort. Full disclosure starts from the assumption that there is always a threat actor aware of any given vulnerability; a newly discovered vulnerability therefore presents significant risk and must be disclosed as early as possible. The disclosure puts pressure on the affected parties to move rapidly and take necessary precautions. On balance, full disclosure trades the risk of exploitation for wider research support and advanced preparation by cyber defenders. Decisions for full disclosure are usually made by the ethical hacker and are not encouraged by the impacted organization. 

Framework Standards

ISO/IEC 29147 provides excellent guidance on the disclosure of vulnerabilities in products and services. Vulnerability disclosure helps organizations prioritize risk, better defend systems and data, and direct cybersecurity investments. Coordinated vulnerability disclosure is especially important when multiple vendors are affected. For more information, see https://www.iso.org/standard/72311.html

ISO/IEC 30111 (https://www.iso.org/standard/69725.html) shares requirements and recommendations for how to process and remediate potential vulnerabilities. 

Vulnerability Disclosure Programs and Policies Bring Compelling Value

Bugcrowd connects companies and their applications to a highly specialized network of security researchers: the Crowd. The Crowd can identify critical software vulnerabilities faster than traditional methods. Powered by the Bugcrowd crowdsourced security platform, organizations of all sizes can run security programs to efficiently test their applications and remediate vulnerabilities before they are exploited.

Now that we’ve covered vulnerability disclosure policy, dive deeper into vulnerability disclosure programs in The Ultimate Guide to Vulnerability Disclosure. If you’re interested in setting up your own program, get started today. 

The Power of Numbers: Benefits of Crowdsourced Security Testing https://www.bugcrowd.com/blog/what-are-the-benefits-of-crowdsourced-security-testing/ Fri, 01 Dec 2023 14:01:00 +0000 https://live-bug-crowd.pantheonsite.io/?p=11389 Software is becoming more complex every year. We see new tools for development, increased automation through AI, and an ever-growing list of environments for building and devices for usage of the software that we interact with on a daily basis. This is driven by technological trends that we can expect to continue and accelerate, from […]

The post The Power of Numbers: Benefits of Crowdsourced Security Testing appeared first on Bugcrowd.

Software is becoming more complex every year. We see new development tools, increased automation through AI, and an ever-growing list of environments to build for and devices to run the software we interact with daily. This is driven by technological trends that we can expect to continue and accelerate, from sophisticated AI to the blurring boundary between corporate and personal networks in a remote-first working world. On top of this, widespread pressure to release software more quickly and automate more processes is creating new security concerns.

As software development gets faster and more opaque, security has become more important and more difficult, which has revealed the shortcomings of traditional security testing. This market need comes alongside technical developments that have led to innovation in cybersecurity, and in particular, the growth of the now-mature category of crowdsourced security.

What is crowdsourced security?

Crowdsourced security is an approach to securing digital assets that leverages the collective skill and experience of ethical hackers to tap into the wisdom of the crowd, where large and diverse groups can make discoveries more effectively than individuals.

These hackers are given direction, scope, and sometimes financial incentives to identify and report vulnerabilities, or bugs, simulating techniques used by threat actors. Owners of these digital assets will then remediate the issue and offer public recognition for the hacker’s work (in the case of a vulnerability disclosure program) or financial rewards corresponding to the criticality of the bug (in the case of a bug bounty program).
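Criticality-based rewards are essentially a lookup from severity rating to payout. The tiers below borrow Bugcrowd-style P1–P4 labels, but the dollar amounts are purely hypothetical; every program publishes its own reward table:

```python
# Hypothetical reward tiers keyed by severity rating (P1 = most critical).
REWARD_TIERS = {"P1": 10_000, "P2": 4_000, "P3": 1_200, "P4": 300}

def reward_for(severity: str) -> int:
    """Look up the payout for a validated submission; unlisted ratings earn nothing."""
    return REWARD_TIERS.get(severity, 0)

print(reward_for("P1"))  # 10000
print(reward_for("P5"))  # 0 (informational findings often go unrewarded)
```

The point of the exercise is transparency: hackers can see, before testing, roughly what a finding of a given criticality is worth.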

Investing in crowdsourced security testing means tapping into the breadth of the world’s talent and all of the talent, experience, and cognitive diversity that comes with it. Here are just some of the benefits that it provides.

What are the types of crowdsourced security solutions?

The most common crowdsourced security solutions are Vulnerability Disclosure Programs (VDPs), Bug Bounty Programs, and Penetration Testing.

    • VDPs—VDPs are a framework put in place by organizations to encourage hackers to share any vulnerabilities they discover with the asset's owner. They offer safe harbor clauses that provide legal protection to good faith hackers, and should also offer public disclosure of valid submissions to acknowledge hackers who take the time to contribute to security.
    • Bug Bounty Programs—Bug Bounty Programs are security initiatives that incentivize hackers to find and report vulnerabilities in an organization's products and digital infrastructure. These programs lay out the scope of the assets open to testing and offer financial rewards based on the criticality of the bugs discovered. Managed bounty programs have been around in this form since Casey Ellis founded Bugcrowd in 2012, and they are the crowdsourced security model that hackers engage with the most. 
    • Penetration Testing—Pen testing is a simulated cyberattack carried out by an authorized third party (known as pen testers) who tests and evaluates the security vulnerabilities of a target organization’s computer systems, networks, and application infrastructure. 

There are also more technical forms of crowdsourced testing, such as attack surface management, where specialist hackers are tasked with mapping a company's network, finding shadow and legacy IT in the process. Hackers deploy automated tools and human ingenuity to prioritize an organization's assets in terms of risk, whether that means AWS buckets holding vital data, a poorly configured IoT toaster that came with an acquisition, or the CEO's laptop. The skillset required for high-quality attack surface management is rarely found in generalist security professionals, so crowdsourced solutions often make the most sense economically. 

Who performs crowdsourced security testing?

Crowdsourced security testing draws on hackers from around the world. For Bugcrowd, this means "The Crowd": a community of security experts dedicated to finding and fixing security vulnerabilities. 

The range of hackers involved in crowdsourced testing can vary based on the nature of the client and what they are testing. With VDPs, hackers self-select by finding and submitting bugs informally. In contrast, selection for Bug Bounty Programs can range from public programs open to every member of The Crowd, to private programs limited to experts with the highest form of security clearance. Attack surface management tends to be performed by specialists who bring expertise in scanning technologies to the table, alongside deep knowledge of network risks. 

How much access is given to crowdsourced hackers?

In theory, all assets can be improved by crowdsourced security testing, but in practice the scope of access varies by assignment and type of testing. For VDPs, an organization's entire estate is considered in scope unless otherwise stated, whereas bug bounties tend to focus on specific products or infrastructure. Access is typically determined by the budget available for incentives and the internal capacity to remediate bugs. 

The limitations of traditional security testing

Traditional security testing methods, such as in-house penetration testing (pen testing) or automated vulnerability scanning, have limitations. Some of the issues of traditional security testing include: 

  • Time-consuming
  • Expensive
  • Cumbersome delivery
  • Poor security ROI
  • Difficult to scale
  • Delayed results
  • Questionable skill fit
  • Checklist-focused
  • Lack of ingenuity
  • No progress visibility
  • Siloed and inactionable
  • Low impact

Traditional security measures remain crucial and are not going anywhere. Best practice in patching, access controls, firewalls, and other elements of security hygiene remain a core necessity to keeping data safe. But when it comes to identifying and tackling sophisticated threats, traditional security testing has its limits.

While some security testing should always remain in-house, you should not be limited to this approach to securing data. We live in a digital age of abundance, and this should be seen as an asset to your security rather than just a source of threats.

What are the benefits of crowdsourced security testing?

Larger testing pool

Crowdsourced testing brings more brainpower to the task of securing your assets. Back in the late nineties, Eric S. Raymond suggested that "given enough eyeballs, all bugs are shallow." Since then, the number of eyeballs that can be turned to finding bugs has increased dramatically, and this increase in supply provides better security testing.

As well as the weight of numbers, “The Crowd” brings diversity of outlook, approach, and experience. Internal testers will be limited to their own expertise when assessing vulnerabilities and threats, but crowdsourced security draws from hackers of different ages and backgrounds living all over the world. Bringing this formidable group together provides far more creativity and more comprehensive testing to your security needs.

Cost-effective testing

Over a hundred years ago, US retail magnate John Wanamaker complained that half the money he spent on advertising was wasted, but he didn’t know which half. Until recently, something similar could be said of security investment, with money spent by Chief Information Security Officers (CISOs) often lacking a clear read-through to a company’s bottom line.

Crowdsourced security is part of the solution to this, by offering financial incentives to hackers based on the criticality of the bugs that they uncover. In the past, you had to pay for testing and hope for results. Now, the testing takes place in the background so your in-house security team can focus on strategic initiatives.

Reduced risk of bias

In traditional security testing, internal and even external testers may have a bias for the technology, tactics, techniques, and procedures with which they are most familiar. When picking from a limited pool of talent, you are bound to have blindspots, even with the most brilliant individuals. This is especially true for new and emerging technology and techniques.

With crowdsourced testing, market conditions incentivize diverse approaches by rewarding the results they produce, which reduces potential biases. Hackers from different jurisdictions have greater exposure to new technology and bring a mindset that challenges any status quo in security testing. No small team can cover the range of cognitive diversity and technical experience that The Crowd brings.

Coverage around the clock

Malicious actors are geographically distributed across time zones and don't work conventional 9–5 hours. This has caused headaches for CISOs and executive teams facing breaches and urgent vulnerabilities overnight and on weekends, often struggling to find the resources needed to deal with a threat.

Crowdsourced security taps into distributed security talent to turn this from a weakness to a strength. By investing in crowdsourced testing, you will have all of the capacity that you could need, when you need it. This allows you to access round-the-clock security testing from hackers. Organizations like Bugcrowd also include validation and triage services from a global team of experts, handling the most critical submissions within hours. This allows your team to quickly remediate and resolve vulnerabilities using critical context, helping you focus on what’s most important.

The ability to scale up testing capacity is not just useful out-of-hours. Sometimes vulnerabilities will emerge in software that is of critical importance to your business, and in these situations finding and fixing vulnerabilities becomes an urgent priority.

Investing in crowdsourced security lets you apply a lot of talent to this problem far more quickly than traditional security testing would allow. When the Log4J vulnerability emerged in December of 2021, Bugcrowd’s platform saw an enormous spike in activity, allowing buyers to remediate their most critical vulnerabilities in under three hours. This ability to scale capacity to meet emergent threats is an advantage of crowdsourced security over traditional testing.

Building relations with the hacking community

Crowdsourced testing is an effective approach to security, and this is known and respected by hackers and the wider security community. By investing in this approach to testing, you win respect within the industry that creates a virtuous cycle, making the best hackers more inclined to work with you.

This respect can extend to attracting talent, as it puts your company on the radar for security professionals eager to work in organizations that embrace best practice and work with the world’s best hackers. It can even extend to software developers and other technology professionals adjacent to security.

What are the pros and cons of crowdsourced security testing?

PROS

    • Depth of expertise: Taps into The Crowd's massive global expertise.
    • Return on investment: Paying for results offers a clear demonstration of value for security and finance teams.
    • Flexible capacity: Allows you to get results around the clock and scale up testing in response to urgent needs.

CONS

    • New business case may be needed: Organizations that have never used crowdsourced security may need to build a new business case.
    • Community engagement: While it is a transaction in a market, buyers need to respect the hacker community and its norms.

Summary—The Benefits of Crowdsourced Security Testing

We live in a world of digital abundance, and securing data and infrastructure means making this work for you rather than against you. Crowdsourced security offers you access to The Crowd’s diverse skillset, with collective experience of solving more problems than any individuals can even comprehend. 

By opting to pay for results instead of time, you can get the benefits of this hive mind while sticking to a reasonable budget. This is hard for those whose spending is restricted to traditional security services, or who struggle to unlock the potential for crowdsourced testing. Crowdsourced testing requires a new business case, but one that is necessary for today’s threats. 

Crowdsourced Security and the Bugcrowd Platform

Bugcrowd has been a pioneer since it was founded as the first crowdsourced security testing platform back in 2012. We offer testing such as bug bounties, pen tests, vulnerability disclosure programs, and more at scale and in an integrated, coordinated way.

Our platform includes access to a team of global security engineers who work as an extension to the platform, triaging and validating submissions so that the most critical bugs can be resolved within hours. 

To see what crowdsourced security testing looks like in practice, take a 5-minute tour. This overview shows how the Bugcrowd Platform connects you with trusted hackers to help you take back control and stay ahead of malicious actors. 

13 Scary Security Stats that will Haunt You https://www.bugcrowd.com/blog/13-scary-security-stats-that-will-haunt-you/ Tue, 17 Oct 2023 14:00:06 +0000 https://live-bug-crowd.pantheonsite.io/?p=10784 Every year on Halloween Security threats will make you scream There’s no need to fear You’ll be in the clear If Bugcrowd is part of your team! 2023’s Most Terrifying Security Stats Usually at Bugcrowd, we don’t like to buy into language that promotes fear, uncertainty, and doubt in order to scare people about the […]

The post 13 Scary Security Stats that will Haunt You appeared first on Bugcrowd.

Every year on Halloween

Security threats will make you scream

There’s no need to fear

You’ll be in the clear

If Bugcrowd is part of your team!

2023’s Most Terrifying Security Stats

Usually at Bugcrowd, we don’t like to buy into language that promotes fear, uncertainty, and doubt in order to scare people about the state of cybersecurity. But considering it is officially Spooky Season, we figure it’s ok to bring up the nightmares that haunt us in the middle of the night.

It’s not just the Halloween season that has this on our minds, considering October is also Cybersecurity Awareness Month. The 20th annual Cybersecurity Awareness Month is a collaborative effort between the U.S. government and the security industry to enhance cybersecurity awareness, encourage actions by the public to reduce online risk, and generate discussion on cyber threats on a national and global scale. This year’s theme is Secure our World.

For cybersecurity professionals, Cybersecurity Awareness Month can feel a little bit repetitive. Security is always top of mind as professionals constantly work to keep their organizations secure. Frankly, their jobs are getting harder. 89% of security professionals believe that security threats are more serious now than before the pandemic and 88% of security professionals believe their roles have become more difficult in the past three years.

This infographic covers some of Bugcrowd’s biggest findings from recent reports like Inside the Mind of a Hacker. It highlights concerning trends in the security industry with downright terrifying consequences. In cybersecurity, the spooky season lasts longer than October. Check out this infographic, if you dare…

 

The Top Five Generative AI Findings from Inside the Mind of a Hacker https://www.bugcrowd.com/blog/the-top-five-generative-ai-findings-from-inside-the-mind-of-a-hacker/ Wed, 30 Aug 2023 18:58:52 +0000 https://live-bug-crowd.pantheonsite.io/?p=10418 Every year when Bugcrowd’s flagship report, Inside the Mind of a Hacker, comes out, readers get the newest data and trends around the hacker community–from demographics to motivations. In the 2023 edition, we surprised readers with a special section all about generative AI. For so long, AI felt like a looming storm cloud–distant yet ominous. […]

The post The Top Five Generative AI Findings from Inside the Mind of a Hacker appeared first on Bugcrowd.

Every year when Bugcrowd’s flagship report, Inside the Mind of a Hacker, comes out, readers get the newest data and trends around the hacker community–from demographics to motivations. In the 2023 edition, we surprised readers with a special section all about generative AI. For so long, AI felt like a looming storm cloud–distant yet ominous. But almost overnight, generative AI became accessible to the masses. The internet filled with fear-mongering articles covering the terrifying consequences AI could have on cybersecurity, but in Inside the Mind of a Hacker, we wanted to talk about some of the cool ways hackers are using AI to make the world a safer place.

We compiled an infographic of some of the biggest generative AI findings, which you can check out below. Here are five of the most surprising findings:

1. 94% of hackers already use AI or plan to start using it in the future to help them ethically hack.

2. 72% of hackers do not believe AI will ever replicate the human creativity of hackers.

3. 91% of hackers believe that AI technologies have increased the value of hacking or will increase its value in the future.

4. 98% of hackers using generative AI for security research use ChatGPT.

5. 78% of hackers believe that AI will disrupt the way ethical hackers conduct penetration testing or work on bug bounty programs.

The post The Top Five Generative AI Findings from Inside the Mind of a Hacker appeared first on Bugcrowd.

]]>
Q&A with Nick McKenzie: CISO Advice, Generative AI, and Security Predictions https://www.bugcrowd.com/blog/q-a-with-ciso-nick-mckenzie/ Tue, 15 Aug 2023 13:30:03 +0000 https://live-bug-crowd.pantheonsite.io/?p=10292 Bugcrowd recently released the seventh edition of our annual flagship report, Inside the Mind of a Hacker. This report explores trends in ethical hacking, the motivations behind these hackers, and how organizations are leveraging the hacking community to elevate their security posture. This year’s edition takes a special look at the ways cybersecurity is changing […]

The post Q&A with Nick McKenzie: CISO Advice, Generative AI, and Security Predictions appeared first on Bugcrowd.

]]>
Bugcrowd recently released the seventh edition of our annual flagship report, Inside the Mind of a Hacker. This report explores trends in ethical hacking, the motivations behind these hackers, and how organizations are leveraging the hacking community to elevate their security posture. This year’s edition takes a special look at the ways cybersecurity is changing as a result of the mainstream adoption of generative AI. As a part of this exploration, we interviewed Nick McKenzie, CISO at Bugcrowd. We’ve included a snippet of that interview in this blog post. Download the report here to learn more about how hackers are using AI technologies to increase the value of their work. 

Tell us a little bit about yourself.

I’ve been in the cybersecurity industry for almost 25 years, and I’ve seen a shocking amount of change. Before Bugcrowd, I served as executive general manager and CSO at National Australia Bank (NAB), one of Australia’s four largest financial institutions. At NAB, I was responsible for overseeing the enterprise security portfolio, which included cyber, physical security, investigations, and operational fraud capabilities to protect customers and employees, support business growth, and enable an operationally resilient bank. 

I currently serve as an advisory board member for Google, Amazon Web Services, Netskope, and Digital Shadows.  

What are the most demanding challenges that CISOs are currently facing in their roles?

CISOs juggle multiple responsibilities, including maintaining a secure foundation and protecting against ever-evolving threats while trying to attract top talent in a highly competitive environment. CISOs must strike a balance between enabling business agility and providing robust protection—all while navigating the intricacies of country-specific technologies and cyber regulations. 

How should CISOs approach working with hackers and implementing crowdsourced security?

By leveraging a select number of curated hackers with small-scope proof of value (POV), CISOs can safely and effectively mitigate the perceived risk of crowdsourced security. Running this POV gives a CISO’s team familiarity with the platform, triage services, and customer success capabilities. As CISOs become more accustomed to the crowdsourced model, they are likely to go wider and deeper—sometimes straight to a public program to glean the ultimate benefits from a bigger, more diverse community of hackers.

In my personal view, the adoption of crowdsourced security does not increase operational risk; instead, it only decreases risk, as it enables the earlier identification of vulnerabilities harvested by experts in the security community before attackers can discover and exploit them. 

In the age of AI, could generative technologies outpace an organization’s ability to establish effective cybersecurity measures?

AI has progressed to the point where it is being used to both weaponize and circumvent traditional controls in organizations’ defenses. For example, more advanced malware, phishing campaigns, deepfakes, and voice cloning are continually being developed. 

As AI advances, CISOs must adapt existing security measures—or introduce new ones—to counter the increasingly sophisticated threats posed by generative technologies. 

Given the potential misuse of generative AI by cybercriminals, should there be stricter regulations on its development and use by hackers, or would that hinder innovation?

Imposing restrictions on the use of generative AI by the hacking community would hinder creativity and have the opposite of the intended effect. Regulations should be put in place across industries and organizations, rather than restricted to hackers. 

How can CISOs strike a balance between enjoying the benefits of generative AI and ensuring they don’t inadvertently contribute to the rise of more sophisticated cyberattacks? 

CISOs must be aware of the duality of generative AI in order to benefit from it while preventing its misuse by attackers or employees. Ultimately, it’s a tug of war between threat actors and defenders, who are constantly trying to evolve with the use of AI to outsmart each other. 

Could an increased reliance on generative AI displace human intelligence and diminish the value of hackers?

Generative AI will certainly help with speed and accuracy in vulnerability analysis, but it cannot replace the creativity and diverse perspectives of human hackers. Hackers spend long, arduous hours deconstructing a complex problem or unveiling an abstract vulnerability; presently, this is something that modern AI systems struggle with. 

Considering recent economic headwinds, what suggestions can you give to fellow CISOs who want to increase the ROI from security programs without significantly increasing their budgets?

CISOs should consider investing in newer frameworks and products such as bug bounty programs or penetration testing as a service, which improve time-to-remediation (TTR), digitize the experience end to end, and deliver continuous outcomes across an evolving attack surface. 

What do you predict the next two years of crowdsourced security will look like, and how is Bugcrowd planning to give hackers and customers the best experience?

In the next two years, crowdsourced security will become the preferred model for continuous assurance, incorporating generative AI to improve customer experiences—through things like improved triage and increased integration capabilities—and eventually expand the usage of hacker data. 

The post Q&A with Nick McKenzie: CISO Advice, Generative AI, and Security Predictions appeared first on Bugcrowd.

]]>
Cybersecurity and Generative AI Predictions with David Fairman, CIO and CSO of Netskope https://www.bugcrowd.com/blog/cybersecurity-and-generative-ai-predictions-with-david-fairman-cio-and-cso-of-netskope/ Tue, 08 Aug 2023 13:30:59 +0000 https://live-bug-crowd.pantheonsite.io/?p=10288 Bugcrowd recently released the seventh edition of our annual flagship report, Inside the Mind of a Hacker. This report explores trends in ethical hacking, the motivations behind these hackers, and how organizations are leveraging the hacking community to elevate their security posture. This year’s edition takes a special look at the ways cybersecurity is changing […]

The post Cybersecurity and Generative AI Predictions with David Fairman, CIO and CSO of Netskope appeared first on Bugcrowd.

]]>
Bugcrowd recently released the seventh edition of our annual flagship report, Inside the Mind of a Hacker. This report explores trends in ethical hacking, the motivations behind these hackers, and how organizations are leveraging the hacking community to elevate their security posture. This year’s edition takes a special look at the ways cybersecurity is changing as a result of the mainstream adoption of generative AI. As a part of this exploration, we interviewed David Fairman, CIO and CSO of Netskope. We’ve included a sneak peek of that interview in this blog post. Download the report here to learn more about how hackers are using AI technologies to increase the value of their work. 

Tell us a little bit about yourself and Netskope.

I have over 20 years of security experience in a range of disciplines, from fraud and financial crime to business continuity to operational risk. I’ve worked for, and consulted for, several large financial institutions and Fortune 500 companies across the globe. I’ve been recognized as one of the top CISOs to know, I’m a published author and an adjunct professor, and I was involved in founding several industry alliances with the aim of making it safer to do business in the digital world. 

For the past three years, I’ve been Chief Information Officer and Chief Security Officer for the Asia Pacific region at Netskope. Netskope is a global SASE leader helping organizations apply zero trust principles to protect data and modernize their security and network infrastructure. Netskope has been a Bugcrowd customer for over a year. 

How are generative AI applications revolutionizing the way organizations operate, and what are the potential cybersecurity risks associated with their use?

AI has been around for many years, and a number of risks have long been associated with it. AI is transforming business through hyper-automation, identifying new business models and trends, speeding up decision making, and increasing customer satisfaction.

Prior to late 2022, AI required specialized skill sets and vast amounts of training data; consequently, it was not used in the mainstream. The launch of ChatGPT made generative AI accessible to the masses. The barrier to entry has lowered, which means the adoption and use of this powerful technology is being taken up at a rapid pace. This means the risks that are associated with AI can have a large impact, more so than ever before. 

There are a number of risks that need to be considered, including data poisoning, prompt injection, and model inference, and these are just a few of the technical risks. There are also responsible AI elements that need to be considered, such as bias and fairness, security and privacy, and robustness and traceability.

What are the possible ways sensitive data can be inadvertently exposed through generative AI applications, and how can organizations mitigate these risks? 

Generative AI uses prompts to take inputs from a user and produce an output based on its logic and learning. Users can input sensitive data, such as personal information or proprietary source code, into the large language model (LLM). This information could then be accessed or produced as output for other users of the LLM. Users should be cognizant of the fact that any data they input into an LLM will be treated as public data.

Many organizations are asking—should we permit our employees to use generative AI applications like ChatGPT or Bard? The answer is yes, but only with the right modern data protection controls in place. 
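The “modern data protection controls” mentioned above are typically delivered by dedicated DLP tooling, but the core idea can be sketched in a few lines. The following is a hypothetical, minimal Python illustration of scrubbing sensitive patterns from a prompt before it leaves the organization; the patterns and the `redact` helper are invented for this example and are not part of any Netskope or Bugcrowd product:

```python
import re

# Hypothetical patterns for obviously sensitive data. Real DLP controls use
# far more thorough detection (classifiers, fingerprinting, context analysis).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match of a sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane@example.com, token sk-abcdefghijklmnop1234"))
# → Contact [REDACTED EMAIL], token [REDACTED API_KEY]
```

A control like this would sit in a proxy or gateway between employees and external LLM services, so that sensitive data never reaches the model in the first place.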

What impact does the use of generative AI have on threat attribution, and could it blur the lines between adversaries, making it challenging for organizations or governments to respond effectively? 

There are two sides to this question. On one hand, defenders will be able to use AI to perform threat attribution (and threat intelligence more broadly) to speed up the process, better defend their organizations, and respond more effectively than ever before. 

Conversely, threat actors will be using this to their advantage to increase their capability to attack at a scale and velocity never seen before. We, the defenders, need to lean into how we can leverage this to transform our defensive capabilities. 

Could generative AI applications lead to the development of “self-healing systems,” and if so, how might this change the way organizations approach cybersecurity? 

I think this has to be the case. I’ve said this for a long time: we need to find ways to operate at machine speed. When we talk about ‘mean time-to-detect’ and ‘mean time-to-contain,’ we’re reliant on human beings in the process, which can slow it down significantly. We know that time is critical when it comes to defending an organization; the faster and more efficiently we do this, the better we will protect our companies and customers. Self-healing systems will be one piece in this jigsaw puzzle.

As generative AI becomes more prevalent in cybersecurity, how do you think the role of security professionals will evolve, and what implications does a future with more human-machine collaboration have for informed decision making in cybersecurity?

I think cyber practitioners will increasingly become the ‘trainers’ of AI, using their cyber expertise to train models to perform cyber analysis at pace and at scale. There will always be a need to have a human in the loop in some respect, whether that be in the training of the model, the monitoring and supervision of the model (to ensure that it is behaving as expected and is not being manipulated), or in the generation of new models.

The post Cybersecurity and Generative AI Predictions with David Fairman, CIO and CSO of Netskope appeared first on Bugcrowd.

]]>