Authors: Ayman Khalil & Paul Gedeon
Beyond the Hoodie
Cyberattacks are no longer rare, one-off events. For many organizations, they’re part of the weekly reality, something you plan for, like outages, supplier delays, or equipment failures. Because much of our economy now depends on connected systems, penetration testing has moved from “a technical nice-to-have” to a business decision that leadership is expected to understand and support.
So how did we get here? What changed technically, economically, culturally, that pushed pentesting from the back office to the boardroom?
Before we go further, it’s worth asking a simple question: when you hear “cybersecurity”, “penetration testing”, or “hacking” (ethical or not), what’s the first image that comes to mind?
If your brain still goes straight to the hoodie-and-terminal stereotype, don’t worry. Mine does too sometimes. It’s a potent myth. It’s cinematic. It’s also misleading. Because the real evolution of penetration testing isn’t about hackers getting more stylish, it’s about the role changing shape again and again until it becomes something businesses rely on rather than something they tolerate.
Part 1: History
Penetration testing started as a kind of controlled skepticism.
In early computing, security wasn’t a product line. The question was: “What happens if someone tries to do the wrong thing on purpose?” People probed systems because they were mainly curious, were paid to do so, or both. The earliest “tiger teams”1 weren’t trying to produce glossy reports. They were trying to see whether a system could survive contact with an intelligent adversary.
The networks arrived. Complexity multiplied. And the role shifted:
- The breaker role (proof of possibility)
Early security testing was often about demonstrating that something could be broken: "Look, this assumption doesn't hold." It was technical and sometimes uncomfortable, because no one likes being shown that their system is fragile.
- The professional tester role (proof with permission)
As laws matured and organizations began to treat unauthorized access as a criminal offense, pentesting became formalized: authorization, scope, documentation, and rules of engagement. The tester was no longer merely a curious mind. They were a contracted, accountable professional.
- The risk translator (proof of impact)
At some point, "I found a vulnerability" stopped being enough. Leaders needed to understand consequences. So the tester became a translator: "Here's what I can do with it. Here's what it means. Here's how bad it gets if the attacker isn't me and exploits it."
- The assurance producer (proof that stands up externally)
This is the biggest one in Europe. Pentesting increasingly supports assurance. Not just internal comfort, but also external trust among customers, partners, auditors, and, at times, authorities. The work must be reproducible, evidence-based, and defensible.
That’s the real arc. The tools changed. The threats changed. But the most significant transformation is this: penetration testing moved from “technical demonstration” to “evidence generation”.
Part 2: The European Landscape

Europe’s cybersecurity ecosystem often gets described as “standards-driven”, but the truth is more nuanced. The European market mixes several layers that people love to mash together (and then everyone gets annoyed):
- Directive/Regulation: The legal obligation (what you must do).
- Harmonised standards (EN): Technical means of demonstrating compliance (how you can do it, supporting presumption of conformity where applicable).
- Referential: A national baseline framework (often used to qualify providers or structure audits).
- Certification scheme: A formal assurance process producing recognized outputs (certificate, evaluation report), with defined methodology and oversight.
This matters because penetration testing is performed differently across layers.
- If you’re doing a classic pentest for improvement, the goal is risk reduction and engineering fixes.
- If you’re under a certification scheme, the goal will include producing evidence in a format that is reproducible and reviewable.
Today, penetration testing is no longer a simple technical exercise: it is increasingly part of the national and European assurance ecosystem, with qualification frameworks and certification schemes overseen by authorities (for example, ANSSI in France and BSI in Germany). And that’s the difference between “we tested” and “we can prove it in a way others can trust”.
Part 3: Tooling
Let’s talk tools, because clients ask about them frequently. But let’s not pretend the tool list is the answer. A tool is a pencil; the test is the drawing.
The main tools commonly used in European reports:
- Information systems + web + API testing:
- Burp Suite / OWASP ZAP / Caido: Web and API interception, auth testing, logic flaws.
- Nmap: Discovery, service mapping, exposure verification.
- Wireshark: Traffic analysis, protocol behavior, “what’s really happening”.
- Nessus/OpenVAS: Baseline vulnerability scanning (proper coverage, not final truth).
- Postman: API exploration and behavior checks.
- Product / IoT Penetration Testing:
- Firmware extraction: Flash tools/programmers (e.g., CH341A) and flash reading utilities.
- Debug and interface work: OpenOCD (JTAG/SWD), serial adapters, logic analysers.
- Interface discovery: JTAGulator.
- Reverse engineering workflow: Disassembly, firmware triage/extraction pipelines, emulation when possible.
- Protocol interception: depends on the connectivity protocol (BLE, Wi-Fi, NFC, CAN).
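Nmap is the workhorse for discovery and service mapping, but the underlying idea — probing a host to see which services answer — can be sketched in a few lines. This is a hedged illustration only, not a replacement for Nmap (no service fingerprinting, no timing control, and never run it against systems you aren’t authorized to test):

```python
import socket

def check_tcp_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def quick_scan(host: str, ports: list) -> list:
    """Naive sequential probe; real engagements use Nmap for accuracy and speed."""
    return [p for p in ports if check_tcp_port(host, p)]
```

In practice a tester would feed the verified-open ports into deeper, tool-assisted analysis (version detection, protocol behavior, exposure review) rather than stopping at "open/closed".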
But what is a good penetration test supposed to do?
- Information discovery/reconnaissance: Identify the attack surface, including exposed services, endpoints, interfaces, versions, and trust boundaries.
- Threat-driven test design: Choose realistic attack paths based on architecture and attacker models (remote, local, physical access, supply chain).
- Vulnerability identification (manual + automated): Use scanners for coverage, manual testing for depth (authentication logic, cryptography misuse, insecure workflows).
- Exploitation and chaining (proof of impact): Demonstrate what an attacker can actually achieve: Data access, privilege escalation, device takeover, persistence.
- Evidence capture (reproducibility): Steps, traces, evidence. Enough data that another competent party can reproduce and verify.
- Remediation guidance + retest plan: Practical fixes, prioritization, and a path to verification.
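To make the evidence-capture step concrete, here is a minimal sketch (an illustration, not any scheme’s prescribed format) of how a tester might build a tamper-evident manifest of evidence files, so another competent party can verify that the traces, screenshots, and logs reviewed later are the ones the report refers to:

```python
import hashlib
import json
from pathlib import Path

def evidence_manifest(evidence_dir: str) -> dict:
    """Map each evidence file (relative path) to its SHA-256 digest.

    A reviewer can recompute the digests to confirm nothing changed
    between the test and the review.
    """
    manifest = {}
    for path in sorted(Path(evidence_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(evidence_dir))] = digest
    return manifest

def write_manifest(evidence_dir: str, out_file: str) -> None:
    """Serialize the manifest so it can ship alongside the report."""
    Path(out_file).write_text(json.dumps(evidence_manifest(evidence_dir), indent=2))
```

Formal schemes typically add their own requirements on top (signing, timestamping, chain-of-custody records); the point here is only that reproducibility starts with verifiable artifacts.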
Only then does the additional European layer emerge. National bodies and frameworks can shape methodology by setting expectations for independence, documentation quality, testing depth, chain of evidence, or laboratory conditions. In other words, the core process is universal; the assurance context adds the criteria on top.
Part 4: Client expectations
France is an excellent example of the importance of vocabulary.
- PASSI: This is a referential/qualification framework for information system security audit providers (not an international standard or directive). In many contexts, clients care about the provider’s governance, competence, and audit discipline, not only the tester’s technical skills.
- SecNumCloud: Typically appears when the subject is cloud service assurance, again in a structured qualification context.
- Products: Clients may reference CSPN, Common Criteria, or the EUCC approach, depending on the assurance target. These are certification schemes, and pentesting becomes part of a defined evaluation methodology, with expectations around repeatability and evidence.
What shows up in the reports? Often: Burp, Nmap, vulnerability scanners, and specific product tools when hardware is involved. But the deeper requirement is usually the structured method, clear evidence, and defensible conclusions.
German clients influenced by BSI ecosystems tend to value methodical work and traceable outputs. In product assurance contexts, certification pathways and structured evaluation expectations shape how pentesting evidence is produced. Even outside formal certification, reports are expected to be well structured (impact, proof, remediation, prioritization).
Part 5: How often should vendors test?
This is the takeaway that actually sticks, because it’s the one manufacturers feel in their bones:
A pentest is a snapshot. Products evolve. Attackers don’t wait.
So yes, you should retest, but the “how often” depends on what you’re talking about.
For information systems (enterprise environment)
- At least annually as a baseline (common best practice)
- After major infrastructure changes
- After major security incidents
- When new exposure arises (e.g., new public services, new VPN, new IAM model, etc.)
For products:
This is the more important one for modern vendors:
- Every meaningful release (new firmware, new app build, new backend API behavior)
- After introducing new connectivity or protocols
- When security-relevant components change (crypto libraries, auth flows, update mechanism)
- Before major market milestones (launch, expansion, certification-related deadlines)
And with the emergence of lifecycle-focused regulatory expectations (CRA being the obvious example people discuss), the tone shifts from a nice practice to “expected discipline”. Not because a regulation magically makes hackers appear, but because it formalizes the idea that security is maintained over time rather than proven once.
If your product ships version 1.0, then 1.1, then 1.2… the attack surface shifts each time. Treating pentesting as a one-off event is like testing the brakes on a prototype, then changing the wheels and the suspension and saying, “It’ll probably be fine”.
Part 6: How Red Alert Labs can help
Red Alert Labs supports manufacturers and organizations with independent penetration testing that’s both technically deep and structured for real-world assurance needs.
Our approach is:
- Independent and evidence-driven (clear reproduction steps, defensible results)
- Referential/scheme-aware when required by the client context
- Strong coverage from hardware to software for connected products: firmware extraction and reverse engineering, interface analysis (JTAG/UART/SPI/I2C), protocol interception, and the web/API/mobile surfaces that modern products depend on.
We also operate as an ISO 17025-accredited laboratory, demonstrating compliance with quality and assurance expectations, including the capability to support EUCC substantial-level contexts. Hence, the output isn’t just “we tested,” but “here’s evidence you can build decisions on.”
Because in the end, penetration testing isn’t about looking cool. It’s about removing uncertainty before uncertainty removes your sleep.


1. Tiger teams: a group of skilled hackers who devise and execute strategies to penetrate a company’s network systems.






