The Question of QSA Conflict

An interesting conversation over coffee with a client today gave me something to mull over. The question brought to the table was how some assessors, while engaged in an audit, bring up other services they offer, like ASV scans, penetration testing and vulnerability scanning, and how this may look like a conflict of interest.

I will start by proclaiming that we aren’t QSAs. We do have a myriad of certifications, such as ISO and other personal certs in information security, but this article isn’t about our resume. It’s about the ever-important question of the role of the QSA and whether they should be providing advisory services.

Why we choose not to go the route of QSAs is for another article, but suffice it to say, in the same way we work with CBs (certification bodies) for ISO projects, we employ the same business model for PCI or any other certification project. We rabidly believe in a clear demarcation between those doing the audit and those doing the implementation and advisory. After all, we have the DNA of statutory auditors, and every single customer or potential customer we take on requires a specific conflict check, to ensure independence and that we do not provide consulting work that may jeopardize our opinions when it comes to audit. Does anyone recall Enron? Worldcom? Waste Management? Goodbye, 90-year-old accounting firm.

We have worked with many QSAs in almost 14 years of doing PCI-DSS – and by QSAs here I mean individuals as well as QSA-Cs (QSA companies). Our group here is collectively made up of senior practitioners in information security and compliance, so we don’t have fresh graduates or juniors going about advising C-level veterans with 20-plus years of experience on how to run their networks or business.

A QSA (Qualified Security Assessor) company, in a nutshell, is a company qualified by the PCI Security Standards Council (PCI SSC) to perform assessments of organizations against the PCI standards. Take note of the word: QUALIFIED. This becomes important because there is a very strict re-qualification program from the PCI SSC to ensure that the quality of QSAs is maintained. Essentially, QSAs are vouched for by the PCI SSC to carry out assessment tasks. Are all QSAs created equal? Probably not; based on our experience, some are better than others in specific aspects of PCI-DSS. Even the PCI SSC has a special group of QSAs under their Global Executive Assessor Roundtable (GEAR), which we will touch on later.

The primary function of a QSA company is to evaluate and verify an organisation’s adherence to the PCI DSS requirements. This involves a thorough examination of the organisation’s cardholder data environment (CDE) — including its security systems, network architecture, access controls, and policies — to ensure that they meet the PCI requirements.

Following the assessment, the QSA company will then prepare a Report on Compliance (RoC) and an Attestation of Compliance (AoC), which are formal documents that certify the organization’s compliance status. Please don’t get me started on the dang certificate, because I will lose another year of my life to high blood pressure. These OFFICIAL documents are critical for the organization to demonstrate its commitment to security to partners, customers, and regulatory bodies. The certificate, however, can be framed and hung on the wall of your toilet, where it rightfully belongs. Right next to the toilet paper, which probably has slightly higher value.

Anyway, QSAs have very specific roles defined by the SSC:

– Validating and confirming Cardholder Data Environment (CDE) scope as defined by the assessed entity.
– Selecting employees, facilities, systems, and system components accurately representing the assessed environment if sampling is employed.
– Being present onsite at the assessed entity for the duration of each PCI DSS Assessment or perform remote assessment activities in accordance with applicable PCI SSC assessment guidance.
– Evaluating compensating controls, as applicable.
– Identifying and documenting items noted for improvement, as applicable.
– Evaluating customized controls and deriving testing procedures to test those controls, as applicable.
– Providing an opinion about whether the assessed entity meets PCI DSS Requirements.
– Effectively using the PCI DSS ROC Template to produce Reports on Compliance.
– Validating and attesting as to an entity’s PCI DSS compliance status.
– Maintaining documents, workpapers, and interview notes that were collected during the PCI DSS Assessment and used to validate the findings.
– Applying and maintaining independent judgement in all PCI DSS Assessment decisions.
– Conducting follow-up assessments, as needed

QSA PROGRAM GUIDE 2023

You can see above that there is no advisory, recommendation, consultation or implementation work listed. It’s purely assessment and audit. What we do see, more often than not, is QSAs offering other services under separate entities. This isn’t specifically disallowed, but the SSC does recommend a healthy dose of independence:

The QSA Company must have separation of duties controls in place to ensure Assessor Employees conducting or assisting with PCI SSC Assessments are independent and not subject to any conflict of interest.

QSA Qualification requirements 2023

It’s hard to adjudge this point, but the one providing the audit shouldn’t be the one providing the consultation and advisory services. Some companies get around this by having a separate arm providing special consultation. Which is where we step in: without doing any organizational gymnastics, we make a clear demarcation of who does the audit and who does the consultation and advisory role.

The next time you receive any proposal, be sure to ask the pertinent question: are they also providing support and advisory? Because a good part of the project is in that, not so much the audit. We have actually seen cases where the engaged assessor flat out refused to provide any consultation, advisory, templates or anything else to assist the customer, due to conflict of interest, leaving the client hanging high and dry unless they engage another consultative project with them separately. Is that the assessor’s fault? In theory, the assessor is simply abiding by the requirements for independence. On the other hand, these things should have been mentioned before the engagement: that the bulk of a PCI project is in the remediation part, and guidance and consultation would definitely be needed! It might reek of being a little disingenuous.

It’s frustrating for us when we get pulled in halfway through a project and we ask, well, why haven’t you queried your engaged QSA on this question? Well, because they want another sum of money for their consultative work, or they keep upselling services that the client isn’t sure they need unless they get the advisory in. What do you think that advisory is going to say? You can see that while on paper it might be easy to state that independence has been established, in reality it’s often difficult to distinguish where the audit, recommendations, advisory and services all start or end, as sometimes it’s all mashed. Like potatoes.

Here’s another official reference to this issue, in FAQ #1562 (shortened):

If a QSA Employee(s) recommends, designs, develops, provides, or implements controls for an entity, it is a conflict of interest for the same QSA Employee(s) to assess that control(s) or the requirement(s) impacted by the control(s).

Another QSA Employee of the same QSA Company (or subcontracted QSA) – not involved in designing, developing, or implementing the controls – may assess the effectiveness of the control(s) and/or the requirement(s) impacted by the control(s). The QSA Company must ensure adequate, documented, and defendable separation of duties is in place within its organization to prevent independence conflicts.

FAQ #1562

Again, this makes it fairly clear that QSAs providing both assessment and advisory/implementation services are not incorrect in doing so, but they need to ensure that proper safeguards are in place, presumably to be checked thoroughly during requalification, under section 2.2 “Independence” of the QSA requalification document. To save you time on reading: there isn’t much in the way of prescriptive guidance on ensuring this independence, so we’re left to how each company decides on its conflict of interest policies. Our service is to ensure, with confidence, that the advice you receive is indeed independent and, as far as we know, benefits the customer, not the assessor. We don’t have skin in their services.

In summary, QSAs can theoretically provide services, but they should come separately from the audit, so ensure you get the right understanding before starting off your PCI journey. Furthermore, and more concerningly, we’ve seen QSAs refuse to validate the scope provided to them, citing that this constitutes ‘consulting and advisory’ and needs additional payment. This is literally the first task a QSA does in their list of responsibilities, so call them out on it, or call us in and let us deal with them. These charlatans shouldn’t even be QSAs in the first place if this is what they are saying.

And finally, speaking of QSAs that are worth their salt – the primary one we often work with, ControlCase, has been included in the PCI SSC Global Executive Assessor Roundtable 2024 (GEAR 2024).

https://www.pcisecuritystandards.org/about_us/global_executive_assessor_roundtable/

GEAR is an Executive Committee-level advisory board comprising senior executives from PCI SSC assessor companies, which serves as a direct channel for communication between the senior leadership of payment security assessors and PCI SSC senior leadership.

In other words, if you want to know who the SSC looks to for PCI input, these are the guys. Personally, especially for a complex Level 1 certification, this would be the first group of QSAs I would consider before approaching others, as these are nominated based on reputation, endeavor and commitment to the security standards – not companies that cough up money to sponsor events or conferences, look prominent in their dazzling booths and give out free gifts, but are ultimately unable to deliver their projects properly to their clients.

Let us know via email to pcidss@pkfmalaysia.com if you have any queries on PCI-DSS, especially the new version 4.0 or any other compliances such as ISO27001, NIST, RMIT etc!

Major Changes of PCI v4

So now, as we approach the final throes of PCI-DSS v3.2.1, the remaining three weeks are all that is left of this venerable standard before we say farewell once and for all.

PCI-DSS v4.0 is a relative youngster, and we are already doing hours of updates with our customers on the things they need to prepare for. Don’t underestimate v4.0! While it’s not a time to panic, it’s also not a time to just lie back and think that v4.0 is not significant. It is.

Below is a table that provides insight into the major changes we are facing in v4.0.

Bearing in mind that most of the requirements now start off with keeping policies updated and documenting roles and responsibilities, the major changes are worth a little bit of focus. In the next series of articles, we will go through each one as thoroughly as we can and try to understand the context in which it exists.

Let’s start with the one at the top of the list: Requirement 3.4.2.

Req. 3.4.2: When using remote-access technologies, technical controls prevent copy and/or relocation of PAN for all personnel, except for those with documented, explicit authorization and a legitimate, defined business need

PCI v4.0

OK, we have underlined and emphasized a few key points in this statement, because we feel they are important. Let’s start with what 3.4.2 applies to.

It applies to: Remote Access

It requires: Technical Controls

It must: PREVENT THE COPYING/RELOCATION

Of the subject matter: Full Primary Account Number

In v3.2.1 this was found in section 12.3.10, with slightly different wording.

Req 12.3.10 For personnel accessing cardholder data via remote-access technologies, prohibit the copying, moving, and storage of cardholder data onto local hard drives and removable electronic media, unless explicitly authorized for a defined business need. Where there is an authorized business need, the usage policies must require the data be protected in accordance with all applicable PCI DSS Requirements.

PCI v3.2.1

I think v4.0, aside from relocating the requirement to the more relevant Requirement 3 (as opposed to Requirement 12, which we call the homeless requirement, for any controls that don’t seem to fall into any of the earlier requirements), reads better. Firstly, putting it in Requirement 3 puts the onus on the reader to consider this as part of the protection of stored account data, which is the point of Requirement 3. Furthermore, digging into the sub-requirement, the 3.4 section header states: Access to displays of full PAN and ability to copy PAN is restricted.

This is the context in which we find the child of this 3.4 section, called 3.4.2, and we need to understand it first before we go out and start shopping for the first DLP system on the market and yell out “WE ARE COMPLIANT!”

3.4 talks about displays of FULL PAN. So we aren’t talking about truncated or encrypted PAN here. In theory, if you copy out a truncated PAN or an encrypted PAN, you shouldn’t trigger 3.4.2. It’s specific to full PAN. While we are at it, we aren’t even talking about cardholder data in general. A PAN is part of cardholder data, but not all cardholder data is PAN. Like the Hulk is part of the Avengers, but not all Avengers are the Hulk. So if you want to copy the cardholder name or expiration date for whatever reason – data analysis, behavioural prediction, stalking etc. – this isn’t the requirement you are looking for.
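To make the full-versus-truncated distinction concrete, here is a minimal Python sketch of truncation, keeping the first six and last four digits. This is purely our own illustration (the function name and format are ours, not anything defined by the standard – and note v4.0 also permits other truncation formats for longer PANs):

```python
def truncate_pan(pan: str) -> str:
    """Truncate a PAN to first-6 / last-4, masking the middle digits.

    A truncated PAN is no longer 'full PAN', so copying it out of the
    CDE would not, by itself, trigger Requirement 3.4.2.
    """
    digits = pan.replace(" ", "")
    if not (13 <= len(digits) <= 19 and digits.isdigit()):
        raise ValueError("not a plausible PAN length")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(truncate_pan("4111 1111 1111 1111"))  # 411111******1111
```

The test number above (4111…) is a standard Visa test PAN, not a real card.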

Perhaps this is a good time to remind ourselves what Account Data, Cardholder Data and Sensitive Authentication Data (SAD) are.

The previous v3.2.1 doesn’t actually state ‘technical controls’, which goes to say that a documentary control, a policy control, or something in the Acceptable Use Policy could also pass off as compliant. V4.0 removes that ambiguity. Of course, the policy should be there, but technical controls are specific. It has to be technical. It can’t be: oh wait, I have a nice paragraph in section 145.54(d)(i)(iii)(ab)(2.4601) of my information security acceptance document that states this!

So these technical control(s) must PREVENT copying and relocation. Firstly, just to be clear: copying is Ctrl-C and Ctrl-V somewhere else. Relocation is Ctrl-X and Ctrl-V somewhere else. Both have their problems. With copying, we end up with PAN existing in multiple locations. With relocation, the PAN is moved, and systems accessing the previous location will throw up an error – causing system integrity and performance issues. Suffice it to say, v4.0 demands the prevention of both happening to PAN. Unless you have a need that is:

a) DOCUMENTED

b) EXPLICITLY AUTHORIZED (not Implied)

c) LEGITIMATE

d) DEFINED

When a business need is both “documented” and “defined,” it means that the requirement has been both precisely articulated (defined) and recorded in an official capacity (documented). So you need a list of people with access (the who), the reasons they legitimately need to access/copy/relocate PAN in terms of their business (the why), explicitly authorized by a proper authority (not themselves, obviously).

Finally, let’s talk about technical controls. Now, remember, this applies to REMOTE ACCESS. I’ve heard of clients who say: hey, no worries, we have logging and monitoring in place for internal users. Or we have a web application firewall in place. Or we have Cloudflare in place. Or we have a thermonuclear rocket in place to release in case we get attacked. This control already implies ‘remote access’ into the environment. The users have passed the perimeter. It implies they are already trusted personnel, contractors or service providers with properly authorized REMOTE ACCESS. Also, note that the authorization here is NOT for remote access; it is for the explicit action of copying/relocating PAN. In this case, most people would probably not have a business reason for copying/relocating PAN to their own systems unless for very specific business flow requirements. This means only very few people in your organization should have this applied to them, under very specific circumstances. An actual real-life example: an insurance client of ours had to copy all transaction information, including card details, in an encrypted format, put it onto removable media (like a CD-ROM) and then send it over to the Ombudsman for Financial Services as part of a regulatory requirement. That’s pretty specific.

So what passes off as a ‘technical control’? A technical control may be as simple as completely preventing copy/paste or cut/paste when accessing via remote access. This can be done in RDP, or by disabling the clipboard via SSL VPN. While I am not the most expert product specialist in remote access technologies, I can venture to say it’s fairly common to have these controls built into the remote access product. So there may not be a need for DLP in that sense, as the goal here is to prevent the copying and relocation of PAN.

Now, that being said, an umbrella disallowing of copy and paste may not go down well with some suits or C-levels who want to copy stuff to their drive to work on while they are in the Bahamas. Of course. You could provide certain granular controls, depending on your VPN product or which part of the network they access. If a granular control cannot be agreed on, then a possible way is to enforce proper control via DLP (Data Loss Prevention) in endpoint protection. Or control access to CDE/PAN via a hardened jump server with its local policy locked down. So the general VPN into company resources may be more lax, but the moment access to PAN is required, 3.4.2 technical controls come into play.
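For the curious, what a DLP-style filter does under the hood is roughly this: pattern-match candidate digit runs, then confirm them with the Luhn checksum that card numbers carry. Here is a rough Python sketch of the idea (our own illustration, not any particular product’s logic – real DLP engines also handle encodings, OCR, context and much more):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum used by card brands to validate a PAN."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate runs of 13-19 digits, allowing spaces or dashes between them.
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def contains_full_pan(text: str) -> bool:
    """True if the text appears to contain a full (untruncated) PAN."""
    for match in PAN_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            return True
    return False

print(contains_full_pan("invoice ref 4111 1111 1111 1111"))  # True
print(contains_full_pan("invoice ref 411111******1111"))     # False
```

Note how the truncated PAN in the second call does not trigger the check, which lines up with 3.4.2 being specific to full PAN.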

In the end, you could justify your technical controls in a myriad of ways. What matters, of course, is cost and efficiency. It has to make cost sense, and it must not require your users to jump through hoops like a circus monkey.

So there you have it, a breakdown of 3.4.2. We are hopping into the next one in the next article, so stay tuned. If you have any queries on PCI-DSS v4.0 or other related cybersecurity needs, be it SOC 1 or 2, ISO27001, ISO20000, NIST or whether Apollo 11 really landed on the moon in 1969, drop us a note at avantedge@pkfmalaysia.com and we will get back to you!

Zero Trust for 2024

As we enter the new year, let’s start off with a topic that most cybersecurity denizens would have heard of, and let’s clarify it a little.

Zero Trust.

It seems as good a place as any to start 2024 off with the pessimism that accompanied the end of last year – the spate of cybersecurity attacks in 2023 gave us a taste of what is to come: insurance company – check; social security – check; the app with our vaccination information – check. While breaking down those attacks is a topic for another article, what we are facing in the coming year is not just more of the same; far more advanced attacks are bound to happen.

While Zero Trust is simply a concept – one of many – to increase resistance to attacks or breaches, it’s by no means a silver bullet. There is NO silver bullet for this. We are under constant siege in information warfare, with the constant need to balance the need for sharing against the need for protection. It is as they say: the safest place would be in a cave. But that’s not living, that’s surviving. If you need to go somewhere, you fly, and you have information with the airlines. If you need to do banking, you have information with the banks. If you do your daily shopping online, you are entrusting the likes of Lazada et al. with information that you otherwise may not provide.

So Zero Trust doesn’t mean you conduct zero transactions; it’s basically a simple principle: Trust no one, verify everything. Compare it to the more traditional “trust but verify” approach, which assumed that everything inside an organisation’s network should be trusted, even if we do have verification of it. Here’s a breakdown of the concept, in hopefully simpler terms.

The Basic Premise: Imagine a company as a fortified castle. In the old days, once you were inside the castle walls, it was assumed you belonged there and could roam freely. At least this is based on the limited studies we have done by binge watching Game of Thrones. All historical facts of the middle ages can be verified through Game of Thrones, including the correct anatomy of a dragon.

Back to the analogy, what if an enemy disguised as a friend managed to get inside? They would potentially have access to everything. Zero Trust Architecture operates on the assumption that threats can exist both outside and inside the walls. Therefore, it verifies everyone’s identity and privileges, no matter where they are, before granting access to the castle’s resources. The 3 keys you can remember can be:

  1. Never Trust, Always Verify: Zero Trust means no implicit trust is granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or based on asset ownership (enterprise or personally owned). Basically, we are saying, I don’t care where you are or who you are, you are not having access to this system until I can verify who you are.
  2. Least Privilege Access: Individuals or systems are given the minimum levels of access — or permissions — needed to perform their tasks. This limits the potential damage from incidents such as breaches or employee mistakes. We see this issue a lot, whereby a C-level person insists on having access to everything even if he doesn’t necessarily know how to navigate a system without a mouse. When asked why, they say: well, because I am the boss. No. In Zero Trust, in fact, because you are the boss, you shouldn’t have access to a system that does not require your meddling. Get more sales and let the tech guys do their job!
  3. Micro-Segmentation: The network is broken into smaller zones to maintain separate access for separate parts of the network. If a hacker breaches one segment, they won’t have access to the entire network.
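If you like seeing things in code, the three keys above can be mashed into a toy Python sketch. Everything here (names, roles, zones) is our own invention, purely illustrative – a real implementation lives in your IAM, VPN and network gear, not in a script:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    authenticated_mfa: bool   # identity verified, not just a password
    role: str
    resource: str
    segment: str              # network zone the resource lives in

# Least privilege: each role maps to the minimum resources it needs.
ROLE_PERMISSIONS = {
    "dba":     {("billing-db", "data-zone")},
    "web-ops": {("web-server", "dmz")},
}

def allow(req: Request) -> bool:
    """Never trust, always verify: deny by default, regardless of location."""
    if not req.authenticated_mfa:          # verify identity first
        return False
    allowed = ROLE_PERMISSIONS.get(req.role, set())
    # Micro-segmentation: the resource AND its zone must both match.
    return (req.resource, req.segment) in allowed

# The boss, without a matching role, gets nothing -- even on the LAN.
print(allow(Request("boss", True, "executive", "billing-db", "data-zone")))  # False
print(allow(Request("dana", True, "dba", "billing-db", "data-zone")))        # True
```

Note the default is a denial: no role entry, no MFA, or a wrong zone all fall through to “no”, which is the whole point of never trusting implicitly.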

The steps you can follow to implement the concept of Zero Trust:

Identify Sensitive Data: Know where your critical data is stored and who has access to it. You can’t protect everything. Or at least not with the budget you are given, which for most IT groups is usually only slightly more than the company allocates to the upkeep of the office cat. So data identification is a must-have. Find out what data you most want to protect and spend your shoestring budget protecting it!

Verify Identity Rigorously: Use multi-factor authentication (MFA) and identity verification for anyone trying to access resources, especially important resources like logging systems, firewalls, external webservers etc. This could mean something you know (a password), something you have (a smartphone or token), or something you are (biometrics). It used to cost a mortgage to implement things like this, but over the years, cheaper solutions that are just as good have become available.

Contextual Access: Access decisions should consider the context. For example, accessing sensitive data from a company laptop in the office might be okay, but trying to access the same data from a personal device in a coffee shop might not be. This may not be easy, because with mobile devices you are basically accessing top-secret information via the same device on which you watch the cat playing the piano. It’s a nightmare for IT security – but again, this requires discipline. If you honestly need to access the server from Starbucks, then implement key controls like MFA, VPN and layered security, from a locked-down system.

Inspect and Log Traffic: Continuously monitor and log traffic for suspicious activity. If something unusual is detected, access can be automatically restricted. SOAR and SIEM products have advanced considerably over the years, and today we have many solutions that do not require you to sell a kidney to use. This is important because small companies are often targeted for attacks, especially if they service larger companies.
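Pulling the verification, contextual access and logging steps together, here is another toy Python sketch (again our own illustration, with made-up names; real products are far more sophisticated) of a context-aware decision where every outcome is logged:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def contextual_allow(device_managed: bool, network: str, mfa: bool) -> bool:
    """Context-aware decision: the same user gets different answers
    depending on device and network, and every decision is logged."""
    decision = mfa and device_managed and network in {"office", "vpn"}
    logging.info("access=%s device_managed=%s network=%s mfa=%s",
                 "granted" if decision else "denied",
                 device_managed, network, mfa)
    return decision

# Company laptop in the office: fine. Personal device in a coffee shop: no.
print(contextual_allow(True, "office", True))        # True
print(contextual_allow(False, "coffee-shop", True))  # False
```

The log line on every decision is the “inspect and log” part: it is what your SIEM would ingest to spot the unusual patterns worth restricting.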

In the end, it all comes down to the benefits of adopting this approach.

Enhanced Security: By verifying everything, Zero Trust minimizes the chances of unauthorised access, thereby enhancing overall security. Hopefully. Of course, we may still have those who are authorised but have malicious intent, which is much harder to protect against.

Data Protection: Sensitive data is better protected when access is tightly controlled and monitored. This equates to less quarter given to the threat actors out there.

Adaptability: Zero Trust is not tied to any one technology or platform and can adapt to the changing IT environment and emerging threats.

On the downside, there are still some challenges we need to surmount:

Complexity: Implementing Zero Trust can be complex, requiring changes in technology and culture. It’s not a single product but a security strategy that might involve various tools and technologies. This is not just a technical challenge, but a process and cultural change that may take time to adapt to.

User Experience: If not implemented thoughtfully, Zero Trust can lead to a cumbersome user experience with repeated authentication requests and restricted access. This is a problem we see a lot, especially in finance and insurance – user experience is key – but efficiency and security are like oil and water. Eternal enemies. Vader and Skywalker. Lex and Supes. United and Liverpool. Pineapple and Pizza.

Continuous Monitoring: Zero Trust requires continuous monitoring and adjustment of security policies and systems, which can be resource-intensive. We’ve seen implementations of SIEM and SOAR products that produce so many alerts and alarms that they make no sense anymore. It all becomes noise, and the effect of monitoring is diluted.

In summary, in an era where cyber threats are increasingly sophisticated and insiders can pose as much of a threat as external attackers, Zero Trust Architecture offers a robust framework for protecting an organisation’s critical assets. It’s about making our security proactive rather than reactive, and ensuring that the right people have the right access at the right times, and under the right conditions. It’s culturally difficult, especially in Malaysia, where, I will have to admit, our innate trust of people and our upbringing mean we almost always hold the door open for the guy behind us to walk in, especially if he is dressed like the boss. We hardly ever turn around and ask, “Who are you?” because we are such nice people in this country.

But adopt we must. For any organisation looking to bolster its cybersecurity posture, Zero Trust isn’t just an option; it’s becoming a necessity. At PKF, we have several services and products promoting Zero Trust – contact us at avantedge@pkfmalaysia.com to find out more. Happy New Year!

Gearing Up: How New Cybersecurity Guidelines Accelerate the Automotive Industry Security

So here you are, with your spanking-new SUV, fully EV and fully automated, with the most state-of-the-art systems built in. You get into the car, switch everything on, put on your favourite tune and head off to work. Suddenly, out of nowhere, your speakers go bonkers and an ominous voice says, “Now I got you…”. Your steering turns despite your best effort to right it, the accelerator depresses despite you taking your foot off the pedal, and your brakes don’t work anymore. You watch helplessly as your car flies over the embankment at 120 km an hour.

Homicide by car. Open the pod bay doors, HAL.

This seems far removed from current reality, but it might not be as far as we think.

Cyberattacks have been on the rise in the traditional automotive industry in recent years, as cars become more dependent on circuits and electronics as opposed to mechanics and gaskets.

Connectivity defines the modern vehicle. With some cars containing over 100 million lines of code and processing nearly 25GB of data per hour, computerization radically reimagines mobility – enabling telematics, infotainment and autonomous-drive capabilities that were unthinkable barely a decade ago. With this software-ization, securing IT components against cyber risks grows ever more vital. As showcased by researchers commandeering functions like braking and steering via consumer Wi-Fi or compromised infotainment apps, hackers now have pathways into safety-critical vehicle controls. Highly automated models promise even larger attack surfaces.

In the future, mechanics will be phased out in favour of electronics engineers to fix cars. You would go to an electronics shop instead of a mechanic’s shop. Say goodbye to the toothy uncle with the towel around his shoulder, shaking his leg in his greasy shirt.

Bearing this in mind, the Japanese automotive industry is making serious efforts to improve cybersecurity. The Japan Automobile Manufacturers Association (JAMA) and the Japan Auto Parts Industries Association (JAPIA) both formed cybersecurity working groups. These two collaborated in 2019 to develop the JAMA/JAPIA Cybersecurity Guidelines, and on March 31, 2022, a second version was released to help steer the industry toward a more cyber-resilient course. Spanning 156 requirements aligned to internationally recognized standards, the guidelines furnish a sector-specific blueprint for fortifying defenses.

Who Do the Guidelines Target?

Given deepening connectivity between various players, the guidelines take broad aim across the mobility ecosystem:

  • Automobile manufacturers
  • Major Tier 1 parts suppliers
  • Software and semiconductor vendors tightly integrated into products
  • Telecommunications carriers facilitating connectivity
  • Fleet operations centers managing vehicle data
  • Components manufacturers farther down supply tiers
  • Aftermarket service providers accessing internal buses
  • Dealership networks bridging manufacturers and consumers
  • Academic partners feeding talent pipelines

Essentially, any entity handling sensitive intellectual property or providing critical products/services supporting vehicle R&D, manufacturing, sales, maintenance or communications should adhere to the prescribed cyber controls. This is fairly normal; as with other standards out there, sub-contractors usually take the hit, as these standards are pushed down from the top.

While the guidelines focus on securing corporate IT environments, they spotlight risks from the increasing convergence of enterprise and industrial assets. As connected platforms, analytics and cloud infrastructures provide gateways for adversaries into production systems, shoring up corporate IT protection grows imperative.

Three-Year Roadmap for Enhancing Cybersecurity Posture

Given the significant dedication required to properly implement comprehensive cybersecurity management programs, the requirements are divided into three priority tiers reflecting basic, intermediate and advanced measures, demonstrating the minimum necessary countermeasures that must be used regardless of company size. This division allows organizations to methodically elevate their security stature over a three-year adoption roadmap:

Level 1 – Basic Security Hygiene (Mandatory):

The 35+ non-negotiable Level 1 controls target universals like access management, malware defenses, monitoring fundamentals, compliance auditing, encryption, and security training. These form the basic cyber hygiene mandatory across all auto-sector entities. These requirements are intended to build a chain of security and trust between companies and their business partners, and are also applicable to small and medium-sized enterprises. Non-automotive industries might do well to use some of these as baseline cybersecurity practices too. It’s basically cybersecurity hygiene. And we all know Japan has the best hygiene in the world, right?

Level 2 – Best Practices (2 Years):

An additional 60+ intermediate requirements call out data protection expansions, enhanced monitoring/logging, vulnerability management, security testing and supply chain risk management practices. Deeper employee training and executive awareness campaigns also feature.

Firms handling sensitive IP or high transaction volumes are expected to adopt Level 1 and 2 guidelines covering both foundational and sector-specific heightened risk areas within two years.

Companies should implement these controls, especially if they meet one of the following conditions:

1. Companies handling external confidential information (technical, customer information, etc.) within the supply chain.

2. Companies with significant internal technology/information relevant to the automotive industry.

3. Companies with a reasonable size/share that could have a significant impact on the industry supply chain due to unexpected disruptions.

Level 3 – Advanced Protections (3 Years):

Finally, over 50 sophisticated measures comprise the advanced tier targeting state-of-the-art safeguards. Encryption ubiquity, advanced behavioral monitoring, automated validation testing, penetration assessments and further elevation of risk management programs defined here help drive the industry’s cybermaturity.

These practices showcase leadership, with Level 3 representing an ultimate target for manufacturers expected to benchmark sector-wide security.

Built-in Flexibility Accounts for Organization Size

The tiered model acknowledges the varying cybersecurity investment capabilities across the industry landscape. This allows smaller players an achievable Level 1 entry point before working toward the expanded Level 2 and 3 guidelines on a timeline proportional to organizational size and risk.

Again, compared to standards like PCI-DSS, which adopts a similar tiered approach to compliance, this makes sense given the number of different entities affected by this standard.

Checklist Format Provides Clear Milestones for Growth

To ease adoption, requirements trace to numbered checkpoints within a detailed appendix. This enumerated format lets companies definitively benchmark postures against guidelines and methodically strengthen defenses while tracking progress.

Shared criteria similarly help suppliers demonstrate security improvements to automaker customers through consistent maturity evaluations, facilitating trust in the supply chain.

Guidance Tuned to Automotive Sector Risk Landscape

Along with staging requirements by attainability, guidelines tailor controls and concepts to risks distinct from other industries. While mapping extensively to internationally recognized standards like NIST and ISO27K, authors customized content to the sector’s specialized threats and priorities.

For example, Level 1 mandates continuous monitoring for unauthorized access or malware activity. This acknowledges the havoc potential of a breach within an interconnected web of automakers, parts suppliers and assembly lines. Differing secure zones and security priorities blur the lines: if (or when) a breach occurs, whose problem is it, and how do we track it?

The repeated emphasis on supply chain oversight, information exchange standards and third-party security likewise reflects the complex hand-offs and trust relationships fundamental to mobility ecosystem operations.

Build Cyber Resilience Across Fragmented Environments

As vehicles evolve into software-defined platforms, cyber principles growing from these Japanese guidelines can shape sector-wide baseline resilience. Automotive IT interconnectivity will only intensify, making a comprehensive, unified cybersecurity strategy essential. The scenario of the killer SUV may still be well into the future, but everything starts somewhere, and as the world moves further into the electronic and artificial, so too grows our dependence on everyday technology that we take for granted.

Whether global manufacturer or tiny niche parts maker, each player shares responsibility for hardening the greater environment. Just as drivetrains integrate thousands of precision components into harmonized mechanical systems, robust digital defenses emerge from many entities working in sync.

Implementing defined building blocks now allows the industry to preemptively navigate obstacles that could imperil revolutionary mobility pursuits ahead. For those seeking secure footing in the auto sector’s cyber journey, this three-year roadmap paves a straight path forward. This isn’t just for Japanese companies, but for any company whether in Malaysia or other regions that does business with Japanese automakers. This is a clarion call to the industry that cybersecurity should be foremost in the board’s agenda. Contact us at avantedge@pkfmalaysia.com and we will immediately get back to you. With our Japanese auditor and implementation partners, we can assist you in any way you want in navigating this standard.

Unless of course, you are in your Killer SUV. In that case, we can’t navigate that. Good luck!

What the FIM is going on

If you have been doing PCI-DSS for some years, you have probably come across this term called FIM (File Integrity Monitoring), which sometimes absolutely befuddles our customers. They generally think this is part of a wider SIEM or SOAR solution, but that is not necessarily so. We’ll explore why FIM is important, how it impacts PCI-DSS, some examples of configuration and what alternatives are there (if any). Here we go!

File Integrity Monitoring is the process of validating the integrity of operating system and application software files. It ensures that files have not been altered or compromised, whether maliciously or accidentally.

  1. Detecting Unauthorized Changes: FIM helps in detecting unauthorized changes to critical system files, configurations, and content files. These changes could be indicative of a breach, malware infection, or insider threat.
  2. Compliance Requirements: Many regulatory standards, such as PCI-DSS, HIPAA, and SOX, require FIM as part of their compliance criteria. It ensures that sensitive data is protected and that the integrity of the system is maintained.
  3. Preventing Data Breaches: By monitoring file changes, FIM can provide early warning signs of a potential data breach. It allows organizations to take proactive measures to prevent unauthorized access to sensitive information.
  4. Enhancing Forensic Analysis: FIM provides detailed logs of file changes, aiding in forensic analysis. It helps in understanding the nature of an attack, the affected files, and the potential impact.
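To make point 1 concrete, here is a minimal sketch of the core FIM idea in Python: take a baseline of SHA-256 hashes, re-scan later, and diff the two. The function names (`snapshot`, `diff`) are our own illustration, not any particular product’s API; real FIM tools add real-time hooks, tamper-evident baseline storage and alerting on top of this.

```python
# Minimal file-integrity sketch (illustrative only): hash every file under a
# directory into a baseline, then re-scan and report anything added, removed
# or modified since the baseline was taken.
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*")
        if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Compare two snapshots and bucket files into added/removed/modified."""
    return {
        "added": [f for f in current if f not in baseline],
        "removed": [f for f in baseline if f not in current],
        "modified": [f for f in current
                     if f in baseline and current[f] != baseline[f]],
    }
```

Running `diff(baseline, snapshot(path))` on a schedule is a poor man’s FIM; the hard parts commercial tools solve are protecting the baseline itself from tampering and alerting in real time.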

Let’s pause for now and see if a common antivirus/antimalware can take over this compliance requirement without deploying a specific FIM. Why? Because all companies generally have some sort of antivirus running on their systems, and all companies are stingy in their compliance spending, so part of our job is to see if current technology is sufficient to address compliance requirements. The difference between antivirus and FIM boils down to the reason for their existence, their meaning to life, the universe and everything. It’s 42!

While FIM focuses on monitoring the integrity of files, antivirus and antimalware solutions are designed to detect and remove malicious software.

  • Antivirus: Primarily targets known viruses and relies on signature-based detection. It may not detect unauthorized changes to files unless they are associated with a known virus signature.
  • Antimalware: Broader in scope, antimalware solutions target various malicious software, including viruses, spyware, and ransomware. Like antivirus, it may not detect subtle unauthorized file changes.

FIM complements these solutions by providing an additional layer of security, focusing on the integrity of files rather than just malicious content.

FIM also differs from Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solutions. That being said, it’s common for these systems to be bundled together with FIM, so while a SIEM may include FIM, a FIM does not necessarily include SIEM. They are like a somewhat dysfunctional family that gets together over Chinese New Year reunions.

  • SIEM: SIEM solutions collect and analyze log data from various sources to provide real-time analysis of security alerts. While SIEM can include FIM as a component, it encompasses a broader range of security monitoring functions.
  • SOAR: SOAR solutions focus on automating and orchestrating security operations. They help in coordinating various security tools and processes. Unlike FIM, which is more focused on file integrity, SOAR aims to streamline security operations and response.

FIM makes its appearance in PCI-DSS v4.0 in requirement 10, specifically 10.2, 10.3, 10.4, 10.5, 10.7 and further on in 11.5, 12.10 and A3.5.1.

In 10.2, PCI basically wants FIM to be part of the logging requirements in terms of what to capture, retention, response and so on. Make sure your FIM is configured to monitor the critical files, and that the FIM logs capture user and process details, i.e. who is responsible for the change event, in real time. Ensure alerts are generated for change events by privileged accounts, which can be further correlated to create an automated incident. Also, make sure changes to log file security settings, or removal of log files, trigger real-time alerts with exhaustive event details. All creation and deletion activities should be captured as well, and all event details must be as per 10.2.2 for the FIM log files.
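As a sketch of what this can look like in practice, here is a fragment of an OSSEC-style syscheck block (OSSEC is one of the common FIM tools; the directory choices below are illustrative, so check them against your own critical-file list rather than treating this as a drop-in configuration):

```xml
<syscheck>
  <!-- Full rescan interval in seconds (12 hours) -->
  <frequency>43200</frequency>
  <!-- Monitor critical paths in real time, hashing content and recording what changed -->
  <directories realtime="yes" check_all="yes" report_changes="yes">/etc,/usr/bin,/usr/sbin</directories>
  <!-- Alert when new files appear in monitored paths (creation events per 10.2.2) -->
  <alert_new_files>yes</alert_new_files>
  <!-- Exclude files expected to churn constantly -->
  <ignore>/etc/mtab</ignore>
</syscheck>
```

The alerts still need to flow into whatever correlation or incident process you run, per 12.10.5.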

10.3.4 makes specific mention of FIM, but there is some confusion around this requirement: “File integrity monitoring or change-detection mechanisms is used on audit logs to ensure that existing log data cannot be changed without generating alerts.” Obviously, if you try to monitor for changes in a log file and alert every time that file changes, your SIEM or SOAR will light up like a Christmas tree. By its very nature, a log file is supposed to change! So to avoid the noise, ensure log files are monitored for changes in security settings, like permissions or ownership. If a log file is deleted, that is also an anomaly. And for those logs that are archived or digitally signed, if any changes are made to these, your FIM should be able to detect it.
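With auditd (which we touch on further below), the way to avoid that Christmas-tree effect is to watch log files for attribute changes and deletions rather than content writes. A sketch, assuming a 64-bit system; the rule keys are hypothetical names of our own:

```
# Alert on permission/ownership (attribute) changes to audit logs, not normal appends
-w /var/log/audit/ -p a -k log-attr-change
# Alert on deletion or renaming of anything under /var/log/
-a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat -F dir=/var/log/ -k log-delete
```

Rules like these go into a file under /etc/audit/rules.d/ and are loaded with augenrules --load.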

Requirement 11 doesn’t change much for v4.0: the main portion for FIM is 11.5.2, and it remains pretty much the same. Requirement 12.10.5 does provide an explicit requirement to include FIM alerts in incident management and response. But you knew that already, right?

There are plenty of FIM solutions out there. A common one we see is OSSEC, which used to be deployed together with AlienVault. Tripwire is also a well-known name in the FIM arena. If you want to explore the inbuilt Linux version of FIM, auditd might be worth your time. For those unfamiliar with auditd, it’s a component that provides auditing functionality for the Linux kernel. It’s widely used for security monitoring, system troubleshooting, and compliance reporting. Configuring auditd might be intimidating to some at first, but here are some rules to get you started, found at this link:

https://github.com/linux-audit/audit-userspace/blob/master/rules/30-pci-dss-v31.rules

In summary, it covers the following areas (the config has been omitted in this article; you can go to the site for the details):

  1. User Access Linking (10.1): Implicitly met by the audit system.
  2. User Access to Cardholder Data (10.2.1): Requires a watch on the database, excluding daemon access. (Path to the database must be specified.)
  3. Logging Administrative Actions (10.2.2): Enable tty logging for su and sudo. Special cases for systemd-run and pkexec are included.
  4. Monitoring Privilege Escalation Configuration (10.2.2): Watches changes to /etc/sudoers and /etc/sudoers.d/.
  5. Access to Audit Trails (10.2.3): Monitors access to /var/log/audit/ and specific audit commands.
  6. Invalid Logical Access Attempts (10.2.4): Naturally met by PAM.
  7. Logging of Identification & Authentication (I&A) Mechanisms (10.2.5.a): Handled by PAM.
  8. Logging of Privilege Elevation (10.2.5.b): Monitors specific syscalls related to privilege elevation.
  9. Logging Account Changes (10.2.5.c): Watches changes to account-related files like /etc/group, /etc/passwd, etc.
  10. Time Data Protection (10.4.2b): Places rules to check time synchronization.
  11. Securing Audit Trails (10.5): Includes various measures to protect audit logs, limit viewing, prevent unauthorized modifications, back up files, and monitor log modifications.

So, there you go. Lastly, since PCI v4.0 came out, the council seems to have drawn a distinction between change-detection mechanisms (CDM) and file integrity monitoring, stating that FIM is part of CDM, sort of like a subset. I suppose this gives a little more leeway for companies to implement other types of CDM besides FIM, although FIM is probably the only one that can address all the above requirements comprehensively and without any need for compensating controls. But just for some ideas, below is a list of other CDMs that can possibly address the FIM functionality in part, automated or manual:

  1. Version Control Systems: These systems track changes to files and code within a development environment. They allow developers to see what was changed, who changed it, and why. Tools like Git, Subversion, and Mercurial are examples of version control systems that provide change detection.
  2. Database Monitoring Tools: These tools monitor changes to database schemas, configurations, and content. They can alert administrators to unauthorized alterations, additions, or deletions within the database. Tools like Redgate SQL Monitor or Oracle Audit Vault are examples.
  3. Configuration Management Tools: Configuration management tools like Ansible, Puppet, and Chef can detect changes in system configurations. They ensure that systems are consistently configured according to predefined policies and can alert administrators to unauthorized changes.
  4. Network Anomaly Detection Systems: These systems monitor network behavior and alert to changes that may indicate a security threat. They can detect changes in traffic patterns, unusual login attempts, or alterations to network configurations.
  5. Endpoint Detection and Response (EDR) Solutions: EDR solutions monitor endpoints for signs of malicious activities and changes. They can detect changes in system behavior, file activities, and registry settings, providing a broader view of potential security incidents.
  6. Log Monitoring and Analysis Tools: Tools like Splunk or LogRhythm analyze log files from various sources to detect changes in system behavior, user activities, or security settings. They can provide real-time alerts for suspicious changes.
  7. Digital Signature Verification: Some systems use digital signatures to verify the integrity of files and data. Any alteration to the digitally signed content would cause a verification failure, alerting to a potential unauthorized change.
  8. Cloud Security Tools: With the rise of cloud computing, tools like AWS Config or Azure Security Center provide change detection for cloud resources. They monitor configurations, permissions, and activities within the cloud environment.
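As a quick illustration of item 1, a version control system can act as a crude CDM for configuration files: commit a baseline, then any later drift shows up as uncommitted changes. The directory and file names below are made up for the example:

```shell
# Crude change detection with Git (illustrative sketch): baseline a config
# directory, then flag any file that has drifted from the committed state.
CFG_DIR=$(mktemp -d)                       # stand-in for a real config directory
echo "PermitRootLogin no" > "$CFG_DIR/sshd_config"

git -C "$CFG_DIR" init -q                  # create the baseline repository
git -C "$CFG_DIR" add -A
git -C "$CFG_DIR" -c user.email=fim@example.com -c user.name=fim commit -qm baseline

echo "PermitRootLogin yes" >> "$CFG_DIR/sshd_config"   # simulate an unauthorized edit

# Non-empty porcelain output means something changed since the baseline
CHANGES=$(git -C "$CFG_DIR" status --porcelain)
[ -n "$CHANGES" ] && echo "change detected: $CHANGES"
```

Unlike a real FIM, this only sees content changes at scan time and does nothing to protect the baseline repository itself, which is why we still recommend a proper FIM where one can be deployed.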

Again, we would highly recommend that a FIM be used, but where it is not possible in that environment, for instance a cloud environment, then other CDMs are possible. If you need to know more about FIM and PCI or any compliance in general, drop us a note at pcidss@pkfmalaysia.com and we will get back to you immediately!


© 2024 PKF AvantEdge
