
The Art of Compensating Controls

In our many advisory engagements over the years with companies going through PCI, one recurring challenge is this little thing called compensating controls.

It's a PCI term in many ways, same as 'segmentation penetration testing'. It's what you use when you can't address the actual control in PCI-DSS for a reason and you need to put other controls in place to address the 'spirit' of the control. The spirit means: why did PCI-DSS put that control in, in the first place?

Immediately, this becomes an outlet for some of the greatest creativity that has ever existed in our technology field. If everyone involved could channel such creativity in controls into developing innovative tech, we would be far ahead in achieving the technology vision 2020 that our country is lagging behind on. As they say: necessity is the mother of invention. So everyone starts inventing controls.

Unfortunately, controls invented out of thin air will likely bounce back rejected by the QSA, which causes further stress given the impending deadline and the admission that compliance will be delayed. So it's in everyone's interest that compensating controls are done correctly from the beginning.

Now, this article's purpose is not to go through the whole list of what compensating controls are. There are already thousands of articles that do that. Suffice it to say, a compensating control is what you put in place when you cannot meet a PCI requirement as written but still want to be compliant.

So to make it simple:

a. Write down which control you can't address. It may be logging and monitoring, complex passwords, etc.

b. Why can't you do it? A good reason: the legacy system running on the factory floor, churning out a thousand printed statements a minute, cannot be patched because patches are no longer being produced. A bad reason: Bob is new to the company and has no clue what patching means.

c. Risk analysis. This isn't something many companies naturally do for IT, but it has to be done. If the system cannot be patched, what are the risks? Malware infection? Internal exploit? Availability problems? Loss of power?

d. Document the control. This has to be done. You can't just go and say, "OK, QSA, trust us, it's in." You need detailed procedures on how the organisation will carry out these compensating controls.

e. Ensure it's implemented and validate it with the QSA. In fact, we would suggest bringing in the QSA early in the process. Since they are the ones validating your controls, it makes sense to onboard them early with the fact that you are doing compensating controls. What you may find acceptable for your risk may not be acceptable to the QSA. It's not a matter of, "Hey, let me bear the risk this year!" It's a matter of "If this thing goes south, who else is going to be affected?" – because credit card information isn't just the domain of the company handling it. It has upstream and downstream repercussions – for the payment brands, banks, acquirers, TPAs, service providers, all the way to the customers.

To be honest, compensating controls are a major pain in the butt. There is no way to describe it better. They are actually worse than the actual controls. Plus, they are not a given each year – the QSA may decide that due to next year's evolving risks, your controls are no longer acceptable!

An example here: say you can't patch your prehistoric system, for a good reason. The vendor has announced they no longer support that system, a new release is imminent in a year's time, and the notice to all customers is to bear down and wait for the release. At this point it's completely out of the customer's hands.

The compensating controls could be:

a) Having a documented notice from the vendor that security patching cannot be done

b) Hosting the system in an isolated VLAN that does not have any other CDE/non-CDE systems

c) MFA required for ALL access, not just administrative access

d) Firewall rules must be specific to port/source/destination (see the sketch after this list)

e) Copy/paste, USB, etc. are disallowed on said system, and any attempt to do so is logged through DLP

f) Antimalware must be installed and its logs specifically monitored

g) Logging and monitoring must be reviewed daily, specifically for this system, with a separate daily report on incidents affecting it
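To illustrate item d), here is a minimal sketch in iptables terms – the addresses and port are purely hypothetical and your firewall platform will differ, but the principle is that exactly one known source reaches the legacy system on one known port, and everything else is logged and dropped:

# Hypothetical: only app server 10.10.1.5 may reach legacy system
# 10.10.2.9 on TCP 7001; all other traffic to or from it is logged and dropped.
iptables -A FORWARD -s 10.10.1.5 -d 10.10.2.9 -p tcp --dport 7001 -j ACCEPT
iptables -A FORWARD -d 10.10.2.9 -j LOG --log-prefix "LEGACY-DROP: "
iptables -A FORWARD -d 10.10.2.9 -j DROP
iptables -A FORWARD -s 10.10.2.9 -j DROP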

Looking at the above examples, these are controls considered above and beyond what is necessary for the standard PCI requirements. In short, it's a lot harder, or more expensive, to get compensating controls done than the actual controls, no matter what you may think.

We once had a conversation with a company that was thinking of switching to us from their previous consultants. When we asked around, we realised that many of their systems were not PCI compliant and they had put in compensating controls. Their compensating control was: "Mitigation plan is in place to replace these systems in a year's time."

That's not a compensating control. That's something you plan to do in 365 days' time. In the meantime, what are you proposing to lower the risk? That is your compensating control.

Don't use compensating controls as a get-out-of-jail-free card. Any consultant/QSA worth their salt knows how difficult it is to get these controls done and, more importantly, passed for PCI-DSS. In other words, instead of looking at them as a convenient shortcut or workaround, they should be viewed as the last resort.

Drop us a note at pcidss@pkfmalaysia.com for any queries you may have on PCI-DSS and we will respond to that immediately!

Alienvault USM – Flat File Log Capture – Part 1

We've been working with and on Alienvault since the beginning of 2016, and a lot has changed since then. When we started out with Alienvault, they were still a small-ish company with big ambitions, working with a very technical group out of Cork, Ireland. We had direct access to their technical team (I think even to one particular engineer), and the knowledge we got from those early days is pretty much invaluable to where we are right now. Of course, Alienvault has changed a lot since then, now being part of AT&T – and for the record, we believe they have the right roadmap into the cloud with their USM Anywhere concept, and their product right now is much more robust and enterprise-ready. They are on the right trajectory.

However, back in the day, with Alienvault USM Appliance (not Anywhere), which is their appliance offering, we could literally 'jailbreak' the system, go into the underlying OS and do stuff to Alienvault that we can't do anymore in Anywhere. Some of the changes we made were to increase optimisation, put in our own scripts to clean up the system, troubleshoot it and, of course, create plugins for custom applications. I remember we would write custom plugins in 1–2 days for multiple applications because of deadlines – so much to do in so little time – but we did it anyway. We had to write a plugin for one of the oldest mainframes in a financial institution, with logs so difficult to interpret that we had to dig up old manuals to sort out the log and event entries. It was like interpreting Egyptian hieroglyphs. But that's what it took – 2 days, I think, because of compliance requirements and a customer breathing down our neck to get it done.

Writing plugins was the easier part of the battle – on some old machines or legacy applications, getting the logs out was the real problem. If Alienvault doesn't get the logs, it can't do anything with them. One solution was to leverage the HIDS (Host IDS) agent – OSSEC, as it was known – to grab log files from systems. It wasn't elegant, and we still ended up writing plugins to normalise the output, but it resolved the issue where an application could not forward logs to the SIEM, could not write them to the Windows Event service, or had no other way to get logs out to a syslogger. So the solution here is for the application to just write its logs to a file, and for Alienvault to go and grab that file and interpret it. It may not be real time, but it works.

There's a good write-up over at Alienvault:
https://www.alienvault.com/documentation/usm-appliance/ids-configuration/process-reading-log-file-with-hids-agent-windows.htm. A lot of what follows is a repeat of that, plus an exposition on why we do certain things in a certain way.

So the first thing to do here is to ensure that you are able to install HIDS on the server. HIDS will be the key to getting this file out to Alienvault. Technically, you could use NXLog as well, but let's explore that another time.

Once HIDS is installed, get into the OSSEC configuration file to define the <localfile> location. Assume that you have configured your application to write to a flat file called database.log.txt.
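As a minimal sketch – the path here is illustrative; use wherever your application actually writes – the <localfile> entry in the agent's ossec.conf would look like this:

<localfile>
  <!-- read the flat file line by line as plain text -->
  <log_format>syslog</log_format>
  <location>C:\AppLogs\database.log.txt</location>
</localfile>

The syslog log_format simply tells the HIDS agent to treat each line of the file as a plain-text log entry, which is what we want for a custom flat file.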

Go ahead and restart OSSEC. That's pretty much all you need to do on the agent side to start off, so it's pretty simple.

The rest of it is all done on Alienvault.

To summarise the steps:

Enable "logall" on your USM Appliance. You want to dump whatever arrives from that flat file into a log inside your Alienvault so you can start doing stuff with it. In this case, in your AV user interface:

Environment > Detection > HIDS > Config > Configuration.

Add <logall>yes</logall> to the <global> section of the file.
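The section would then read something like this (existing options left as they are):

<global>
  <!-- existing options stay untouched -->
  <logall>yes</logall>
</global>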

You are dumping these logs into /var/ossec/logs/archives/archives.log.

Restart the HIDS service through the UI.

You should be able to see new logs coming into archives.log. Just do a tail -f on it, edit the log file (database.log.txt) on your remote system (just write something to it) and see if it appears in your archives.log. Once you see it, you are almost done. Very simple.
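In concrete terms, assuming the same illustrative paths as before – on the USM Appliance:

tail -f /var/ossec/logs/archives/archives.log

and on the monitored Windows system, append a test line to the flat file (cmd):

echo test-entry-123 >> C:\AppLogs\database.log.txt

If the test line shows up in archives.log within a few seconds, the collection path is working.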

So for now, you have customised logs coming into your Alienvault. The next thing to do is to interpret these logs and make sure events can be derived from them – into something that is useful to you!

Drop us an email at alienvault@pkfmalaysia.com for more information on Alienvault or any technical queries you have, and we will attend to it.

PCI-DSS Full Disk Encryption Part 1

In PCI-DSS, one of the most difficult requirements to get through is Requirement 3, which deals with stored credit card information and how to protect it. Aside from Requirement 10: Logging and Requirement 6: Software, Requirement 3: Storage makes up the bulk of the remediation effort and cost of PCI-DSS.

The excerpt ominously states at the beginning: Protection methods such as encryption, truncation, masking, and hashing are critical components of cardholder data protection. If an intruder circumvents other security controls and gains access to encrypted data, without the proper cryptographic keys, the data is unreadable and unusable to that person. Other effective methods of protecting stored data should also be considered as potential risk mitigation opportunities. For example, methods for minimizing risk include not storing cardholder data unless absolutely necessary, truncating cardholder data if full PAN is not needed, and not sending unprotected PANs using end-user messaging technologies, such as e-mail and instant messaging.

It goes without saying that if you have credit card information on file for whatever reason, it would be a good time to relook at the necessity of it. If you don’t need it, get rid of it, because the cost of maintenance and remediation may not be worth whatever value you think you are obtaining from storage of card data.

If you do need it, well, PCI provides a few options for you to protect it: Encryption, Truncation, Masking and Hashing. In this series of articles we will be looking into encryption and more specifically Full Disk Encryption.

Encryption itself deserves a long drawn-out discussion, as do the types of encryption – you have applications doing encryption through the application library, database encryption like TDE, file or folder encryption, and full disk encryption. One part of it is the encryption methodology. The other part is the encryption key management. The latter is the one that usually throws up a headache.

We will be exploring Full Disk Encryption, or FDE, and where it can be implemented to comply with PCI-DSS.

There is a specific part in 3.4.1 stating:

If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.

So aside from the encryption being strong encryption and key management being done properly, PCI says, there are a few more things to be aware of for full disk encryption:

a) Logical access must be separate and independent of the native OS authentication

b) Decryption key must not be associated with the user account.

What does this mean?

Let’s look at Bitlocker for now, since that’s everyone’s favourite example.

Bitlocker has taken a lot of stick, probably because it's a native Microsoft offering. Maybe. I don't know. The fact is, Bitlocker can use 128- or 256-bit AES, so in terms of strong cryptography, it qualifies. It's the key management that's the issue.

For key management, the recommended usage with Bitlocker is the Trusted Platform Module (TPM) version 1.2 or later. The TPM is a hardware module in your server that acts somewhat like a key vault or key management module, to simplify it. It offers system verification to ensure there is no tampering with the system at startup. Beginning with Windows 10, version 1803, you can check TPM status in Windows Defender Security Center > Device Security > Security processor details. In previous versions of Windows, open the TPM MMC console (tpm.msc) and look under the Status heading.
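If you prefer the command line over the UI, a quick sketch in PowerShell (Get-Tpm ships with modern Windows; the WMI query is the older route to the spec version):

# Shows TpmPresent and TpmReady, among other fields
Get-Tpm

# Confirm the TPM spec version is 1.2 or later
Get-WmiObject -Namespace "root\cimv2\security\microsofttpm" -Class Win32_Tpm | Select-Object SpecVersion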

Bitlocker can also be used without TPM, although that means the system integrity checks are bypassed. It can operate alongside Active Directory, although newer versions of Bitlocker no longer store the password hash in AD by default. Instead, a recovery password can be stored in AD if required.

Even with the TPM, it's still not the end of it, because we need to make sure there is a separation of authentication for Bitlocker to operate. In this case we will look at configuring it with a PIN (which is essentially a password that you know).
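As a sketch of what that could look like in PowerShell – assuming the 'Require additional authentication at startup' group policy has already been enabled (Bitlocker refuses a PIN protector without it), and that C: is the volume in question:

# PIN is entered interactively as a SecureString; volume and method are illustrative
$pin = Read-Host -AsSecureString "Enter BitLocker PIN"
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -TpmAndPinProtector -Pin $pin

XtsAes256 is available from Windows 10 version 1511 onwards; on older builds you would specify Aes256 instead.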

First of all, let's see what we should be seeing at the end.

So at the end you should see both file systems encrypted. I've been asked before whether all volumes need to be encrypted, and the answer is no – Bitlocker can't do that anyway, because the small system (boot) partition must remain unencrypted for the machine to start. So for PCI, it makes sense NOT to store card data on drives that are not encrypted.
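A quick way to confirm which volumes are encrypted and protected (a sketch; fields picked for readability):

Get-BitLockerVolume | Format-Table MountPoint, VolumeStatus, EncryptionMethod, ProtectionStatus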

The next thing we need to check is that your setup has fulfilled the strong encryption requirement of PCI-DSS:

So you have a few things to verify to ensure that strong crypto is enabled and key protectors are in place. What you have now is Bitlocker enabled. You also need to properly document the key management policy – include the AES-256 or AES-128 method you are using, which drives are protected, and the key expiry dates.
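The classic way to verify the method and key protectors on a given volume (C: assumed) is still manage-bde:

manage-bde -status C:
manage-bde -protectors -get C:

The first shows the encryption method in use; the second lists every key protector – for our purposes you want to see the TPM-and-PIN protector there, along with the numerical recovery password.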

Keep in mind also the following:

Bitlocker uses the FVEK (Full Volume Encryption Key) as the DEK (data encryption key) and the VMK (Volume Master Key) as the KEK (key encryption key).

The FVEK is stored, in encrypted form, in the volume metadata on the hard disk itself, while the VMK is protected by the TPM – a hardware chip on the motherboard – which checks the boot measurements held in its PCR registers before releasing the key.

In general, the above would fulfil PCI requirements. In our next article, we will write about how logical access to the encrypted file system can be separated from the native OS authentication mechanism.

Meantime, please drop us any enquiries at pcidss@pkfmalaysia.com if you need to know more about PCI-DSS or any compliance matters in IT. We are here to help!

The Criticality of Project Management

Project management has gone through somewhat of a bad rap over the years, especially for technology projects. It always seems like a luxury afforded by management, and whenever things go south in a tech project, the first stop for blame is always the project manager. It's a tough life. On one hand you need to appease the forces that hold the budget (the business), and on the other, you need to deal with a bunch of geeks talking binary stuff who, you know, would rather not have you in the room because you don't talk tech as much as they do.

We used to have a Project Management Office, receiving work from other large projects looking for business analysts, project leaders, programme managers and so on. It's not cheap keeping these guys, what with their PRINCE2 and PMP certifications and their training and hours. The problem was also that when a project ended, we basically had to go looking for other projects to take them on. It's an expensive affair unless you have a constant pipeline of internal or external projects to keep them busy. The thing was, we noticed project managers tend to stay project managers. You couldn't get them to go into tech audits, develop software or do compliance work. At least, not the ones we hired.

In the past, project managers were fairly agnostic in terms of technical capability. They have a set of domains they are good at (telco projects, compliance projects, migration projects), but overall the discipline more or less remains constant. Methodologies used by these managers include Lean, Scrum, Agile and the like, or simply the PMI/PMBOK guidelines, which some of our managers tended to gravitate to. But aside from this basic competency, there is a personality that project managers inherently need to have. Leadership, obviously; decision-making capability; the ability to stand strong when questioned; and the ability to communicate the project properly. The ability to pull people together, from technical staff to consultants to internal business, and yes, the inherent charisma one must have to become a successful project manager. He or she needn't be the most technical in the room, but they must be able to sniff out bullshit and weed it out. Time, budget and quality form the basic triangle of forces that needs to be balanced, and good project managers are aware of this.

Due to cost and lack of demand, we shuttered our PMO a few years back, but our guys still practice basic PM work in our compliance projects, and in some smaller companies we actually end up taking on the informal role of project leads. We wouldn't call ourselves project managers, because not everyone who calls themselves a project manager actually is one. For larger companies, however, we defer to the project manager in charge, and in our time we have had experience with some of the best in the business, and some of the absolute worst. The problem is that whether a PM is good or absolute garbage is so difficult to assess up front.

It MAKES A HUGE difference who you put in as project manager. The moment you assign a good or a garbage project manager, you spell either success or complete doom for your project.

For a compliance project like PCI-DSS, there are some specific traits a manager should have, as PCI is a fairly technical undertaking. Most PCI projects tend to drag on past 4 months or so; some even a year plus. It does require a fair bit of technical knowledge, persistence and goodwill to successfully manage the project. Here is some of what we have observed – having experienced the good ones and the bottom-of-the-barrel type alike, we can probably give a fair opinion of the points of success (between the good manager (GM) and the hapless manager (HM)):

a) Technical Capability

This is more of a trait than a skill.

The GMs know they don't need to be experts, but they also know they need to put their backs and their time into understanding the whole thing and absorbing the technical matter of it. They attend training sessions and they ask very good questions. The hapless managers go: "OK, everyone knows their spot here. Consultants, I will look to you to answer all PCI-related questions. I am here to gather information for all parties, so I want everyone to come to every meeting we are going to have moving forward."

The hapless one basically just comes in, fires off a few questions on project matters, then sidles down and constantly has a faraway look in their eyes when we start talking about project tasks and updates. Or they are glued to their phones or laptop, furiously typing with their brows knotted up. Their strategy is that everyone else will carry their own load, so they don't need to know anything technical because they are too busy with other, more important things, like buying food for their cats online. Occasionally they bark out orders here and there, but you can tell they know jackshit. After 4 or 5 sessions they are still clueless, and that's when they start losing their grasp of reality; if the consultants are not available, the whole project is stuck, and then they move into the stage of looking for people to blame for their ineptitude. Oh yeah. We have had plenty of these experiences for sure.

b) Communication

This seems a given: a good manager ensures everyone is on the ball and the scoreboard is known to all. They know how to manage downlines (the people who need to get things done), horizonlines (the peers managing other downlines) and uplines (the business or sponsors pressuring the project). This innate ability isn't bestowed on the hapless one. The hapless manager's basic modus operandi is to take whatever the team gives, and when questioned by uplines and peers, decide they don't know how to explain it and come back to the team to ask for more information on how to deal with the questions. There is a complete lack of awareness in these managers that they are unable to overcome. They cannot argue their points succinctly and always give in under pressure. Because of their lack of skill and understanding, they have no clue what positions to take, and often waste the entire project timeline going back and forth hopelessly like grass (or lalang) swaying in the wind.

c) Responsibility

One of the true strengths of character shows when things are not going right: the good ones take responsibility for the situation and face the issues head-on. The hapless ones find a way out, and find a way to blame others. To them it's always someone else's fault and never theirs. This stems from their utter lack of confidence in the project; the only way they can reverse the situation is by saying, "It's not my fault." They usually turn to the consultants, being external to the company, and seek to pin the blame on them. It's tough, but it is what it is. Most companies, given the choice between an external party and an internal person, will side with their own regardless of facts.

d) Time Management

The LLB (Look Like Busy) trait is a big problem with these hapless managers. Because of their lack of a), b) and c) above, they run around like headless chickens, pulled from one meeting to another, unable to resolve any issue properly. So their heads are constantly in their phones or laptops instead of properly leading the project – firefighting, or looking to assign blame. You can also tell when they are unable to manage meeting times. Many times we have received calls from project managers requesting either an immediate meeting at their office, or for us to come onsite within the next day, and they wail when we tell them we are overseas or assigned to other audits and can only do a call. Most don't understand that (unless we are properly paid and engaged) we are not their outsourced compliance unit, so they blame us for non-commitment. We are their consultants, and there is no service level that requires us to stay in the client's office at their beck and call. Unless, again, they pay us – but most don't pay for consultants to sit around waiting for inept project managers to scramble after ad-hoc meetings.

Because they are scrambling and blaming instead of working, these PMs now think they are utterly important because they are so busy. The fact is, because of their ineptitude, they are constantly being forced to take responsibility, communicate, or explain the technical side of the project – all of which they are unable to do. So it's one excruciating, meaningless, useless meeting after another. It's a horrible way to exist in a career, but we've seen it many times.

Once you solve a), b) and c), Time Management solves itself.

Bonus points: while this may not always be true, the way project managers approach meetings and projects says a lot. If a PMP- or PRINCE2-certified PM comes in, there is usually a methodology on the table, plus tools and actual project management software they use for reporting. They are able to standardise our reports to the point where everything goes straight to what they know their uplines need to know. The hapless PM comes in certified in nothing, with no knowledge of any tools, software or methodology, basically armed with an Excel sheet they took from another project manager, who took it from another project manager, who used it to make sandwiches. That's how senseless some of these methods and tools look sometimes, and we just glance at everyone across the table and everyone goes: "What is going on?"

In conclusion, never underestimate the importance of project managers, especially in a long-drawn project like PCI-DSS. While we have known some excellent ones in our time, we have also worked with yahoos who single-handedly managed to trainwreck projects. From this article it may seem our experience leans to the latter, but the opposite is true – we have had the privilege of working with some really excellent ones who have helped us get better over the years. They are absolutely precious resources in a project, trust me. It's just that when we do face one or two hapless PMs, it stands out a little more because we are so used to working with good ones!

Yes, we shuttered our PMO advisory a few years back, but we still recognise the need for great PMs who might be able to help us out in our projects. If there is any interest, drop us a note at avantedge@pkfmalaysia.com and we will get in touch with you.

PCI-DSS: Internal Audit Signoffs

After previously going through the nightmare of PCI-DSS certificates, described in considerable detail in our earlier write-up, we are now faced with a new phenomenon called the Internal Audit Signoff, which is further confusing our clients.

OK, first of all, let’s do a brief recap.

What are the 3 ways that PCI-DSS can be validated?

Answer:

  1. Full Report of Compliance (RoC) from QSA – Level 1 Service Providers, Level 1 Merchants
  2. Self Assessment Questionnaire (SAQ) signed off by QSA/ISA – Level 2 Merchants, (Maybe) Level 2 Service Providers
  3. Self Assessment Questionnaire (SAQ) signed off only by Merchant/Service provider – Level 3,4 Merchant, (Maybe) Level 2 Service providers

Those are the 3 endgames for PCI. And of course, there is the end scenario called Failure, or non-compliance. But that isn't cool, unless you are the type who is happy with Thanos snapping his fingers as the definitive end to all things.

Now, we all know item 1) requires the participation of a third-party QSA/ISA to sign off on the Report of Compliance and the Attestation of Compliance. ISA here stands for Internal Security Assessor – we won't touch on it this round, because it requires a whole new library of articles to discuss.

Item 2) likewise requires a third-party QSA/ISA to sign off on the Self Assessment Questionnaire and the Attestation of Compliance.

Item 3) is basically self-signed – not a lot of acquirers take this seriously, as it's basically anyone signing off on anything they feel like. There is no validation, and sometimes it's akin to the CxO sticking a finger to the tongue, putting it up in the air and going, "Yeah, that feels 'bout right. Let's sign off and say we have these controls!"

Let’s talk about item 1 and item 2.

In item 1, it's a gimme that the QSA needs to go onsite to the locations to do an audit. I have never heard of any QSA signing off on a full RoC without actually going onsite. Maybe when our tech reaches the point where the QSA can be holographically present in a location and see what's there without physically being there, like a Jedi Force Ghost, the PCI-SSC will accept that signoff. But by then, we could probably just tell the PCI-SSC that these aren't the companies they are looking for, and there would be no need to do PCI at all.

Until then, the question for item 2 is: for the QSA to sign off the SAQ, must they be onsite, or can they provide a remote signoff?

Now, if you ask a QSA what the difference is between 1) and 2), they would say: not much – except they don't have to waste their time writing the tome called the Report of Compliance (RoC) for level 2. Level 2 is basically a judgement made by the QSA, based on existing evidence, that what is stated in the SAQ is true, or at least as true as they can gain reasonable assurance of. The SAQ is not a document written by the QSA – although they can help – but in this case they are validating it. For Level 1, it's a different story. They have to write the RoC, and the work put into that reporting phase is surprisingly heavy. In comparison, it's probably like reviewing a first-term essay written by your senior students (SAQ validation) versus writing the Silmarillion, including the index (RoC).

However, for QSAs to conduct their audit and provide a fair opinion on the controls, they will still want to be onsite for option 2), much to the chagrin of many of my customers. Their argument here is: "Hey, I am level 2, why must you come onsite??" And again the crescendo grows that a Level 2 should have less to worry about than a Level 1 – another myth, as old as us telling our children not to sleep with wet hair or else they will wake up with a storming headache.

To get to the bottom of this, we went directly to the horse's mouth (in this case, the Mastercard SDP programme), whose response was: "In this scenario (describing item 2) the QSA has to be onsite. The QSA cannot simply review a RoC or SAQ without being at the location to validate that controls are actually in place."

To be fair, the above discussion applied to L2 Merchants (Level 2 Merchants) – those making more than 1 million card transactions per annum. Whether the QSA is willing to take the risk and perform an offsite review for a Level 3 or Level 4, I wouldn't know – that's up to the QSA and the card brands, I suppose. But to be absolutely safe, we would advise that all levels be treated as such: if you need a QSA to sign off, that QSA needs to be onsite to get it done. Or use the Jedi Force Ghost. Both are acceptable to the PCI-SSC, I am pretty sure.

So, as an illustration: we had a request from a company asking us to get the QSA to sign off remotely for their location, because "the other QSA did it for us and certified us" – the other QSA meaning someone they had engaged earlier.

OK – this certification term again. I am sure that did not happen – but many use the word certification for anything: an actual RoC, doing the SAQ with a QSA, signing off the SAQ themselves, getting an ASV scan, etc. Those are the typical scenarios in which we see this certification word being thrown around.

Digging further, we received a worksheet which was a typical 'scope' document (you know, where they ask what sort of merchant you are, what business, how many locations, devices, whether you store card data, etc.), and the instruction was to fill this up, send it over to the QSA, and the QSA would 'sign off' their PCI-DSS compliance, all within 2 weeks.

QSA certified within 2 weeks, remotely, with just the scope document and without validating any controls? No penetration testing or ASV? No risk assessment? No review of the information security policy? How?

We asked for a copy of the official signoff page (Section 3c of the AoC), but instead we got a QSA-signed report stating what was scoped in and what was scoped out of PCI-DSS. A typical scope document. It's a useful document, but it's not a document required by the PCI SSC. In fact, it serves no purpose other than to state what is in scope for PCI-DSS, based on the scope questionnaire (not the SAQ) provided by the QSA.

I am 100% sure the QSA meant well by this, but the problem is one of interpretation. We cannot expect clients to understand, right off the bat, PCI-DSS and all its seemingly malarkey documents – the AoC, the RoC, the 9 different SAQs, the ASV scans, the partridge in the pear tree, etc. So when we asked for an SAQ signed off by a QSA, of course the client fell back to any document signed off by a QSA. That's why we are not big fans of the practice where clients are provided with ASV 'certificates' just because they passed their ASV scans. They all think they are PCI certified because they have a QSA-signed document, which is the ASV 'certificate'! And the same goes here: this was simply a scope review document – almost like an internal audit report – and it does not make a company PCI compliant. In fact, it merely confirms that the company MUST be PCI compliant according to the scope set.

So the moral of this story is: not all QSA-signed documents are valid documents for PCI-DSS. ASV scans, while valid, don't make you PCI compliant – they are only a small percentage of what you must do. Internal audits or scope reviews like the one we saw, even signed off by the QSA, are not valid PCI-DSS validation documents. They do not make you PCI compliant. As the PCI SSC has explicitly stated before, the only valid PCI-SSC documentation is the AoC, the SAQ, the RoC and the ASV scan reports (not certificates with flowery borders and impressive cursive fonts in gold). Anything else is supplementary material used to support the compliance, not to validate it.

For more clarity on PCI, drop us an email at pcidss@pkfmalaysia.com. We will try to sort any issues you have, and yes, we are the company you are looking for.
