PKF Avant Edge is ISO27001:2022 certified!

We’re thrilled to share that PKF AVANT EDGE has successfully obtained ISO/IEC 27001:2022 certification, as of December 2024.

Over the past year we have committed ourselves to maintaining the high standards of information security practice surrounding the management of our backend compliance management solution, AwanEdge Manager. In many instances, the question we wanted to answer was: if we are to give consultation and advisory on cybersecurity and compliance, what can we do to walk the talk? How do we empathise with our customers when we ourselves have not undertaken the journey they go through?

Empathy in our engagements means we don’t simply come to clients with a checklist of requirements of WHAT to do, but rather a deep understanding of HOW they will go through it. We can say, we’ve been there. We know.

Embarking on our Information Security Management System (ISMS) journey, we faced many challenges along the way, especially in getting our backend application and security up to speed. Documentation and procedures could be done; implementing them to the degree required by ISO 27001's stringent standard was another question entirely. Many embark on their ISMS thinking that purchasing a deck of policies and procedures from a website will be sufficient to get them there.

Not even close.

After undergoing the arduous journey ourselves, we can attest to the amount of work ISMS-certified companies put in: the late nights getting everything in order, the changes we had to make to our procedures, the change management programme undertaken to embed these practices into our people, and the implementation of actual security measures and the documentation that follows it. This gives us the assurance, even to ourselves, that when we manage our customers' information, we are managing it right.

This certification represents our pledge to:

  • Protect client data with industry best practices and security measures
  • Maintain the standards of cybersecurity we advocate to our customers
  • Continuously enhance our security frameworks

Special thanks to our dedicated team who made this achievement possible!

Drop us an email at avantedge@pkfmalaysia.com if you want to know more about ISO27001 and how we can assist you in your compliance. Have a great 2025 ahead!


A Brief History of SOC and SAS

Taking a bit of a break from PCI articles, I want to touch a little on SOC, or what we know as Service Organization Controls. Well, it's actually now called System and Organization Controls, but old habits are hard to break and I keep calling it by its old name.

We're getting asked more and more about SOC, and I think it's probably a good time to have a detailed series on this particular set of engagements. This one comes with a menu of selections, and although not as crazy as PCI-DSS, which resembles the local mixed rice shop with all its choices, SOC does come in a fair number of flavors that you need to get used to.

Firstly, let's talk history.

When discussing SOC, we usually go back to the 1990s, when SAS (Statement on Auditing Standards) No. 70 was born. But perhaps we need to go a bit further back in time, just for nerd value.

The history of SOC attestation begins in the 1970s with the rise of computer-based accounting systems. Yes, the 1970s. Think VisiCalc, the great-grandfather of Excel. I absolutely adore the fonts and colors. I wish computer graphics had stayed like that forever.

So, for useless information, I always wondered: if there was a SAS 70, there had to be a SAS 1. And it turned out there was, back in 1972! It did not really mention much about computers, as back then everything was likely still running on steam engines and horses with buggies. Oh wait, no, that's 1872. Still, 1972 in computer terms would be considered the Stone Age. The immortal words of SAS 1 birthed the requirements we have today:

Since the definition and related basic concepts of accounting control are expressed in terms of objectives, they are independent of the method of data processing used; consequently, they apply equally to manual, mechanical, and electronic data processing systems. However, the organization and procedures required to accomplish those objectives may be influenced by the method of data processing used.

Section 320.33 of SAS No. 1

So back in the day, it wasn't cool to say "computers". You strutted around campus with a cigar talking about Electronic Data Processing. That's the equivalent of people discussing quantum physics today.

As it turned out, the mention of controls for EDP (which is what SOC is all about) was spelled out even further in SAS 3, published in December 1974.

Because the method of data processing used may influence the organization and procedures employed by an entity to accomplish the objectives of accounting control, it may also influence the procedures employed by an auditor in his study and evaluation of accounting control to determine the nature, timing, and extent of audit procedures to be applied in his examination of financial statement.

SAS 3

In fact, SAS 3 is a fascinating bed time read that’s probably the first document to actually list down standards of auditing and controls that we still use today. Segregation of Functions. Execution and recording of transactions. Access to Assets. Reconciliation. Review of Systems. Testing of Compliance. Program change management.

In SAS 3, there is also a statement that will be borne throughout its successors and posterity:

The auditor may conclude that there are weaknesses in accounting control procedures in the EDP portions of the application or applications sufficient to preclude his reliance on such procedures. In that event, he would discontinue his review of those EDP accounting control procedures and forgo performing compliance tests related to those procedures; he would not be able to rely on those EDP accounting control procedures.

SAS 3 Section 26.b

This word, “rely” or “reliant” comes up quite a lot even today and we will be exploring it throughout this SOC series.

But we do need to move on; as much as I love digging up technology fossils, not everyone appreciates an occasional nerd-out like we do. Right. So, as businesses increasingly relied on these systems, there grew a need to ensure their reliability and security, which led to the development of early computer auditing standards. Back in those days, it wasn't just the accountants getting a bit huffy over EDPs and whatnot; the U.S. National Bureau of Standards decided that the AICPA shouldn't be the only one weighing in on these EDPs, and issued Publication 38, "Guidelines for Documentation of Computer Programs and Automated Data Systems", an absolute page-turner, written in a beautiful typewriter font and printed on gorgeously time-yellowed pages.

Focusing back on the venerable SAS standards: other publications in the 1980s (SAS 48, 55 and 60) threw their names into the hat to address the need for controls over service organizations. They talked and dabbled in information technology and controls, communications, internal audit and all that, but they never really fit the bill until the greatest of them all arrived: SAS 70, in 1992.

Listen, when I started out in the consulting business in 2010, there were still residues of SAS 70 being talked about. That is how dominant this standard was when it came to stamping its mark on the standards landscape. It's worth noting that alongside these auditing standards, the information security field was developing its own frameworks and best practices during this period; we can talk about PCI, ISO, WebTrust and the rest another time. However, SAS 70 was particularly significant in bridging the gap between traditional financial auditing and the assessment of IT controls in service organizations. It was like a magical bridge between the number guys dressed in suits in windowless offices and the bearded guy in a T-shirt sitting in the basement in front of his Unix box.

Aside from its now famous use of the term "service organizations", SAS 70 provided guidance for auditing the controls of a service organization. It defined two types of reports:

  • Type I: Described the service organization's controls at a specific point in time.
  • Type II: Included the description of controls and tested their operating effectiveness over a minimum six-month period.

So essentially, even back then, there was already a requirement for a Type II to cover six months at minimum. In their words: "Testing should be applied to controls in effect throughout the period covered by the report. To be useful to user auditors, the report should ordinarily cover a minimum reporting period of six months."

The reason why SAS 70 is now better known than its predecessors and successors comes down to:

Standardization: It provided a uniform approach to assessing and reporting on controls at service organizations. A lot of its sections are still being used today as guideposts for standards and auditing.

Third-party assurance: It allowed service organizations to provide assurance to their clients about their control environment through a single report, reducing redundant audits.

Relevance to IT services: As outsourcing and cloud services grew, SAS 70 became crucial for evaluating IT service providers.

Broad adoption: It was widely recognized across industries and even internationally.

SAS 70 was the staple for auditing service organizations for many years after that until, like all good things, it came to an end. Its younger, more hip replacement, SSAE 16 (Statement on Standards for Attestation Engagements), came in 2011. This was where SOC 1 and SOC 2 were introduced. We call these "report categories" or "report kinds" because we can't call them report types: "Type" was already taken, with Type I and Type II denoting design at a point in time versus operating effectiveness over a period. Like all good accountants, they simply stuck to 1 and 2 to differentiate, so we ended up with SOC 1 Type I, SOC 1 Type II, SOC 2 Type I and SOC 2 Type II. All very symmetrical, see.

SSAE 16 reports are what we now call SOC 1 reports, which focus specifically on controls relevant to financial reporting. They call this "ICFR", internal control over financial reporting. Two new report "kinds" also need to be noted:

  • SOC 2: Addressing controls relevant to security, availability, processing integrity, confidentiality, and privacy. (ISAE 3000 instead of ISAE 3402 or AT Section 101 instead of SSAE16)
  • SOC 3: A simplified version of SOC 2 for general public use.

Let's not even get started on SSAE's distant international twin, the International Standard on Assurance Engagements (ISAE) 3402, which is what is actually used here in Malaysia. Again, don't get too confused: the SSAE and the ISAE here are specifically for SOC 1 reports, not SOC 2. SOC 1.

None of this actually matters much because, eventually, SSAE 16 was succeeded by SSAE 18 in 2017 and then, in 2020, by SSAE 21. So essentially, to simplify: SOC 1 and SOC 2 reports are simply another way of saying these are reports based on the SSAE or ISAE standards. So technically, if you want to be really pedantic, you can throw a fit when people say they want to "certify" to "SOC standards"! Because there is no such thing as a SOC standard, and it's not "certification", it's "attestation"!

But that’s another story for another article as I’ve hit my word count and surpassed it. If you want to know more about how we service our customers in SOC and don’t mind a bit of history, let us know at avantedge@pkfmalaysia.com. We’ll definitely get back to you!

PCI-DSS Scope Understanding: Encryption

Scoping is one of the first and main things we do the moment we get engaged, after the customary celebratory drinks. In all projects, scope is key, more so in auditing and consulting and in standards compliance, be it PCI, ISMS, the NIST 800s, CSA or all the other compliances we end up doing. Without scope there is nothing. Or there is everything, if you are looking at it from the customer's viewpoint. If boundaries are not set, then everything is in open season. Everything needs to be tested, prodded, penetrated, reviewed. While that is all well and good, projects are bounded by cost, time and quality. Scope determines this.

In PCI, scoping became such a tremendously huge issue that the council deemed it necessary to publish an entire supplementary document called "Guidance for PCI DSS Scoping and Network Segmentation" back in December 2016. Now, here is some trivia for you, from someone who has been doing PCI for donkey's years. Did you know that this wasn't even the first attempt at sorting out scope for PCI-DSS?

Back in 2012, a group called the Open Scoping Framework Group published the extremely useful Open PCI-DSS Scoping Toolkit, which we used for many years as guidance before the council amalgamated its information into formal documentation. This was our go-to bible, and a shout-out to those brilliant folks at http://www.itrevolution.com for providing it; many of its original concepts were retained when the PCI council released its formal documentation on scope, and eventually within the standard itself. Yes, scoping finally appears in the first few pages of the v4.0 and v4.0.1 iterations of PCI-DSS, so that people will not get angry anymore.

Or will they?

We're seeing a phenomenon more and more in the industry, what we term scope creep. OK, fine, that's not our word; it's been in existence since the fall of Adam. Anyway, in the PCI context, some of our customers come back to us and state that their consultants, or even QSAs, insist on scope being included, for NO REASON except that it is MANDATORY for PCI-DSS. Now, I don't want to say we have no skin in the game, but this is where I often end up arguing even with the QSAs we partner with. I tell them, "Look, our first job here is to help our customers. We minimize or optimize scope for them, reducing it to the most consumable portion possible, and if they want to do anything extra, let them decide on it. We're not here to upsell penetration testing. Or segmentation testing. Or risk assessment. Or ASV. Or policies and procedures. Or SIEM. Or SOC. Or logging. Or a basket of guavas and durians." Dang it, we are here to do one thing: get you PCI compliant and move on with our lives.

The trend we see now is that everything seems to get piled onto our clients: do this, do that. In the words of one extremely frustrated customer: "Every time we talk to this *** (name redacted), it seems they are trying to sell us something and get something out of us, like we are some kind of golden goose."

Now, obviously, if you are a QSA company doing that: STOP IT. Stop it. It's not only naughty and brings disrepute to your brethren in the industry, it's frowned upon and considered against your QSA code! Look at the article here.

Now, PCI scoping itself deserves a whole new series of articles, but I just want to zoom in on a particular scoping scenario that we recently encountered, in a merchant environment.

Many of our merchants have either or both of these in scope: card terminals processing card-present transactions at the stores, and an e-commerce site. One particular customer has card terminal POIs (points of interaction), traditionally known as EDCs (Electronic Data Capture terminals). Basically, this is where the customer comes in, takes out the physical card and dips or waves it at the device at the store. So yes, PCI is required for the merchant for the very fact that the stores have these devices interacting with cards. Now, what happens after this?

Most EDCs have SIM-based connectivity now, and the traffic goes straight to the acquirer using ISO 8583 messages. These are already encrypted on the terminal itself and route through the telco network to the bank/acquirer for further processing. The other way is through the store network, routing back to the headquarters and then out to the acquirer. There are reasons why this happens, of course; one is that aggregating the stores through HQ allows more visibility of transactions and analysis of traffic. The thing here is that the messages are encrypted by the terminals, and the merchants do not have any access to the keys for decryption. This is important.

Now, what happened was that some QSAs have taken it into their heads that because the traffic is routed through the HQ environment, the HQ gets pulled into scope. And therefore this particular traffic must be properly segmented, and then segmentation PT (penetration testing) needs to be performed. This could potentially lead to a lot of issues, especially if your HQ environment is populated with a lot of different segments: it could mean multiple rounds of tiring, tedious testing by the merchant team, or it could constitute a profitable service done by your 'service providers' (again, if these service providers happen to be your QSA, you can see where the questions of upselling and independence come from).

Now here's the crux. We hear these merchants telling us that their consultant or QSA says it's mandatory for segmentation PT to occur in this HQ environment. The reasoning is that there is card data flowing through it. Regardless of whether it is encrypted or not, as long as there is card data, IT IS IN SCOPE. Segmentation PT MUST BE DONE.

But. Is it though?

The whole point of segmentation PT is that it demarcates out-of-scope from in-scope. Insisting on having segmentation PT done is conceding that there is an in-scope segment or environment in the HQ. The smug QSA nods as he sagely says, "Well, as QSAs, we are the judge, jury and executioner. I say there is an in-scope, regardless of encryption."

So, we look at the PCI SSC and the standards, and let’s see. QSAs will point to page 14 of PCI-DSS v4.0 standards under “Encrypted Cardholder Data and Impact on PCI DSS Scope”.

Encryption alone is generally insufficient to render the cardholder data out of scope for PCI DSS and does not remove the need for PCI DSS in that environment.

PCI-DSS v4.0.1 by a SMILING QSA RUBBING PALMS TOGETHER

Let's read further into this wonderful excerpt:

The entity's environment is still in scope for PCI DSS due to the presence of cardholder data. For example, for a merchant card-present environment, there is physical access to the payment cards to complete a transaction and there may also be paper reports or receipts with cardholder data. Similarly, in merchant card-not-present environments, such as mail-order/telephone-order and e-commerce, payment card details are provided via channels that need to be evaluated and protected according to PCI DSS.

So far, correct. We agree with this. Exactly as mentioned, PCI is in scope. The question here is: will the HQ get pulled into scope just for transmitting encrypted card data from the POIs?

Let's look at what causes an environment handling encrypted card data to be in scope (reading further down the paragraph):

a) Systems performing encryption and/or decryption of cardholder data, and systems performing key management functions,

b) Encrypted cardholder data that is not isolated from the encryption and decryption and key management processes,

c) Encrypted cardholder data that is present on a system or media that also contains the decryption key,

d) Encrypted cardholder data that is present in the same environment as the decryption key,

e) Encrypted cardholder data that is accessible to an entity that also has access to the decryption key.

So let's look at the HQ scope against those five criteria for encrypted card data being in scope. There is no encryption or decryption process performed there. The encrypted cardholder data is isolated from the key management processes. And the merchant has no access to, or anything to do with, the decryption key. Not one criterion is met.
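If it helps to see it mechanically, here is a minimal sketch in Python; this is purely our own illustration, not any official SSC tooling, and the flag names are shorthand we made up for this example, not terms from the standard.

    # Encode criteria (a)-(e) from "Encrypted Cardholder Data and Impact
    # on PCI DSS Scope"; meeting any ONE of them pulls the environment in.
    def encrypted_chd_in_scope(env: dict) -> bool:
        criteria = [
            env["performs_crypto_or_key_management"],        # (a)
            not env["isolated_from_crypto_and_key_mgmt"],    # (b)
            env["key_on_same_system_or_media"],              # (c)
            env["key_in_same_environment"],                  # (d)
            env["has_access_to_decryption_key"],             # (e)
        ]
        return any(criteria)

    # The HQ in our merchant scenario: a pure pass-through of encrypted traffic.
    hq = {
        "performs_crypto_or_key_management": False,
        "isolated_from_crypto_and_key_mgmt": True,
        "key_on_same_system_or_media": False,
        "key_in_same_environment": False,
        "has_access_to_decryption_key": False,
    }
    print(encrypted_chd_in_scope(hq))   # False: no criterion is met

Every flag for the HQ comes back negative, which is the whole point of the argument.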

So now you see the drift. Moving down the paragraph, we find it noted that when an entity receives and/or stores only data encrypted by another entity, and does not have the ability to decrypt it, that entity may be able to consider the encrypted data out of scope if certain conditions are met. This is because responsibility for the data generally remains with the entity, or entities, with the ability to decrypt the data or impact its security.

In other words: Encrypted cardholder data (CHD) is out of scope if the entity being assessed for PCI cannot decrypt the encrypted card data.

So now, back to the question: if this is so, then why does the merchant still need PCI? Well, because it's already provided for above: "For example, for a merchant card-present environment, there is physical access to the payment cards to complete a transaction and there may also be paper reports or receipts with cardholder data."

So, therefore, stores are always in scope. The question is whether the HQ, or any other area, gets pulled into scope simply for transmitting encrypted CHD as a pass-through to the acquirer. In many ways, this is similar to why PCI considers telco lines out of scope: they simply provide the highway on which all these encrypted messages travel.

Now, of course, the QSA is right about one thing. They do have the final say, because they can still insist on customers doing the segmentation PT even if it's not needed by the standard. They can impose their own risk-based requirements. They can insist the client do a full application pentest, or ASV scans over all IPs not related to PCI. They can insist on the client getting a pink elephant to dance in a tutu in order to pass PCI. It's up to them. But guess what?

It's also up to the customer to change, or to get another opinion on this. There are plenty of QSAs about. And once more, not all QSAs are created equal, as explored in our article here, where we debunk common myths: whether having a local QSA makes any difference (it doesn't), whether all QSAs interpret PCI the same way (they don't), and how big a role independence and conflict of interest should play, especially in scoping and working in the best interest of the customer rather than peddling services.

So, if you want to have a go with us, or at least just get an opinion on your PCI scope, drop a message to pcidss@pkfmalaysia.com and we will get back to you and sort out your scoping questions!


PCI-DSS V4.0 Deep Dive 2: Keyed Cryptographic Hashing

As we delve into the intricacies of the PCI-DSS v4.0 standard, it’s crucial to understand the significance of each requirement and its impact on safeguarding sensitive cardholder data. For this article, we’ll be focusing on Requirement 3.5.1.1, which revolves around the use of keyed cryptographic hashing for protecting Primary Account Numbers (PANs).


Requirement 3.5.1.1 states that “Hashes used to render PAN unreadable (per the first bullet of Requirement 3.5.1) are keyed cryptographic hashes of the entire PAN, with associated key-management processes and procedures in accordance with Requirements 3.6 and 3.7.”

Firstly, like everything else, from interpreting history to interpreting your wife's nuanced tone when she says "It's fine.", everything needs a little contextualization. Luckily for us, PCI has provided some explanation to jumpstart the discussion.

A hashing function that incorporates a randomly generated secret key to provide brute force attack resistance and secret authentication integrity.
Appropriate keyed cryptographic hashing algorithms include but are not limited to: HMAC, CMAC, and GMAC, with an effective cryptographic strength of at least 128-bits (NIST SP 800-131Ar2).
Refer to the following for more information about HMAC, CMAC, and GMAC, respectively: NIST SP 800-107r1, NIST SP 800-38B, and NIST SP 800-38D.
See NIST SP 800-107 (Revision 1): Recommendation for Applications Using Approved Hash Algorithms §5.3.

Definition of Keyed Cryptographic Hash, PCI Glossary

So that’s a lot of MACs.

Let's break it down. The requirement has three areas of importance, which we walk through below.

Keyed Cryptographic Hashes

The requirement mandates the use of keyed cryptographic hashes to render PANs unreadable. A keyed hash basically takes the entire PAN and combines it with a secret key to produce a unique, fixed-size output that is practically impossible to reverse-engineer without knowing the key. This way, even if an attacker gains access to the hashed PAN, they won’t be able to derive the original PAN without the secret key.
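As a quick illustration, here is a minimal Python sketch using only the standard library; the key is generated inline purely for demonstration, whereas in real life it would be generated and stored under the key management controls of Requirements 3.6 and 3.7.

    import hashlib
    import hmac
    import secrets

    # Demo only: a production hashing key lives in a protected key store,
    # not inline in application code.
    key = secrets.token_bytes(32)   # 256-bit secret key

    def hash_pan(pan: str) -> str:
        # Keyed cryptographic hash (HMAC-SHA256) over the ENTIRE PAN.
        return hmac.new(key, pan.encode("utf-8"), hashlib.sha256).hexdigest()

    print(hash_pan("4111111111111111"))

Without the key, an attacker holding the digest cannot even test guesses offline, which is exactly the property a plain (or merely salted) hash lacks.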

In v3.2.1, this was not stated, and therefore the assumption was that a simple hash was sufficient. Let's listen to what the old, obsolete standard says: "It is recommended, but not currently a requirement, that an additional, random input value be added to the cardholder data prior to hashing to reduce the feasibility of an attacker comparing the data against (and deriving the PAN from) tables of precomputed hash values."

That aged like milk. Basically, they are talking about salt. The goal of salting is to protect against dictionary attacks and attacks using rainbow tables. If no secret salt is used, or if the salt has not been sufficiently protected, the corresponding data, for example the PAN, can be recovered from an attacker's previously calculated dictionaries or rainbow tables. So, in short, salts are good for the world. Except for Salt Bae. He's no good.

Salting, paired with deliberately slow hashing schemes, is meant to make brute force take a few billion years to succeed. How different is salting from keyed hashes? For one, salts are generally known. Sometimes they are even stored together with the hash in the database. So if, let's say, that database is compromised, the salt is known too. I suppose you could say, "Live by the salt, die by the salt." Ha!
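To see why "live by the salt, die by the salt" holds, here is a small sketch of the conventional salted approach, where the salt typically sits right next to the digest it protects:

    import hashlib
    import secrets

    def salted_hash(pan: str) -> tuple[str, str]:
        salt = secrets.token_hex(16)
        digest = hashlib.sha256((salt + pan).encode("utf-8")).hexdigest()
        return salt, digest   # the salt is commonly stored beside the hash

    salt, digest = salted_hash("4111111111111111")
    # If the database leaks, the attacker gets BOTH values and can brute
    # force the (highly structured) PAN space offline at their leisure.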

Keyed cryptographic hashes mean there is a secret key. And before you go and jump off the building, there are already existing algorithms out there (the MAC brothers) that have previously been used, primarily to my knowledge, for message integrity checks. In fact, MAC here means Message Authentication Code, used to check integrity and authenticity. Unlike the salt, the key ISN'T known, or at least isn't left unprotected. So even if the database is compromised, attackers can't get the key, because it's protected (through encryption, explained later on).

Now, why the change from normal hash, with recommended Salt, to hash with secret key?

The problem is with card numbers. Those dang card numbers, which are so different from, let's say, passwords. Unlike passwords, which can be truly random, credit card numbers are NOT random. They are unique, but they are far from random. You see, a credit card number consists of:

  1. The bank identification number (BIN) or issuer identification number (IIN): the first six digits identify the issuer. You can look them up at https://www.bindb.com/bin-list
  2. The account number: The number between the BIN and the check digit (the last digit) is six to nine digits long and is used to identify the individual account number.
  3. The check digit: The last digit, added to validate the authenticity of the credit card number. This is by using the Luhn algorithm.

The thing about Luhn is that it is used to validate primary account numbers. I am not going into details, as other people have done so and will do a much better job of explaining it. But the short of it is this: if I have the BIN, the Luhn check digit and, let's say, three more numbers of the account number, then you get the picture. The Luhn digit is the result of the Luhn algorithm applied over all the preceding digits, and you already know it if the PAN is merely truncated! Under standard truncation you would already know ten digits (the first six and the last four, the last of those being the Luhn result). It's still going to take effort, but the predictable way credit cards are structured leaves far fewer digits to be guessed. As scary as it may sound, hashes can possibly be reversed. Well, not 'reversed' per se, but 'reconstructed' through guesswork and brute force.
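To put numbers on that, here is a small sketch: a standard Luhn check, then a count of how many 16-digit PANs even remain possible when the first six and last four digits are already known from a truncated value.

    def luhn_valid(pan: str) -> bool:
        # Double every second digit from the right, subtract 9 if the
        # result exceeds 9, then the total must be divisible by 10.
        total = 0
        for i, ch in enumerate(reversed(pan)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    # Truncated PAN 411111******1111: only the six middle digits are
    # unknown, i.e. 10**6 candidates before applying Luhn.
    survivors = [m for m in range(10**6)
                 if luhn_valid(f"411111{m:06d}1111")]
    print(len(survivors))   # 100000: the known check digit prunes 90%

A hundred thousand candidates against an unkeyed hash is trivial work for any modern machine, which is precisely why v4.0 wants a secret key in the mix.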

While salt adds complexity and makes attacks slower, salts aren't secret, remember, so eventually the hash can still be broken. A key, however, is secret. Remember data encryption keys and key-encrypting keys? Well, hashing now requires the same treatment as encryption, in the sense that these hashing keys need to be encrypted too.
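Here is a minimal sketch of that treatment, assuming the third-party cryptography package is available for the key wrapping; the variable names are illustrative, not prescriptive.

    # pip install cryptography
    import secrets
    from cryptography.fernet import Fernet

    kek = Fernet.generate_key()            # key-encrypting key, held separately
    hashing_key = secrets.token_bytes(32)  # the secret key used for keyed hashing

    wrapped = Fernet(kek).encrypt(hashing_key)   # store THIS near the application
    assert Fernet(kek).decrypt(wrapped) == hashing_key

Compromising the database is no longer enough; the separately held KEK must also fall before the hashing key, and hence the PANs, are exposed.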

The other important bit is that the requirement emphasizes the entire PAN must be hashed. This is important because hashing only a portion of the PAN would still leave some sensitive information exposed. By hashing the entire PAN, we ensure that no part of it remains in plain text, adding an extra layer of protection.

Lastly, the requirement stresses the importance of proper key management processes and procedures, as outlined in Requirements 3.6 and 3.7. This means that the secret keys used for hashing must be securely generated, stored, and managed throughout their lifecycle. Weak key management can undermine the entire purpose of keyed hashing, so it’s crucial to get this right.

What does this mean?

It means, like a lot of new requirements in v4.0: more work.

It is, in its heart, a concept of defense-in-depth. Requirement 3.5.1.1 serves as a secondary line of defense against unauthorized access to stored PANs. Even if an attacker manages to exploit a vulnerability or misconfiguration in an entity’s primary access control system and accesses the database, the keyed cryptographic hashing of PANs acts as an additional barrier, preventing the attacker from obtaining the actual PANs, unless they manage to compromise the key.

By implementing a secondary, independent control system for managing cryptographic keys and decryption processes, entities can ensure that a failure in the primary access control system doesn't automatically lead to a breach of PAN confidentiality. For instance, instead of storing PANs in plain text, a website employs a keyed hashing algorithm, such as HMAC-SHA256, to render the PANs unreadable. Each PAN is combined with a randomly generated secret key before being hashed, and only the resulting hash values are stored in the website's database.
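For completeness, here is a short sketch of how such a site could then use the stored digests, say to match a returning card, without ever keeping the PAN itself; this is illustrative and assumes the key handling sketched earlier.

    import hashlib
    import hmac

    def pan_matches(candidate_pan: str, stored_digest: str, key: bytes) -> bool:
        # Recompute the keyed hash and compare in constant time, so the
        # comparison itself doesn't leak information through timing.
        digest = hmac.new(key, candidate_pan.encode("utf-8"),
                          hashlib.sha256).hexdigest()
        return hmac.compare_digest(digest, stored_digest)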

Final note: It’s important to note that Requirement 3.5.1.1 applies to all instances of stored PANs, whether in primary storage (databases, flat files) or non-primary storage (backups, audit logs, exception logs). This means that entities must ensure that keyed cryptographic hashing is implemented consistently across all storage locations, leaving no room for gaps in protection.

However, the requirement does make an exception for temporary files containing cleartext PANs during the encryption and decryption process. This is a practical consideration, as it allows entities to temporarily work with unencrypted PANs while performing necessary operations, as long as the temporary files are properly secured and promptly removed after use.

If you have any questions or need assistance in navigating the complexities of PCI-DSS v4.0, don’t hesitate to reach out to us at avantedge@pkfmalaysia.com. Our team of experienced professionals is here to help you every step of the way, ensuring that your organization stays secure, compliant, and ahead of the curve in the ever-evolving landscape of data security.

Why QSAs Matter in your PCI-DSS

The question we usually get asked, aside from why we prefer not to be a QSA (which, although the answer is fairly dated and needs revising, has been answered), despite us doing PCI-DSS in Malaysia since 2012, is why we hardly vary the QSAs we work with in our PCI-DSS projects. Aren't all QSAs the same? Aren't all created equal?

Like everything in life, there is a basis for variation. We are not here to say which is better or which is worse. It's not in our culture to constantly fire off a barrage of negative statements about other companies and organizations, even with basis, because that's not how we are wired.

That being said, we do have an internal list of companies (and QSAs) that we are perhaps less inclined towards. This comes from either working with them firsthand, or mainly from seeing some of the results of their work. Some of the things we see are quite shocking. Additionally, we have also had clients who suffered under their so-called advisory and asked us to step in to help.

So, to the query of which QSA you should spend the next six (or more) months with for your PCI project: let's put a few options forward in a more quantifiable manner.

a) Experience

A question we get asked is why we generally don't just work with local providers or assessors closer to home. It's not because they are worse or better. It's like comparing cars. They all have their pros and cons, and we do not slag organizations off, even those we would rather avoid. But one thing I tell customers is: let's look at experience first.

As of writing, there are 3,065 PCI-DSS projects listed on the Visa Provider List at https://www.visa.com/splisting/searchGrsp.do. The top 10 assessors on this list are as follows:

Assessor              Projects
VikingCloud                208
Foregenix                  142
ControlCase                113
SECTEC                     103
Compliance Control          96
Coalfire Systems            86
SISA                        83
A-LIGN                      80
CIPHER                      71
atsec                       71
Total                    1,053

The top 10 assessors make up almost 35% of the projects listed. Those are heavy hitters. Suffice to say, a lot of projects remain unlisted: Level 2 service providers, SAQ projects, merchant projects and so on. So the actual number of projects (including unlisted ones) for each assessor is probably a lot higher. To put it in context, here is how listed projects are distributed across assessors:

Projects    Number of Assessors
1           54
2           35
3           25
4           22
5           18
6           13
7           13
8            7
9           10
10           3

There are 200 assessors out there with 10 or fewer projects listed. In their defence, some of these are actually the same company under another name, so this overview isn't 100% accurate. Still, out of the 262 assessors on that list doing PCI, 77% have 10 or fewer projects, which shows it's not that easy to get that number to 100 or more. Again, I will reiterate: quantity doesn't automatically mean better. Some may argue that the more projects you have, the more quality suffers. That is a good point. And I have experience with some overseas QSAs in that smaller-project group that I would gladly hand a project to and have a beer with. They are really good, extremely passionate about PCI-DSS, and I've learnt truckloads from them. We are just saying this is one starting indicator you may want to jump off from, because most service providers open their pitch with exactly this: how many customers 'trust' them.

b) Location

This is slightly misleading in the sense that the query we ask is: do we need a QSA who is local? Local here means they have an office in the country in which they are serving the customer. This argument, while it initially seems to hold some credence, is actually self-defeating. And a bit strange, when most organisations now prefer to be known as regional or global rather than touting themselves as just local players. If they use this as a plus point, then by going after their overseas customers they are technically disputing the very argument they are advocating. Most QSAs won't use this track, because they know a QSA company needs to operate at least regionally, or, if you want to be focused on one country, then fine, take the USA. The reason why the service provider list does not have a breakdown of all 195 countries (or, if you are a Malaysian minister, 500 countries) of Earth is that being a QSA is tough work. The breakdown is by region, and the only countries listed there are the US and Canada, because the US makes up almost 35% of the listed projects.

Think about the last time you dealt with a QSA. Did you have access to that QSA through messages or calls? When you called for a meeting, did that QSA come as required? Did that QSA respond quickly? Was that QSA able to answer your queries, technical or otherwise, related to your compliance in a clear and consistent manner? Did they insist on you paying more for advisory, or delay your project? Did they upsell services that were unplanned and unknown? Think about the positive and negative experiences you had.

Those are more pertinent queries than deciding whether someone is 'local'. That point is actually moot, because in almost all projects the bulk of the work will be handled by a consultant. QSAs by definition should be global or regional anyway. In the economics of being a QSA (explored in another article), a QSA operating in a single country would probably not be cost-sustainable, especially in a country where the currency is worth slightly more than a turnip. So the assessor will still have to fly to other places anyway. Therefore, it doesn't really matter whether an assessor is local, regional or global; the question is how accessible and communicative they are.

In that sense, we strike a balance. We are local to Malaysia, or any other country we operate in (we have a presence in 150 countries as a global network), and we provide the independent, technical advisory needed of consultants. We are not QSAs, so we don't get pulled helter-skelter into other PCI projects all over the place. We are all certified, in more certifications and product certs than I can throw a stone at. We are operational people, all with more than a decade of experience, so you won't have a wide-eyed associate with a checklist coming to you. We also have non-IT services: we are also tax advisors, corporate financiers, risk managers, compliance directors. We aren't just an IT company aiming to push IT services or cybersecurity solutions at you; our DNA is in advisory and consulting.

Enough of blowing our own horn, then. Which leads me to item (c):

c) Reference

It's important not to just look at a list of customers. I have a client who gets annoyed seeing a presentation with a list of logos without any context of the work. Some may list large companies or merchants as their so-called 'customers' but without any context. You know what? Fine. I can list all the telcos, up to twenty PLCs, more than half a dozen oil and gas companies and more banks than I can swing a bat at, just because I have given them 'training'. Come on.

Look past that veneer and look at actual references in the industry. Is there a positive experience? Is there someone out there willing to endorse their goodwill? Are there any bad experiences? Another question I get asked is whether the assessor has been involved in a breach before. This almost needs a new article to explore. Look, we all know PCI doesn't guarantee non-breach. It's not a panacea for world hunger. It's more important to note the outcome of the investigation or forensics before we go witch-hunting. It's meaningless to state, for instance, that the top QSAs would never experience any breaches in their existence. For sure, some of them will have to deal with this one way or another, and the point is to see whether there was indeed an oversight. If there's none, then the breach could be down to a myriad of reasons outside of PCI-DSS control. Remember: assessors are not operational. They enter an audit in good faith. Witch-hunting a QSA just because of a breach involvement, without context or the final conclusion, is a narrow-minded, irresponsible approach to assessing capability (or culpability). If the QSA were truly to blame, wouldn't they be put in remediation by the Council? There you go.

One thing you will never catch us doing is giving an opinion on things we don't have the full context of. It's simply not something we are comfortable with. If we see some issue with a report from another QSA, even if it looks strange, the reply is always: what is the context of this, and there must be a reason why it was interpreted as such. That gives us a more balanced view, rather than just mouthing off without understanding. As the proverb says: "The more talk, the less truth; the wise measure their words."

d) Cost and Resources

Most PCI projects feel the conflicting pull of cost and resources. A QSA with plenty of resources and consultants is very useful; the last thing you want is a QSA not responding and then, after three months, rushing you for evidence. Cost still plays a huge role in PCI-DSS, and it's not as if things are getting cheaper. With version 4.0 there is more work for QSAs to do, and they will likely pass some of these costs down to customers. This remains a very subjective item in the filtering exercise: a QSA charging you a liver and a kidney for PCI isn't ideal, but if a QSA comes in at a price that resembles a popsicle in a flea market, I would likely stay away as well. We all know how much effort PCI is. We don't want a situation where, halfway through, a bulk of invisible costs comes pouring in like the army of Mordor, or else things will not get done. If you want to build your house, have most of the material costs sorted out. If there is a variation order (VO), don't let it cross a percentage threshold of your initial cost. Having a QSA who understands this and is willing to negotiate is important. Even if the cost is not lowered (because, to be fair, QSA work is not trivial), then negotiate for future services or better payment terms, anything to meet in the middle.

e) Stamp of Trust

Are there any stamps of trust for QSAs?

No, there isn't. At least not officially. However, I would like to highlight that there is this thing called the Global Executive Assessor Roundtable (GEAR), found here: https://www.pcisecuritystandards.org/about_us/press_releases/pci-security-standards-council-announces-2022-2024-global-executive-assessor-roundtable/

There are 28 QSAs in the GEAR currently, with the purpose below:

The Roundtable is an Executive Committee level advisory board comprised of senior executives from PCI SSC assessor companies. The 2022-2024 GEAR consists of 28 organizations, with the Roundtable term running 1 September 2022 – 31 August 2024.

"The Council depends on the input of a wide range of stakeholders to provide PCI SSC with valuable insights," said PCI SSC Executive Director Lance J. Johnson. "With the release of version 4.0 of our PCI Data Security Standard this year, it is even more important to have active representation from every corner of the globe from an assessor perspective. Assessors are critical in assisting the Council with our effort to improve and evolve payment data security."

PCI COUNCIL

The QSA we often work with, ControlCase, is one of them and has been reappointed, which tells us that, in terms of reference, the Council considers their input 'valuable insights'. This is one of the lists we look at, especially when asked about QSAs: are they involved in GEAR?

IN SUMMARY

Like choosing a car, there are really no guarantees that your experience will be immaculate when it comes to PCI-DSS considerations. The above are just possible filters you can apply when choosing your next QSA partner to embark on your journey with. Or you can roll a die, or consult the gods. The disclaimer, of course, is that we have not worked with ALL QSAs yet, so this remains a rudimentary filter when you are thinking of a QSA. Find a QSA that can actually do the hard yards and has proven themselves with project references and quality, global reach and experience, positive customer feedback and respect from the industry, and, finally, is seen as an invaluable assistant to the almighty PCI Council themselves. In our personal opinion, these metrics are a start to springboard from. Because anyone can give a nice presentation, dress in a suit or talk negatively about other companies; but what are their numbers, references and contributions to the PCI Council?

Drop us an email at pcidss@pkfmalaysia.com to learn more about PCI and other compliances like ISMS or ITSM or SOC!
