
The Art of Compensating Controls

In our many advisory engagements over the years with companies going through PCI-DSS, one recurring challenge is this little thing called compensating controls.

It’s very much a PCI term, same as ‘segmentation penetration testing’. It’s what you use when you can’t address the actual control in PCI-DSS for some reason and need to put other controls in place to address the ‘spirit’ of the control. The spirit means: why did PCI-DSS put that control in, in the first place?

Immediately, this becomes an outlet for some of the greatest creativity that has ever existed in our technology field. If everyone involved could channel this creativity into developing innovative tech, we would be far ahead in achieving the technology vision 2020 that our country is lagging behind on. As they say: necessity is the mother of invention. So everyone starts inventing controls.

Unfortunately, controls invented out of thin air will likely bounce back rejected by the QSA, which causes further stress as the deadline looms and everyone has to admit that compliance will be delayed. So it’s in everyone’s interest that compensating controls are done correctly from the beginning.

Now, this article’s purpose is not to go through the whole list of what compensating controls are. There are already thousands of articles that do that. Suffice it to say, a compensating control is what you put in place when you cannot meet a PCI requirement as written but still want to be compliant.

So to make it simple:

a. Write down which control you can’t address. It may be logging and monitoring, complex passwords, etc.

b. Explain why you can’t do it. A good reason: the legacy system running in the factory store, churning out a thousand printed statements a minute, cannot be patched because the vendor no longer produces patches. A bad reason: Bob is new to the company and has no clue what patching means.

c. Do a risk analysis. This isn’t something many companies naturally do for IT, but it has to be done. If the system cannot be patched, what are the risks? Malware infection? Internal exploit? Availability problems? Loss of power?

d. Document the control. This has to be done. You can’t just go and say, “OK, QSA, trust us, it’s in.” You need detailed procedures on how the organisation will carry out these compensating controls (a sketch of what this could look like follows this list).

e. Ensure it’s implemented and validate it with the QSA. In fact, we would suggest bringing the QSA in early in the process. Since they are the ones validating your controls, it makes sense to onboard them with the fact that you are doing compensating controls. What you may find acceptable for your risk may not be acceptable to the QSA. It’s not a matter of, “Hey, let me bear the risk this year!”. It’s a matter of “If this thing goes south, who else is going to be affected?” – QSAs, banks, companies – because credit card information isn’t just the domain of the company handling it. It has upstream and downstream repercussions, from the payment brands, banks, acquirers, third-party agents and service providers down to the customers.
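To make it concrete, here is a rough sketch of what such documentation might capture. The field names loosely follow the PCI-DSS Appendix B compensating controls worksheet; the content itself is a hypothetical example, not something blessed by any QSA:

```python
# Hypothetical compensating-control record for an unpatchable legacy system.
# Field names loosely follow the PCI-DSS Appendix B worksheet headings.
compensating_control = {
    "requirement": "6.2 - Install vendor-supplied security patches",
    "constraints": "Vendor no longer produces patches for the legacy POS OS.",
    "objective": "Reduce the likelihood of known vulnerabilities being exploited.",
    "identified_risk": ["malware infection", "internal exploit", "loss of availability"],
    "definition_of_controls": [
        "Isolated VLAN with no other CDE/non-CDE systems",
        "Firewall rules restricted to specific port/source/destination",
        "Daily review of logs specific to this system",
    ],
    "validation": "Firewall ruleset review, log review records, VLAN config export.",
    "maintenance": "Re-assess quarterly and at every annual assessment.",
}
```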

To be honest, compensating controls are a major pain in the butt. There is no way to describe it better. They are actually worse than the actual controls. Plus, they are not a given each year – the QSA may decide that, due to next year’s evolving risks, your controls are no longer acceptable!

An example: say you can’t patch your prehistoric system for a good reason. The vendor has announced they no longer support that system, a new release is imminent in a year’s time, and the notice to all customers is to bear down and wait for the release. At this moment it’s completely out of the customer’s hands.

The compensating controls could be:

a) Having a documented notice from the vendor that security patching cannot be done

b) Hosting the system in an isolated VLAN that does not contain any other CDE/non-CDE systems

c) MFA required for ALL access, not just administrative access

d) Firewall rules must be specific to port/source/destination

e) Copy/paste, USB, etc. are disallowed on said system, and any attempts are logged through DLP

f) Anti-malware must be installed and its logs specifically monitored

g) Logs must be reviewed daily specifically for this system, with a separate daily report produced for incidents affecting it (see the sketch after this list)
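As an illustration of item g), the daily review can be as unglamorous as a small script that pulls the day’s events for that one system and spits out a report. A minimal sketch, assuming syslog-style lines are already being collected to a file – the host address, file path and match patterns here are hypothetical:

```python
"""Toy daily log review for the isolated legacy host (illustrative only)."""
from collections import Counter
from datetime import date

LEGACY_HOST = "10.10.99.5"                  # hypothetical isolated-VLAN address
LOG_FILE = "/var/log/collected/legacy.log"  # hypothetical collection point

def daily_report(path):
    failures = Counter()
    total = 0
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if LEGACY_HOST not in line:
                continue
            total += 1
            if "authentication failure" in line.lower():
                # Crudely attribute the failure to the last token on the line.
                failures[line.split()[-1]] += 1
    report = [f"Daily review {date.today()} for {LEGACY_HOST}: {total} events"]
    for source, count in failures.most_common():
        report.append(f"  auth failures from {source}: {count}")
    return "\n".join(report)

if __name__ == "__main__":
    print(daily_report(LOG_FILE))
```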

Looking at the examples above, these are controls considered above and beyond what the original PCI requirement demands. In short, it’s a lot harder, or more expensive, to get compensating controls done than the actual controls, no matter what you may think.

We once had a conversation with a company that was thinking of switching to us from their previous consultants. When we asked, we realised that many of their systems were not PCI compliant and they had put in compensating controls. Their compensating control was: “Mitigation plan is in place to replace these systems in a year’s time.”

That’s not a compensating control. That’s something you plan to do in 365 days’ time. In the meantime, what are you proposing to lower the risk? That’s your compensating control.

Don’t use compensating controls as a get-out-of-jail-free card. Any consultant or QSA worth their salt knows how difficult it is to get these controls done and, more importantly, passed for PCI-DSS. In other words, instead of looking at them as a convenient shortcut or workaround, they should be viewed as a last resort.

Drop us a note at pcidss@pkfmalaysia.com for any queries you may have on PCI-DSS and we will respond to that immediately!

Alienvault USM Anywhere Updates

We just received very good updates from the Alienvault channel team (or the AT&T Cybersecurity team, as they call themselves now). Our excitement can be quickly summarised into two short phrases:

a) Google Cloud Support – Heck Yeah.

b) Custom Plugin Development – Heck Yeah!

Of course, there were tons of other updates as well, such as scheduled reports, a unified UI, more AlienApps support, Cloudflare integration (which is very interesting, as we can tie response actions to it, effectively making Alienvault function more like an active prevention system, as opposed to its traditional detective role), new search capability incorporating wildcard searches, and advanced asset importing through CSVs as opposed to rudely scanning our clients’ networks.

But the two main courses were the Google Native support and custom plugin.

Google native support has been a pain point for years. We have customers moving into GCP, or already in GCP, where we have been constantly battling to meet their expectation that Alienvault performs as seamlessly as it does on AWS – but it couldn’t. We had to rely on EDR (endpoint detection and response), for instance, where the agent grabs logs à la HIDS and sends them over to the server directly. Of course, areas where a native sensor would function – such as creating an internal VPC filter mechanism, or doing vulnerability scanning without generating too much inter-VPC traffic – could not be covered by the EDR, so it was very much a band-aid. We knew that our patched-up GCP solution wasn’t functioning as well as its handsomer and more dashing brother, AWS. In other words, it kinda sucked.

GCP custom applications also presented their own set of issues – custom apps were difficult to integrate. Even with Stackdriver, or with us logging to BigQuery, sending these logs to Alienvault presented a lot of issues. When we could configure exports to BigQuery, we couldn’t filter properly, causing our customer’s 1TB-per-month quota to be annihilated within days. And getting Pub/Sub to work with Alienvault requires APIs to be written, plus Alienvault writing the custom plugins – all of which adds professional services cost and, more importantly, resource and time cost to the project.
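For what it’s worth, the filtering half of that pain can be tackled at the source: a Stackdriver (Cloud Logging) export sink accepts a filter, so only the events you actually care about leave GCP. A minimal sketch using the google-cloud-logging Python library – the project, sink and topic names are hypothetical:

```python
# Create a filtered Stackdriver/Cloud Logging export sink to a Pub/Sub topic,
# so only relevant events are exported instead of annihilating the quota.
from google.cloud import logging  # google-cloud-logging package

client = logging.Client(project="my-project")

# Only warnings and above from our payment app's GCE instances.
log_filter = 'resource.type="gce_instance" AND severity>=WARNING'

sink = client.sink(
    "usm-export",
    filter_=log_filter,
    destination="pubsub.googleapis.com/projects/my-project/topics/usm-logs",
)
sink.create()
# Note: the sink's writer identity (a service account) still needs publish
# rights on the destination topic before anything flows.
```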

So what happens now? In the next General Availability release of USM-A, GCP will be supported. The information is sparse, so more updates will be forthcoming. But the GCP sensor will be able to:

a) Perform threat detection (like all other sensors), asset discovery, alarms, events, widgets, correlation, etc. Basically, it will be native to GCP, doing what it does for AWS, Azure and on-prem Hyper-V and VMware.

b) Detect VPC flow logs

c) Monitor cloud services through Stackdriver

The last bit is very important. Stackdriver, in essence, is GCP’s answer to AWS’s CloudWatch and CloudTrail. It monitors and manages services, containers, applications and infrastructure for the cloud. If you run cloud services or develop cloud applications, you should be able to support Stackdriver logging. In GCP Compute, the logging agent is used to stream logs from VM instances. It can even provide the traditional network flow logs (VPC flow logs), which MSPs can use to monitor network health, etc. In other words, this ugly GCP little-brother solution is going to get buffed. We’re going to look a lot better now.

The roadmap is bright: automatic response actions against a cloud service when a security event occurs – putting Alienvault into more of a proactive stance than the detective one it traditionally takes. This is similar to what the Cloudflare integration is achieving. More and more GCP services will be added and supported. There is also a topic on “User Entity Behaviour Analytics” – which is basically matching behaviour against normal baselines and telling us that Bob is having coffee at 10 am instead of his usual 8 am, which means he was running late to work, which means he got stuck in traffic, which means he left the house late, which means he woke up late, which means he slept late last night, which means he went out for a drink with someone and got smashed, which could possibly mean he is having an affair with a stripper named Daisy. Maybe.

So, pretty exciting times, Aliens!

The other one on the plate wasn’t on the normal discussion agenda but was brought up by us on the international call – we just bombarded the screen with around 10 – 15 queries and at least 4 made it to the table. One of them was: when the hell are we going to get to do our own plugins?

No offence to Alienvault, who currently write our clients’ custom plugins for USM-A – but 3–4 weeks isn’t really going to cut it. Furthermore, sometimes we don’t even get what we want from the custom plugins. We don’t blame Alienvault. The application is ours (as in our clients’). We are the ones who know the events and the priorities. We know what we want to see. We just can’t develop the plugins the way we do now for our USM Appliance clients.
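For anyone wondering what ‘writing a plugin’ actually boils down to: mostly regex work, mapping raw log lines into normalised event fields. A toy sketch in Python – the log format and field names are hypothetical, and this is not Alienvault’s actual plugin syntax:

```python
# Map a raw application log line into normalised fields a SIEM can correlate.
import re

PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+payapp\s+(?P<event>[\w-]+)\s+"
    r"user=(?P<username>\w+)\s+src=(?P<src_ip>[\d.]+)"
)

def parse(line):
    """Return the extracted event fields, or None if the line doesn't match."""
    match = PATTERN.match(line)
    return match.groupdict() if match else None

print(parse("2019-07-01T10:22:31 payapp login-failed user=bob src=10.1.2.3"))
# -> {'timestamp': '2019-07-01T10:22:31', 'event': 'login-failed',
#     'username': 'bob', 'src_ip': '10.1.2.3'}
```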

Imagine the win-win situation here. We write plugins for clients (assuming it’s similar to the Appliance), and within 2–3 days we are done. Testing, another 1–2 days. Instead of setting the project timeline back 3–4 weeks, we are 1 week in. That’s a HUGE impact for compliance clients who are often chasing a deadline. 3 weeks squashed to 1? Hell, yeah! The win is also for Alienvault. They don’t have to deal with nagging customers or smart-ass channel partners like us banging on them for not updating us on our new application plugin. Imagine the parties engineers can now attend instead of writing regex for a company operating in Elbonia. Imagine the time they can now save and spend socialising with the rest of the world, or having the chance to meet people like Daisy.

It’s a whole new world, really.

So, Alienvault, please, get those updates to us as soon as you can and the world will be a better place for it.

If you need any information on Alienvault, or general help with your SIEM or PCI-DSS compliance, drop us an email at alienvault@pkfmalaysia.com and we will attend to it immediately!

PCI-DSS Full Disk Encryption Part 2

In our previous article, we wrote about how Bitlocker can possibly be used as a full disk encryption solution for PCI-DSS.

One of the key things is for the following statement to be complied with:

If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.

Enabling TPM by itself doesn’t guarantee that native authentication is separated from logical access to the encrypted file system.

The steps below enable TPM with a PIN, ensuring there is the additional layer of logical access required to comply with PCI-DSS.
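A minimal sketch of the commands involved, driven here from Python via the built-in manage-bde CLI. This assumes a Windows server, an elevated prompt, and that the Group Policy “Require additional authentication at startup” (Computer Configuration > Administrative Templates > Windows Components > BitLocker Drive Encryption > Operating System Drives) has been enabled with a startup PIN required:

```python
# Add a TPM+PIN protector to the OS volume; manage-bde prompts for the PIN.
import subprocess

subprocess.run(["manage-bde", "-protectors", "-add", "C:", "-TPMAndPIN"],
               check=True)

# Verify the key protectors now include "TPM And PIN".
subprocess.run(["manage-bde", "-protectors", "-get", "C:"], check=True)
```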

So overall, this means that Bitlocker requires extra authentication when the server restarts. As the policy states, either a PIN/passphrase or a USB startup key will be required at startup, and from a PCI perspective, this addresses the separate-authentication requirement.

Of course, the major discussion here is: what is compliance, and what is practical security?

Because, when was the last time you actually restarted your server? The fact is that full disk encryption is only useful for protecting data on the disk while the system is not running. When the server is running, access to the disk remains open and, therefore, we see unprotected PANs with their pants dropped (so to speak).

We are not saying that Bitlocker cannot comply with 3.4.1 of PCI. We are saying PCI might be better off relooking at 3.4.1 and clarifying the ‘spirit’ of the requirement. In the end, we are concerned with loss of PAN. We are concerned that files may be taken away, siphoned off through a variety of means – through the network, or USB, or photos on your phone, etc.

The problem with full disk encryption is that even if we do have separate authentication to boot the server, once it’s booted and authenticated, the full disk encryption no longer does the job of ‘rendering PANs unreadable where they are stored’. The argument thus goes that once that occurs, whoever is reading those PANs is an authorised user with a business requirement to view them.

In our opinion, there needs to be much more security surrounding servers with PANs that use full disk encryption. Access must again be limited to only those with business justification, and the servers must not be used for multiple purposes, especially not for non-PCI usage. Logical access, hardening, logging and monitoring obviously need to be in place. The PIN must be protected, and PINs changed based on PCI-DSS expiry policies.

The comfort level of FDE versus, say, file encryption or even folder encryption is much lower. Does it meet 3.4.1? If done properly, it clearly does. But is it truly secure? Therein lies the discrepancy between compliance and security. It ticks the checkbox (for now, unless PCI alters it in 4.0), but from a security standpoint, there is a lot of risk surrounding it.

If you use FDE, don’t expect your QSA to just take it lying down. Most likely further queries will be made, and some may deem it insufficient in itself to address the risks of PAN compromise and may request additional controls on top of it.

If you have further queries on FDE or any compliance programs like PCI, ISO etc, drop us an email at avantedge@pkfmalaysia.com and we will attend to it immediately!

PCI-DSS Full Disk Encryption Part 1

In PCI-DSS, one of the most difficult requirements to get through is Requirement 3, which deals with stored credit card information and how to protect it. Together with Requirement 10 (logging) and Requirement 6 (software), Requirement 3 (storage) makes up the bulk of the remediation effort and cost of PCI-DSS.

The excerpt ominously states at the beginning: Protection methods such as encryption, truncation, masking, and hashing are critical components of cardholder data protection. If an intruder circumvents other security controls and gains access to encrypted data, without the proper cryptographic keys, the data is unreadable and unusable to that person. Other effective methods of protecting stored data should also be considered as potential risk mitigation opportunities. For example, methods for minimizing risk include not storing cardholder data unless absolutely necessary, truncating cardholder data if full PAN is not needed, and not sending unprotected PANs using end-user messaging technologies, such as e-mail and instant messaging.

It goes without saying that if you have credit card information on file, for whatever reason, now would be a good time to relook at the necessity of it. If you don’t need it, get rid of it, because the cost of maintenance and remediation may not be worth whatever value you think you are getting from storing card data.

If you do need it, well, PCI provides a few options for you to protect it: Encryption, Truncation, Masking and Hashing. In this series of articles we will be looking into encryption and more specifically Full Disk Encryption.

Encryption itself deserves a long drawn-out discussion, as do the types of encryption – you have applications doing encryption through application libraries, database encryption like TDE, file or folder encryption, and full disk encryption. One part is the encryption methodology. The other part is encryption key management. The latter is the one that usually throws up a headache.

We will be exploring full disk encryption, or FDE, and how it can be implemented to comply with PCI-DSS.

There is a specific part in 3.4.1 stating:

If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.

So aside from the encryption being strong and the key management being done properly, PCI says there are a few more things to be aware of for full disk encryption:

a) Logical access must be separate and independent of the native OS authentication

b) Decryption key must not be associated with the user account.

What does this mean?

Let’s look at Bitlocker for now, since that’s everyone’s favourite example.

Bitlocker has taken a lot of stick, probably because it’s a native Microsoft offering. Maybe. I don’t know. The fact is Bitlocker is able to use 128- or 256-bit AES, so in terms of strong cryptography, it qualifies. It’s the key management that’s the issue.

For key management, the recommended usage with Bitlocker is the Trusted Platform Module (TPM) version 1.2 or later. The TPM is a hardware module in your server that, to simplify, acts somewhat like a key vault or key management module. It offers system verification to ensure there is no tampering with the system at startup. Beginning with Windows 10, version 1803, you can check TPM status in Windows Defender Security Center > Device Security > Security processor details. In previous versions of Windows, open the TPM MMC console (tpm.msc) and look under the Status heading.

Bitlocker can also be used without TPM, although that means the system integrity checks are bypassed. It can operate alongside Active Directory, although newer versions of Bitlocker no longer store the password hash in AD by default. Instead, a recovery password can be stored in AD if required.

Even with the TPM, it’s still not the end of it, because we need to make sure there is a separation of authentication for Bitlocker to operate. In this case, we will configure it with a PIN (which is essentially a password that you know).

First of all, let’s see what we should end up with.
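A quick way to check, again via the built-in manage-bde CLI from an elevated prompt (the data volume letters here are hypothetical):

```python
# Print the BitLocker status of each card-data volume. Expect to see lines like:
#   Conversion Status:    Fully Encrypted
#   Encryption Method:    AES 256 (XTS-AES 256 on newer Windows)
#   Protection Status:    Protection On
import subprocess

for volume in ("D:", "E:"):  # hypothetical card-data volumes
    subprocess.run(["manage-bde", "-status", volume], check=True)
```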

So at the end, you should see both data volumes encrypted. I’ve been asked before if all volumes need to be encrypted, and the answer is no – Bitlocker can’t do that anyway. The small system (boot) partition cannot be encrypted, as Bitlocker needs it to start the machine. So for PCI, it makes sense NOT to store card data on drives that are not encrypted.

The next thing to check is that your setup fulfils the strong-encryption requirement of PCI-DSS. The same status output shows the encryption method in use and the key protectors in place.

So ensure strong crypto is enabled and the key protectors are in place; with that, Bitlocker is enabled. You also need to properly document your key management policy – include whether you are using AES-256 or AES-128, which drives are protected, and the key expiry dates.

Keep in mind also the following:

Think of the FVEK (Full Volume Encryption Key) as the DEK (data encryption key) and the VMK (Volume Master Key) as the KEK (key encryption key). The FVEK is stored, in encrypted form, in the volume metadata on the hard disk, while the VMK is protected by the TPM – the hardware chip on the motherboard – sealed against its PCR registers.
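If the DEK/KEK split sounds abstract, here is a toy illustration of the concept in Python, using Fernet for brevity – this shows envelope encryption generally, not Bitlocker’s actual mechanics:

```python
# The data key (DEK) encrypts the data; the master key (KEK) only encrypts the
# DEK, so the KEK can live somewhere better protected - as the VMK does in TPM.
from cryptography.fernet import Fernet

kek = Fernet.generate_key()  # stands in for the VMK
dek = Fernet.generate_key()  # stands in for the FVEK

ciphertext = Fernet(dek).encrypt(b"4111111111111111")  # data encrypted by DEK
wrapped_dek = Fernet(kek).encrypt(dek)                 # DEK wrapped by KEK

# To read the data: unwrap the DEK with the KEK, then decrypt.
recovered_dek = Fernet(kek).decrypt(wrapped_dek)
print(Fernet(recovered_dek).decrypt(ciphertext))       # b'4111111111111111'
```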

In general, the above fulfils the PCI requirement. In our next article, we will write about how logical access to the encrypted file system can be separated from the native OS authentication mechanism.

In the meantime, please send any enquiries to pcidss@pkfmalaysia.com if you need to know more about PCI-DSS or any IT compliance matters. We are here to help!

PCI-DSS: Internal Audit Signoffs

After going through the nightmare of PCI-DSS ‘certificates’, described in considerable detail in our previous writeup, we are now faced with a new phenomenon called the internal audit signoff, which is further confusing our clients.

OK, first of all, let’s do a brief recap.

What are the 3 ways that PCI-DSS can be validated?

Answer:

  1. Full Report on Compliance (RoC) from a QSA – Level 1 Service Providers, Level 1 Merchants
  2. Self Assessment Questionnaire (SAQ) signed off by a QSA/ISA – Level 2 Merchants, (maybe) Level 2 Service Providers
  3. Self Assessment Questionnaire (SAQ) signed off only by the Merchant/Service Provider – Level 3 and 4 Merchants, (maybe) Level 2 Service Providers

Those are the 3 endgames for PCI. There is, of course, the other end scenario called failure, or non-compliance. But that isn’t cool, unless you are the type who is happy with Thanos snapping his fingers being the definite end to all things.

Now, we all know item 1) requires the participation of a third-party QSA/ISA to sign off on the Report on Compliance and the Attestation of Compliance. ISA here means Internal Security Assessor. We won’t touch on it this round, because it requires a whole new library of articles to discuss.

Item 2) likewise requires a third-party QSA/ISA to sign off on the Self Assessment Questionnaire and the Attestation of Compliance.

Item 3) is basically self-signed – not a lot of acquirers take this seriously, as it’s basically anyone signing off on anything they feel like. There is no validation, and sometimes it’s akin to the CxO sticking a finger to the tongue, putting it up in the air and going, “Yeah, that feels ’bout right. Let’s sign off and say we have these controls!”

Let’s talk about item 1 and item 2.

For item 1, it’s a gimme that the QSA needs to go onsite to the locations to do the audit. I have never heard of any QSA signing off on a full RoC without actually going onsite. Maybe when our tech reaches a point where the QSA can be holographically present in a location and see what’s there without being physically present, like a Jedi Force Ghost, the PCI-SSC would accept the signoff. But by then, we could probably just tell the PCI-SSC that these aren’t the companies they are looking for, and then there’s no need to do PCI.

Until then, the question for item 2 is: for the QSA to sign off the SAQ, must they be onsite, or can they provide a remote signoff?

Now, if you ask a QSA what the difference is between 1) and 2), they would say: not much – except they don’t have to waste their time writing the tome called the Report on Compliance (RoC) for Level 2. Level 2 is basically a judgement made by the QSA, based on existing evidence, that what is stated in the SAQ is true, or at least as true as they can have reasonable assurance of. The SAQ is not a document written by the QSA – although they can help – in this case they are validating it. For Level 1, it’s a different story. They have to write the RoC, and the work put into that reporting phase is surprisingly large. In comparison, it’s probably like reviewing a first-term essay paper written by your senior students (SAQ validation) versus writing the Silmarillion, including the index (RoC).

However, for QSAs to conduct their audit and provide a fair opinion on the controls, they will still want to be onsite for option 2), much to the chagrin of many of our customers. Their argument is: “Hey, I am Level 2, why must you come onsite??” And again the crescendo grows that a Level 2 should have fewer things to worry about than a Level 1 – another myth as old as telling our children not to sleep with wet hair or else they will wake up with a storming headache.

To get to the bottom of this, we went directly to the horse’s mouth (in this case, the Mastercard SDP program response): “In this scenario (describing item 2) the QSA has to be onsite. The QSA cannot simply review a RoC or SAQ without being at the location to validate that controls are actually in place.”

To be fair, the above discussion applied to L2 merchants (Level 2 Merchants) – those making more than 1 million card transactions per annum. Whether a QSA is willing to take the risk and perform an offsite review for a Level 3 or Level 4, I wouldn’t know – that’s up to the QSA and the card brands, I suppose. But to be absolutely safe, we would advise that all levels be treated the same – if you need a QSA to sign off, that QSA needs to be onsite to get it done. Or use the Jedi Force Ghost. Both are acceptable to the PCI-SSC, I am pretty sure.

So, as an illustration: we had a request from a company asking us to get the QSA to sign off remotely for their location. Because “the other QSA did it for us and certified us” – the other QSA meaning someone they had engaged earlier.

OK – this ‘certification’ term again. I am sure that did not happen – but many use the word certification for anything: an actual RoC, doing the SAQ with a QSA, signing off the SAQ themselves, getting an ASV scan, etc. Those are the typical scenarios where we see this certification word being thrown around.

Digging further, we received a worksheet which was a typical ‘scope’ document (you know, where they ask what sort of merchant you are, what business you do, how many locations and devices you have, whether you store card data, etc.), and the instruction was to fill it up, send it over to the QSA, and the QSA would ‘sign off’ their PCI-DSS compliance, all within 2 weeks.

QSA-certified within 2 weeks, remotely, with just the scope document and without validating any controls? No penetration testing or ASV scan? No risk assessment? No review of the information security policy? How?

We asked for a copy of the official signoff page (Section 3c of the AoC), but instead we got a QSA-signed report stating what was scoped in and what was scoped out of PCI-DSS. A typical scope document. It’s a useful document, but it’s not a document required by the PCI SSC. In fact, it serves no purpose other than to state what is in scope for PCI-DSS, based on the scope questionnaire (not the SAQ) provided by the QSA.

I am 100% sure the QSA meant well by this, but the problem is one of interpretation. We cannot expect clients to understand, right off the bat, PCI-DSS and all its seemingly malarkey documents – the AoC, the RoC, the 9 different SAQs, the ASV scans, the partridge in the pear tree, etc. So when we asked for an SAQ signed off by a QSA, of course the client fell back on any document signed off by a QSA. That’s why we are not big fans of the practice of handing clients ASV certificates just because they passed their ASV scans. They all think they are PCI certified because they have a QSA-signed document, the ASV ‘certificate’! The same goes here: this is simply a scope review document – almost like an internal audit report – and it does not make a company PCI compliant. In fact, it merely confirms that the company MUST be PCI compliant according to the scope set.

So the moral of the story is: not all QSA-signed documents are valid documents for PCI-DSS. ASV scans, while valid, don’t make you PCI compliant – they are only a small percentage of what you must do. Internal audits or scope reviews like the one we saw, even signed off by a QSA, are not valid PCI-DSS validation documents. They do not make you PCI compliant. As PCI has explicitly stated before, the only valid PCI-SSC documentation is the AoC, the SAQ, the RoC and the ASV scan reports (not certificates with flowery borders and impressive cursive fonts in gold). Anything else is supplementary material used to support the compliance, not to validate it.

For more clarity on PCI, drop us an email at pcidss@pkfmalaysia.com. We will try to sort any issues you have, and yes, we are the company you are looking for.
