
The Art of Compensating Controls

In our many years of advising companies going through PCI-DSS, one of the recurring challenges we face is this little thing called compensating controls.

It’s very much a PCI term, same as ‘segmentation penetration testing’. It comes into play when you can’t address the actual control in PCI-DSS for some reason and you need to put other controls in place to address the ‘spirit’ of the control. The spirit means: why did PCI-DSS put that control in, in the first place?

Immediately, this becomes an outlet for some of the greatest creativity that has ever existed in our technology field. If everyone involved could channel this creativity of controls into developing innovative tech, we would be far ahead in achieving the Vision 2020 technology goals our country is lagging behind on. As they say: necessity is the mother of invention. So everyone starts inventing controls.

Unfortunately, controls invented out of thin air will likely be rejected outright by the QSA, which causes further stress as the deadline looms and the admission has to be made that compliance will be delayed. So it’s in everyone’s interest that compensating controls are done correctly from the beginning.

Now, this article’s purpose is not to go through the whole list of what compensating controls are. There are already thousands of articles that do that. Suffice it to say, a compensating control is for when you cannot meet a PCI requirement and you still want to be compliant.

So to make it simple:

a. Write down which control you can’t address. It may be logging and monitoring, complex passwords, etc.

b. Why can’t you do it? A good reason: the legacy system on the factory floor churning out a thousand printed statements a minute cannot be patched, because patches are no longer being produced for it. A bad reason: Bob is new to the company and has no clue what patching means.

c. Risk analysis. This isn’t something many companies naturally do for IT, but it has to be done. If the system cannot be patched, what are the risks? Malware infection? Internal exploit? Availability problems? Loss of power?

d. Document the control. This has to be done. You can’t just go and say, “OK, QSA, trust us, it’s in.” There need to be detailed procedures on how the organisation will carry out these compensating controls.

e. Ensure it’s implemented and validate it with the QSA. In fact, we would suggest bringing in the QSA early in the process. Since they are the ones validating your controls, it makes sense to let them know up front that you are doing compensating controls. What you may find acceptable for your risk may not be acceptable to the QSA. It’s not a matter of, “Hey, let me bear the risk this year!”. It’s a matter of “If this thing goes south, who else is going to be affected?” – QSA, banks, companies – because credit card information isn’t just the domain of the company handling it. It has upstream and downstream repercussions, from the payment brands, banks, acquirers, third-party agents and service providers down to the customers.

To be honest, compensating controls are a major pain in the butt. There is no way to describe it better. They’re actually worse than the actual controls. Plus, they’re not a given each year – the QSA may decide that, due to next year’s evolving risks, your controls are no longer acceptable!

An example here: say you can’t patch your prehistoric system for a good reason. The vendor has announced they no longer support that system, a new release is due in a year’s time, and the notice to all customers is to bear down and wait for the release. At this moment it’s completely out of the customer’s hands.

The compensating controls could be:

a) Having a documented notice from the vendor that security patching cannot be done

b) Hosting the system in an isolated VLAN that does not contain any other CDE/non-CDE systems

c) Requiring MFA for ALL access, not just administrative access

d) Making firewall rules specific to port/source/destination (see the sketch after this list)

e) Disallowing copy/paste, USB, etc. on said system, with any attempt logged through DLP

f) Installing anti-malware and specifically monitoring its logs

g) Reviewing logs daily specifically for this system, with a separate daily incident report for it
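
To illustrate d), here is a minimal sketch of what ‘specific to port/source/destination’ could look like on an iptables-based firewall. The addresses and port are entirely hypothetical, and your actual rulebase will differ:

    # Hypothetical: legacy host 10.10.5.20, app server 10.10.4.10, app port 1433/tcp
    # Allow only the app server to reach the legacy system on the app port
    iptables -A FORWARD -s 10.10.4.10 -d 10.10.5.20 -p tcp --dport 1433 \
        -m state --state NEW,ESTABLISHED -j ACCEPT
    # Allow the replies back
    iptables -A FORWARD -s 10.10.5.20 -d 10.10.4.10 -p tcp --sport 1433 \
        -m state --state ESTABLISHED -j ACCEPT
    # Log and drop everything else destined for the legacy VLAN
    iptables -A FORWARD -d 10.10.5.0/24 -j LOG --log-prefix "LEGACY-DENY: "
    iptables -A FORWARD -d 10.10.5.0/24 -j DROP

The point is specificity: no ‘any-any’ rules, and denied traffic is logged so the daily review in g) has something to work with.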

Taking a look at the above examples, these are controls considered above and beyond what the PCI requirements ask for. In short, it’s a lot harder, or more expensive, to get compensating controls done than the actual controls, no matter what you may think.

We once had a conversation with a company thinking of switching to us from their previous consultants. When we asked around, we realised that many of their systems were not PCI compliant and they had put in compensating controls. Their compensating control was: “Mitigation plan is in place to replace these systems in a year’s time.”

That’s not a compensating control. That’s something you plan to do in 365 days’ time. In the meantime, what are you proposing to lower the risk? That’s your compensating control.

Don’t use compensating controls as a get-out-of-jail-free card. Any consultant/QSA worth their salt would know how difficult it is to get these controls done and, more importantly, passed for PCI-DSS. In other words, instead of looking at compensating controls as a convenient shortcut or workaround, view them as a last resort.

Drop us a note at pcidss@pkfmalaysia.com for any queries you may have on PCI-DSS and we will respond immediately!

PCI-DSS For Software Developers

Of late, we have been receiving numerous calls from software developers asking us how on earth they become PCI-DSS certified.

It’s never easy to explain over the phone, especially with misconceptions that PCI-DSS is a license, or a software, or a solution, or some sort of exam, or some other thing. And how do we explain that, as a software vendor, they technically can’t be PCI certified, but they can opt for PA-DSS or the new Secure Software Standard from PCI?

So the first thing to ask is (assuming this application/solution is handling credit card information):

a) Are you developing software only and selling that software to your customers?

b) Are you developing a solution that you host and manage, and that clients access?

If it’s a), PCI-DSS applies to the customer buying your software, not to you as a company. After all, you generally don’t handle credit card data – your customer does. However, your software is likely in scope for their PCI-DSS assessment, so there could be instances where you need to participate in your client’s assessment, or to develop your software in a manner that is “PCI compliant”. Developing PCI compliant software doesn’t make it certified, but it does help your clients get certified. An example would be developing your solution with logging capability, able to log to a central location. Another example is your solution being able to integrate with AD, or having PCI compliant password policies (session timeouts, password expiry, etc.). Other examples are ensuring there is role-based authentication and authorisation, or ensuring encryption is properly done for data at rest and in transit. Doing all this doesn’t make the software immediately PCI certifiable – but it does give your clients less of a headache.
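
As a quick illustration of the central logging point, here is a minimal sketch in Python – the SIEM hostname and port are placeholders, and your logging stack will obviously differ:

    import logging
    from logging.handlers import SysLogHandler

    # Ship application events to a central syslog collector / SIEM.
    # "siem.example.com" and 514/UDP are placeholders - point this at your own collector.
    logger = logging.getLogger("payment-app")
    logger.setLevel(logging.INFO)
    handler = SysLogHandler(address=("siem.example.com", 514))
    handler.setFormatter(logging.Formatter("payment-app: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    # Requirement 10 cares about who did what and when - log the user and action, never the PAN.
    logger.info("user=%s action=%s result=%s", "bob", "view_transaction", "success")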

If it’s b), then yes, you are not considered just a software developer but a service provider. You are providing SaaS, so generally that makes you responsible for the day-to-day security of card data on behalf of your client. In that case, PCI-DSS can be applied to you – to your solution and your processes.

As with PA-DSS, the new Secure Software Program applies to the following software:

Software products involved in or directly supporting or facilitating payment transactions that store, process, or transmit clear-text account data.

Software products developed by the vendor that are commercially available for sale to multiple organizations.

So all the CRM systems, call systems, in-house systems and customised systems are not eligible for PA-DSS or the new program. This is in line with how it has always been, anyway.

So that leaves us back at square one. What happens if you are not eligible for PA-DSS or the Secure Software program, you are just a software developer and NOT a service provider, but your client insists on you being PCI-DSS certified?

Well, hopefully you can explain it to them, or point them to this article. Another option is to state that you have developed your software to be compliant with PCI requirements. The following list shows what it takes to address PCI compliance (not comprehensive):

1. Requirement 2 – Ensure there is no clear-text administrative access

2. Requirement 3 – Where the application stores cardholder data, strong encryption is needed

3. Requirement 4 – The application must encrypt when transmitting over public networks

4. Requirement 6 – Have a secure software development process: secure code reviews, removing test data before rolling to production, and ensuring the application is patched promptly when bugs are discovered

5. Requirement 8 – Ensure the application can support PCI-DSS password requirements, with passwords encrypted at rest and in transmission (a sketch follows this list)

6. Requirement 10 – Ensure the application is capable of sending logs to a SIEM, and that application penetration testing is conducted with the testing methodology documented
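
For item 5, ‘encrypted at rest’ in practice usually means a strong one-way salted hash rather than reversible encryption. A minimal sketch in Python, assuming the bcrypt library is available:

    import bcrypt

    def hash_password(password: str) -> bytes:
        # Store only the salted bcrypt hash, never the clear-text password.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

    def verify_password(password: str, stored_hash: bytes) -> bool:
        # Constant-time comparison against the stored hash.
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)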

Requirements affecting Software: Sample Evidences

For all system components in scope (servers, network devices, applications, databases, etc.) and POS devices, provide evidence of strong cryptography being implemented (SSH, TLS 1.2 or later, RDP over TLS, etc.).

Provide the following for all filesystems, databases and any backup media:
– Details on the method (encryption, hashing, truncation, tokenization) used to protect covered information in storage
– Evidence (screenshots or settings) showing covered information is protected

Provide evidence of encryption being used for transmission of in-scope data over any open or public communication channel (i.e. Internet, wireless network, GSM, GPRS, VSAT technology, etc.). Encryption must conform to strong industry standards.

For the selected sample, provide evidence of:
– Current patch levels
– Patches being deployed in a timely manner

Provide the secure software development process document, in accordance with industry best practices.

Provide a recent secure code review report for an application that stores, processes or transmits covered information.

Provide a document that outlines:
– the process for generating test data to be used in lower (test/development) environments
– the process for removing test data and test accounts prior to moving the system to the higher (production) environment

Provide 4 sample change requests (2 for software modifications and 2 for security patch implementations) from the last 6 months.

Provide the following from a secure code training perspective:
– Material used for training
– Attendee list showing that all developers are covered

Provide evidence of logical access account and password features, including:
– Account lockout policy
– Account lockout duration
– Session timeout policy
– Password length
– Password complexity
– Password history
– Password expiry

Provide evidence that passwords (for platform and/or consumer applications) are encrypted during transmission and storage.

Provide the audit log policy settings.

Provide actual event logs for each of the platforms identified in the sample.

Provide a documented methodology being used for penetration testing.

Provide the internal penetration test report.
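
For the first evidence item above, one simple way to demonstrate strong cryptography on a service is a protocol check – the host and port here are placeholders:

    # Confirm the service negotiates TLS 1.2 (app.example.com:443 is a placeholder)
    openssl s_client -connect app.example.com:443 -tls1_2 < /dev/null 2>/dev/null | grep "Protocol"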

You would get stuck if your clients want to see a PCI-DSS certificate, which obviously you won’t have. In this case, the only way forward is to explain to them that it’s not possible for you to be PCI certified in that sense. If you want, you could engage a third-party auditor, or even a QSA, to assess the application against PCI requirements. You won’t get a certificate for PCI, but at least you will have a third-party attestation or report, which hopefully should be enough.

Another option is to just get hold of us at pcidss@pkfmalaysia.com and we can perhaps provide a bit more persuasion to help your client accept your application for PCI-DSS!

Alienvault USM Anywhere Updates

We just received very good updates from the Alienvault channel team (or the AT&T Cybersecurity team, as they call themselves now). Our excitement can be quickly summarised in two short phrases:

a) Google Cloud Support – Heck Yeah.

b) Custom Plugin Development – Heck Yeah!

Of course, there were tons of other updates as well: scheduled reports, a unified UI, more AlienApps support, Cloudflare integration (which is very interesting, as we can tie actions to it, effectively making Alienvault function more like an active prevention system, as opposed to its traditional detective role), new search capability incorporating wildcard searches, and advanced asset importing through CSVs as opposed to rudely scanning our clients’ networks.

But the two main courses were the Google native support and custom plugin development.

Google native support has been a pain point for years. We have customers moving into GCP, or already in GCP, where we have been constantly battling to meet their expectation that Alienvault performs as seamlessly as it does on AWS – but it can’t. We had to rely on EDR (endpoint detection and response), for instance, where the agent grabs logs à la HIDS and sends them over to the server directly. Of course, areas where a native sensor would function – creating an internal VPC filter mechanism, or doing vulnerability scanning without too much inter-VPC traffic – could not be covered by the EDR, so it was very much a band-aid. We knew that our patched-up GCP solution wasn’t functioning as well as its handsomer and more dashing brother, AWS. In other words, it kinda sucked.

Custom applications on GCP also presented their own set of issues – custom apps were difficult to integrate. Even with Stackdriver, or with us logging to BigQuery, sending these logs to Alienvault presented a lot of problems. When we could configure sending to BigQuery, we couldn’t filter properly, causing our customer’s 1TB per month quota to be annihilated within days. And getting Pub/Sub to work with Alienvault requires APIs to be written, and on top of that Alienvault has to write the custom plugins – all of which adds to professional services costs and, more importantly, resource and time costs to the project.

So what happens now? In the next General Availability (GA) release of USM-A, GCP will be supported. The information is sparse, so more updates will be forthcoming. But the GCP sensor will be able to:

a) Perform threat detection (like all other sensors) and asset discovery, and provide alarms, events, widgets, correlation, etc. Basically, it will be native to GCP, doing what it already does for AWS, Azure and on-prem Hyper-V and VMware.

b) Detect VPC flow logs (see the sketch after this list)

c) Monitor cloud services through Stackdriver
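
On b), VPC flow logs have to be enabled on the subnet before any sensor can pick them up – a sketch using gcloud, with the subnet and region names being hypothetical:

    gcloud compute networks subnets update my-subnet \
        --region=asia-southeast1 \
        --enable-flow-logs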

The last item, Stackdriver, is very important. Stackdriver, in essence, is GCP’s answer to AWS’s CloudWatch and CloudTrail. It monitors and manages services, containers, applications and infrastructure for the cloud. If you run cloud services or develop cloud applications, you should be able to support Stackdriver logging. In GCP Compute, the logging agent is used to stream logs from VM instances. It can even provide traditional network flow logs (or VPC flow logs), which MSPs can use to monitor network health, etc. In other words, this ugly GCP little-brother solution is going to get buffed. We’re going to look a lot better now.
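
For reference, installing the Stackdriver logging agent on a GCP Compute VM at the time of writing looks something like the below – check Google’s current documentation before relying on it:

    # Download and run Google's published installer for the logging agent
    curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
    sudo bash install-logging-agent.sh
    # The agent runs as google-fluentd and streams the VM's logs to Stackdriver
    sudo service google-fluentd status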

The roadmap is bright: automatic response actions against a cloud service when a security event occurs – putting Alienvault into a more proactive stance than the detective one it traditionally takes. This is similar to what the Cloudflare integration achieves. More and more GCP services will be supported over time. There is also a topic on “User Entity Behaviour Analytics” – which is basically matching behaviour against normal baselines and telling us that Bob is having coffee at 10 am instead of his usual 8 am, which meant he was running late to work, which meant he got stuck in traffic, which meant he left the house late, which meant he woke up late, which meant he slept late last night, which meant he went out for a drink with someone and got smashed, which could possibly mean he is having an affair with a stripper named Daisy. Maybe.

So, pretty exciting times, Aliens!

The other item on the plate wasn’t on the normal discussion agenda but was brought up by us on the international call – we bombarded the screen with around 10–15 queries and at least 4 made it to the table. One of them was: when the hell are we going to get to do our own plugins?

No offence to Alienvault, who currently do our clients’ custom plugins for USM-A – but a 3–4 week turnaround isn’t really going to cut it. Furthermore, sometimes we don’t even get what we want from the custom plugins. We don’t blame Alienvault. The application is ours (as in, our client’s). We are the ones who know the events and the priorities. We know what we want to see. We just can’t develop the plugins the way we do now for our USM Appliance clients.

Imagine the win-win situation here. We write plugins for clients (assuming it’s similar to the Appliance), and within 2–3 days we are done. Testing, another 1–2 days. Instead of setting the project timeline back 3–4 weeks, we are 1 week in. That’s a HUGE impact for compliance clients who are often chasing a deadline. 3 weeks squashed to 1? Hell, yeah! The win is also for Alienvault. They don’t have to deal with nagging customers or smart-ass channel partners like us banging on them for not updating us on our new application plugin. Imagine the parties engineers can now attend instead of writing regex for a company operating in Elbonia. Imagine the time they can now save and spend socialising with the rest of the world, or having the chance to meet people like Daisy.

It’s a whole new world, really.

So, Alienvault, please, get those updates to us as soon as you can and the world will be a better place for it.

If you need any information on Alienvault, or general help on your SIEM or PCI-DSS compliance, drop us an email at alienvault@pkfmalaysia.com and we will attend to it immediately!

Alienvault USM – Flat File Log Capture – Part 1

We’ve been working with and on Alienvault since the beginning of 2016, and a lot has changed since then. When we started out with Alienvault, they were still a small-ish company with big ambitions, working with a very technical group out of Cork, Ireland. We had direct access to their technical team (I think even to one particular engineer), and the amount of knowledge we got from those early days is pretty much invaluable to where we are right now. Of course, Alienvault has changed a lot since then, now being part of AT&T – and for the record, we believe they have the right roadmap to go into the cloud with their USM Anywhere concept, and their product right now is much more robust and enterprise-ready. They are on the right trajectory.

However, back in the day, with the Alienvault USM Appliance (not Anywhere), which is their appliance offering, we could literally ‘jailbreak’ the system, go into the underlying OS and do things to Alienvault that we can’t do anymore in Anywhere. Some of the changes we made were to increase optimisation, put in our own scripts to clean up the system, troubleshoot the system and, of course, create plugins for custom applications. I remember writing custom plugins in 1–2 days for multiple applications because of deadlines – so much to do in so little time – but we did it anyway. We had to write a plugin for one of the oldest mainframes in a financial institution, so difficult to interpret that we had to dig up old manuals to sort out the log and event entries. It was like interpreting Egyptian hieroglyphs. But that’s what it took – 2 days, I think, because of compliance requirements and the customer breathing down our neck to get it done.

Writing plugins was the easier part of the battle – on some old machines or legacy applications, getting the logs out was the problem. If Alienvault doesn’t get the logs, it can’t do anything with them. One solution was to leverage the HIDS (Host IDS), or OSSEC as it was known, to grab log files from systems. It wasn’t elegant, and we still ended up writing plugins to normalise the logs, but it resolved the issue of an application that was not able to forward logs to the SIEM, not able to write them to the Windows Event service, or otherwise unable to get logs out to a syslogger. The solution here is for the application to just write its logs to a file, and for Alienvault to go and grab that file and interpret it. It may not be real time, but it works.

There’s a good write-up over at Alienvault at https://www.alienvault.com/documentation/usm-appliance/ids-configuration/process-reading-log-file-with-hids-agent-windows.htm, so a lot of this is a repeat, and probably an exposition on why we are doing certain things in a certain way.

So the first thing to do here is to ensure that you are able to install HIDS on the server. HIDS will be the key to getting this file out to Alienvault. Technically, you could use NXLog as well, but let’s explore that another time.

Once HIDS is installed, get into the OSSEC configuration file (ossec.conf) to define the <localfile> location. Assume you have configured your application to write to a flat file called database.log.txt.
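
A minimal sketch of that entry – the path here is hypothetical, and <log_format>syslog</log_format> is what OSSEC uses for generic single-line text logs:

    <localfile>
      <location>C:\app\logs\database.log.txt</location>
      <log_format>syslog</log_format>
    </localfile>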

Go ahead and restart OSSEC. That’s pretty much all you need to do to start off, so it’s pretty simple.
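
On a Windows server, assuming the standard OSSEC agent install (where the service is registered as OssecSvc), the restart would look something like:

    net stop OssecSvc
    net start OssecSvc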

The rest of it is all done on Alienvault.

To summarise the steps:

Enable “logall” on your USM Appliance. You want to dump whatever you are getting in that flat-file database log into a log inside your Alienvault, so you can start doing stuff with it. In this case, in your AV user interface, go to:

Environment > Detection > HIDS > Config > Configuration.

Add <logall>yes</logall> to the <global> section of the file.
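
The fragment would sit something like this (existing settings in <global> left as they are):

    <global>
      <!-- existing settings retained -->
      <logall>yes</logall>
    </global>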

You are dumping these logs into /var/ossec/logs/archives/archives.log.

Restart the HIDS service through the UI.

You should be able to see new logs coming into archives.log. Just do a tail -f on it, edit the log file (database.log.txt) on your remote system (just write something to it) and see if it appears in your archives.log. Once you see it, you are almost done. Very simple.
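
For example, if the test line you wrote into database.log.txt was ‘hello-alienvault’ (a made-up marker), on the USM Appliance console:

    tail -f /var/ossec/logs/archives/archives.log | grep "hello-alienvault"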

So for now, you have customised logs coming into your Alienvault. The next thing to do is to interpret these logs and make sure events can be derived from them into something that is useful to you!

Drop us an email at alienvault@pkfmalaysia.com for more information on Alienvault or any technical queries you have, and we will attend to it.

PCI-DSS Full Disk Encryption Part 2

In our previous article, we wrote about how BitLocker can possibly be used as a full disk encryption solution for PCI-DSS.

One of the key things is for the following statement to be complied with:

If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.

Enabling TPM by itself doesn’t guarantee that native authentication is separated from logical access to the encrypted file system.

The steps below basically enable TPM with PIN, to ensure there is the additional layer of logical access required to comply with PCI-DSS.
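
As a sketch, the configuration amounts to the following – the drive letter is an assumption, and the Group Policy path is from standard Windows documentation:

    REM Group Policy: Computer Configuration > Administrative Templates >
    REM   Windows Components > BitLocker Drive Encryption > Operating System Drives >
    REM   "Require additional authentication at startup" = Enabled

    REM Then add a TPM+PIN protector to the OS drive (C: assumed) and verify:
    manage-bde -protectors -add C: -TPMAndPIN
    manage-bde -status C: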

So overall, this means that BitLocker needs extra authentication when the server restarts. As the policy states, either a PIN/passphrase or a USB startup key will be required at startup, and from a PCI perspective, this addresses the separate-authentication requirement.

Of course, the major discussion here is: what is compliance, and what is practical security?

Because when was the last time you actually restarted your server? The fact is that full disk encryption only protects data on the disk when the system is not running. When the server is running, access to the disk remains open, and therefore we see unprotected PANs with their pants down (so to speak).

We are not saying that BitLocker cannot comply with 3.4.1 of PCI. We are saying PCI might be better off relooking at 3.4.1 and clarifying the ‘spirit’ of the requirement. At the end of the day, we are concerned with loss of PAN. We are concerned that files may be taken away, siphoned off through a variety of means – through the network, or USB, or photos on your phone, etc.

The problem with full disk encryption is that even with separate authentication to boot the server, once it’s booted and authenticated, the full disk encryption no longer does the job of ‘rendering PANs unreadable where they are stored’. The argument then becomes that once that occurs, whoever is reading those PANs is already an authorised user with a business requirement to view them.

In our opinion, there needs to be much more security surrounding servers with PANs that use full disk encryption. Access must again be limited to only those with business justification, and the server must not be used for multiple purposes, especially not for non-PCI usage. Logical access, hardening, logging and monitoring obviously need to be in place. The PIN must be protected and changed in line with PCI-DSS expiry policies.

The comfort level with FDE versus, say, file encryption or even folder encryption is much lower. Does it meet 3.4.1? If done properly, it clearly does. But is it truly secure? Therein lies the discrepancy between compliance and security. It ticks the checkbox (for now, unless PCI alters it in version 4.0), but from a security standpoint, there is a lot of risk surrounding it.

If you use FDE, don’t expect your QSA to just take it lying down. Most likely, further queries will be made, and some may deem it insufficient in itself to address the risk of PAN compromise and may request additional controls on top of it.

If you have further queries on FDE or any compliance programs like PCI, ISO etc, drop us an email at avantedge@pkfmalaysia.com and we will attend to it immediately!
