
PCI-DSS For Software Developers

Of late, we have been receiving numerous calls from software developers asking us how on earth they become PCI-DSS certified.

It's never easy to explain over the phone, especially with misconceptions that PCI-DSS is a licence, or a piece of software, or a solution, or some sort of exam. And how do we go about explaining that technically they don't (or can't) get PCI certified as a software vendor – but they can opt for PA-DSS or the new Secure Software Standard from the PCI SSC?

So the first thing to ask is (assuming this application/solution is handling credit card information):

a) Are you developing software only and selling that software to your customers?

b) Are you developing a solution that you host and manage yourself, and that clients connect to as a service?

If it's a), PCI-DSS applies to the customer buying your software, not to you as a company. After all, you generally don't handle credit card data – your customer does. However, your software is likely in scope for their PCI-DSS assessment, so there could be instances where you need to participate in your client's assessment, or to develop your software in a manner that is "PCI compliant". Developing PCI-compliant software doesn't make it certified, but it does help your clients get certified. An example would be developing your solution with logging capability, able to log to a central location. Another is your solution being able to integrate with AD, or having PCI-compliant password policies (session timeouts, password expiry, etc.). Other examples are ensuring there is role-based authentication and authorisation, or ensuring encryption is properly done for data at rest and in transit. Doing all this doesn't make your software immediately PCI certifiable – but it does give your client less of a headache.

If it's b), then you are not considered just a software developer but a service provider. You are providing SaaS, so generally that makes you responsible for the day-to-day security of card data on behalf of your client. In that case, PCI-DSS can be applied to you, your solution and your processes.

As with PA-DSS, the new Secure Software Program applies to the following software:

Software products involved in or directly supporting or facilitating payment transactions that store, process, or transmit clear-text account data.

Software products developed by the vendor that are commercially available for sale to multiple organizations.

So all the CRM systems, call systems, in-house systems and customised systems are not eligible for PA-DSS or the new program. This is in line with how it has always been, anyway.

So that leaves us back to square one. What happens if you are not eligible for PA-DSS or Secure Software program and you are just a software developer and NOT a service provider, but your client is insisting on you being PCI-DSS certified?

Well, hopefully you can explain it to them, or point them to this article. Another option is to state that you have developed your software to be compliant with PCI requirements. The following list (not comprehensive) shows what it takes to address PCI compliance:

1. Requirement 2 – Ensure there is no clear-text administrative access.

2. Requirement 3 – Where the application stores account data, strong encryption must be used.

3. Requirement 4 – The application must encrypt card data transmitted over open, public networks.

4. Requirement 6 – Maintain a secure software development process – secure code reviews, removal of test data before rolling to production, and prompt patching of the application when bugs are discovered.

5. Requirement 8 – Ensure the application can support PCI-DSS password requirements, with passwords encrypted at rest and in transmission.

6. Requirement 10 – Ensure the application is capable of sending logs to a SIEM; also, application penetration testing should be conducted, with the testing methodology documented.

Requirements affecting Software: Sample Evidence

– For all system components in scope (servers, network devices, applications, databases, etc.) and POS devices, provide evidence of strong cryptography being implemented (SSH, TLS 1.2 or later, RDP over TLS, etc.).

– Provide the following for all filesystems, databases and any backup media:
– details on the method (encryption, hashing, truncation, tokenization) being used to protect covered information in storage;
– evidence (screenshots or settings) showing covered information is protected.

– Provide evidence of encryption being used for transmission of in-scope data over any open or public communication channel (e.g. Internet, wireless network, GSM, GPRS, VSAT technology, etc.). Encryption must conform to strong industry standards.

– For the selected sample, provide evidence of:
– current patch levels;
– patches being deployed in a timely manner.

– Provide the secure software development process document, in accordance with industry best practices.

– Provide a recent secure code review report for an application that stores, processes or transmits covered information.

– Provide a document that outlines:
– the process for generating test data to be used in lower (test/development) environments;
– the process for removing test data and test accounts before the system moves to the higher (production) environment.

– Provide four sample change requests (two for software modifications and two for security patch implementations) from the last six months.

– Provide the following from a secure code training perspective:
– material used for training;
– an attendee list showing that all developers are covered.

– Provide evidence of logical access account and password features, including:
– account lockout policy;
– account lockout duration;
– session timeout policy;
– password length;
– password complexity;
– password history;
– password expiry.

– Provide evidence that passwords (for platform and/or consumer applications) are encrypted during transmission and storage.

– Provide the audit log policy settings.

– Provide actual event logs for each of the platforms identified in the sample.

– Provide the documented methodology being used for penetration testing.

– Provide an internal penetration test report.

You would get stuck if your clients want to see the PCI-DSS certificate, which obviously you won't have. In this case, the only way forward is to explain that it's not possible for you to be PCI certified in that sense. If you want, you could engage a third-party auditor or even a QSA to assess the application against PCI requirements. You won't get a PCI certificate, but at least you will have a third-party attestation or report, which hopefully should be enough.

Another option is to just get hold of us at pcidss@pkfmalaysia.com and we can maybe provide a bit more persuasion for your client to accept your application for PCI-DSS!

Alienvault USM Anywhere Updates

We just received very good updates from the Alienvault channel team (or the AT&T Cybersecurity team, as they call themselves now). To quickly summarise our excitement into two short phrases:

a) Google Cloud Support – Heck Yeah.

b) Custom Plugin Development – Heck Yeah!

Of course, there were tons of other updates as well: scheduled reports, a unified UI, more AlienApps support, Cloudflare integration (which is very interesting, as we can tie response actions to it, effectively making Alienvault function more like an active prevention system, as opposed to its traditional detective role), new search capability incorporating wildcard searches, and advanced asset importing through CSVs as opposed to rudely scanning our clients' networks.

But the two main courses were the Google native support and the custom plugins.

Google native support has been a pain point for years. We have customers moving into GCP, or already in GCP, where we have been constantly battling to match their expectation that Alienvault performs as seamlessly as it does on AWS – but it couldn't. We had to rely on EDR (endpoint detection and response), for instance, where the agent grabs logs a'la HIDS and sends them over to the server directly. Of course, the areas where a native sensor would function – creating an internal VPC filter mechanism, or doing vulnerability scanning without generating too much inter-VPC traffic – could not be covered by the EDR, so it was very much a band-aid. We knew that our patched-up GCP solution wasn't functioning as well as its handsomer and more dashing brother, AWS. In other words, it kinda sucked.

GCP custom applications also presented their own set of issues – custom apps were difficult to integrate. Even with Stackdriver, or with us logging to BigQuery, sending these logs to Alienvault was a struggle. When we could configure apps to send to BigQuery, we couldn't filter properly, causing our customer's 1TB-per-month quota to be annihilated within days. And getting Pub/Sub to work with Alienvault requires APIs to be written, and on top of that Alienvault has to write the custom plugins – all of which adds to professional services costs and, more importantly, resource and time costs to the project.

So what happens now? In the next General Availability release of USM Anywhere, GCP will be supported. Information is sparse for now, so more updates will be forthcoming. But the GCP sensor will be able to:

a) Perform threat detection (like all other sensors) and asset discovery, and provide alarms, events, widgets, correlation, etc. Basically, it will be native to GCP, doing what it already does for AWS, Azure and on-prem Hyper-V and VMware.

b) Collect and analyse VPC flow logs

c) Monitor cloud services through Stackdriver

The last bit is very important. Stackdriver, in essence, is GCP's answer to AWS's CloudWatch and CloudTrail. It monitors and manages services, containers, applications and infrastructure for the cloud. If you run cloud services or develop cloud applications, you should be able to support Stackdriver logging. In GCP Compute, the logging agent is used to stream logs from VM instances. It can even provide the traditional network flow logs (or VPC flow logs), which MSPs can use to monitor network health and so on. In other words, this ugly GCP little-brother solution is going to get buffed. We're going to look a lot better now.

The roadmap is bright: automatic response actions against a cloud service when a security event occurs – putting Alienvault into more of a proactive stance than the detective stance it traditionally takes. This is similar to what the Cloudflare integration is achieving. More and more GCP services will be supported. There is also a topic on "User Entity Behaviour Analytics" – which is basically matching behaviour against normal baselines and telling us that Bob is having coffee at 10 am instead of his usual 8 am, which meant he was running late to work, which meant he got stuck in traffic, which meant he left the house late, which meant he woke up late, which meant he slept late last night, which meant he went out for a drink with someone and got smashed, which could possibly mean he is having an affair with a stripper named Daisy. Maybe.

So, pretty exciting times, Aliens!

The other item on the plate wasn't on the normal discussion agenda but was brought up by us on the international call – we just bombarded the screen with around 10 – 15 queries and at least 4 made it to the table. One of them was: when the hell are we going to get to do our own plugins?

No offence to Alienvault, who currently does our clients' custom plugins for USM-A – but 3 – 4 weeks isn't really going to cut it. Furthermore, sometimes we don't even get what we want from the custom plugins. We don't blame Alienvault. The application is ours (as in our clients'). We are the ones who know the events and the priorities. We know what we want to see. We just can't develop the plugins the way we do now for our USM Appliance clients.

Imagine the win-win situation here. We write plugins for clients (assuming it's similar to the Appliance); within 2 – 3 days we are done. Testing, another 1 – 2 days. Instead of setting the project timeline back 3 – 4 weeks, we are 1 week in. That's a HUGE impact for compliance clients who are often chasing a deadline. 3 weeks squashed to 1? Hell, yeah! The win is also for Alienvault. They don't have to deal with nagging customers or smart-ass channel partners like us banging on them for not updating us on our new application plugin. Imagine the parties engineers can now attend instead of writing regex for a company operating in Elbonia. Imagine the time they can now save and spend socialising with the rest of the world, or having the chance to meet people like Daisy.

It’s a whole new world, really.

So, Alienvault, please, get those updates to us as soon as you can and the world will be a better place for it.

If you need any information on Alienvault, or general help with your SIEM or PCI-DSS compliance, drop us an email at alienvault@pkfmalaysia.com and we will attend to it immediately!

Alienvault USM – Flat File Log Capture – Part 1

We've been working with and on Alienvault since the beginning of 2016, and a lot has changed since then. When we started out with Alienvault, they were still a small-ish company with big ambitions, working with a very technical group out of Cork, Ireland. We had direct access to their technical team (I think even to one particular engineer), and the amount of knowledge we got from those early days is pretty much invaluable to where we are right now. Of course, Alienvault has changed a lot since then, now being part of AT&T – and for the record, we believe they have the right roadmap going into the cloud with their USM Anywhere concept, and their product right now is much more robust and enterprise-ready. They are on the right trajectory.

However, back in the day, with the Alienvault USM Appliance (not Anywhere) – their appliance offering – we could literally 'jailbreak' the system, get into the underlying OS and do things to Alienvault that we can't do anymore in Anywhere. Some of the changes we made were to optimise the system, put in our own scripts to clean it up, troubleshoot it and, of course, create plugins for custom applications. We would write custom plugins in 1 – 2 days for multiple applications because of deadlines – I remember having to do so much in so little time – but we did it anyway. We once had to write a plugin for one of the oldest mainframes in a financial institution, with logs so difficult to interpret that we had to dig up old manuals to sort out the log and event entries. It was like interpreting Egyptian hieroglyphs. But that's what it took – 2 days, I think, because of compliance requirements and a customer breathing down our neck to get it done.

Writing plugins was the easier part of the battle – on some old machines or legacy applications, getting the logs out was the problem. If Alienvault doesn't get the logs, it can't do anything with them. One solution was to leverage the HIDS (Host IDS), or OSSEC as it was known, to grab log files from systems. It wasn't elegant, and we still ended up writing plugins to normalise the output, but it resolved the issue where an application couldn't forward logs to the SIEM, couldn't write them to the Windows Event service, or had no other way to get logs out to a syslogger. The solution here is for the application to just write its logs to a file, and for Alienvault to go grab that file and interpret it. It may not be real time, but it works.

There's a good write-up in Alienvault's documentation at https://www.alienvault.com/documentation/usm-appliance/ids-configuration/process-reading-log-file-with-hids-agent-windows.htm, so a lot of this is a repeat, plus some exposition on why we do certain things in a certain way.

So the first thing to do here is to ensure that you are able to install HIDS on the server. HIDS will be the key to getting this file out to Alienvault. Technically, you could use NXLog as well, but let's explore that another time.

Once HIDS is installed, get into the OSSEC configuration file and define the <localfile> entry. Assume you have configured your application to write to a flat file called database.log.txt; the entry would look something like the sketch below.
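Here's a minimal sketch of the agent-side ossec.conf fragment – the file path is an assumed example, so point <location> at wherever your application actually writes:

<localfile>
  <!-- "syslog" is the OSSEC format for plain, single-line log files -->
  <log_format>syslog</log_format>
  <!-- assumed path: replace with your application's actual log file -->
  <location>C:\app\logs\database.log.txt</location>
</localfile>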

Go ahead and restart OSSEC. That's pretty much all you need to do to start off, so it's pretty simple.

The rest of it is all done on Alienvault.

To summarise the steps:

Enable "logall" on your USM Appliance. You want to dump whatever arrives from that flat-file database log into a log inside your Alienvault, so you can start doing stuff with it. In this case, in your AV user interface:

Environment > Detection > HIDS > Config > Configuration.

Add <logall>yes</logall> to the <global> section of the file, as in the sketch below.
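The fragment would look something like this (a sketch; the rest of the <global> section stays as it is):

<global>
  <!-- dump every received event into archives.log, not just alerts -->
  <logall>yes</logall>
  ...
</global>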

You are dumping these logs into /var/ossec/logs/archives/archives.log.

Restart the HIDS service through the UI.

You should be able to see new logs coming into archives.log. Just do a tail -f on it, edit the log file (database.log.txt) on your remote system (write something to it) and see if it appears in your archives.log. Once you see it, you are almost done. Very simple.

So for now, you have customised logs coming into your Alienvault. The next thing to do is to interpret these logs and make sure events can be derived from them – events that are actually useful to you!

Drop us an email at alienvault@pkfmalaysia.com for more information on Alienvault or any technical queries you have, and we will attend to it.

PCI-DSS Full Disk Encryption Part 2

In our previous article we wrote about how BitLocker can possibly be used as a full disk encryption solution for PCI-DSS.

One of the key things is for the following statement to be complied with:

If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.

Enabling TPM by itself doesn't guarantee that native authentication is separated from logical access to the encrypted file system.

The steps sketched below enable TPM with a PIN, ensuring there is the additional, separate logical access required to comply with PCI-DSS.
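As a minimal sketch (assuming BitLocker is already enabled on the OS drive and the TPM is initialised):

# 1. In Group Policy, enable: Computer Configuration > Administrative Templates >
#    Windows Components > BitLocker Drive Encryption > Operating System Drives >
#    "Require additional authentication at startup", and set
#    "Configure TPM startup PIN" to "Require startup PIN with TPM".

# 2. Add the TPM+PIN protector to the OS drive (you will be prompted to set the PIN):
manage-bde -protectors -add C: -TPMAndPIN

# 3. Verify the protectors on the drive:
manage-bde -protectors -get C: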

So overall, this means that BitLocker needs an extra authentication step when the server restarts. As the policy states, a PIN (or a USB startup key) will be required at startup, and from a PCI perspective, this addresses the separate-authentication requirement.

Of course, the major discussion here is: what is compliance, and what is practical security?

Because when was the last time you actually restarted your server? The fact is that full disk encryption only protects data on the disk while the system is not running. When the server is running, access to the disk remains open, and therefore we see unprotected PANs with their pants down (so to speak).

We are not saying that BitLocker cannot comply with 3.4.1 of PCI. We are saying that PCI might be better off relooking at 3.4.1 and clarifying the 'spirit' of the requirement. At the end of the day, we are concerned with loss of PAN. We are concerned that files may be taken away, siphoned off through a variety of means – through the network, or USB, or photos on your phone, etc.

The problem with full disk encryption is that even if we do have separate authentication to boot the server, once it's booted and authenticated, the full disk encryption no longer does the job of 'rendering PANs unreadable where they are stored'. The counter-argument is that once that occurs, whoever is reading those PANs is already an authorised user with a business requirement to view them.

In our opinion, there needs to be much more security surrounding servers with PANs that use full disk encryption. Access must again be limited to only those with business justification, and the server must not be used for multiple purposes – especially not for non-PCI usage. Logical access controls, hardening, logging and monitoring obviously need to be in place. The PIN must be protected, and changed in line with PCI-DSS expiry policies.

The comfort level with FDE versus, say, file encryption or even folder encryption is much lower. Does it meet 3.4.1? If done properly, it clearly does. But is it truly secure? Therein lies the discrepancy between compliance and security. It ticks the checkbox (for now, unless PCI alters it in 4.0), but from a security standpoint, there is a lot of risk surrounding it.

If you use FDE, don't expect your QSA to just take it lying down. Most likely further queries will be made, and some QSAs may deem it insufficient in itself to address the risk of PAN compromise, and may request additional controls on top of it.

If you have further queries on FDE or any compliance programs like PCI, ISO etc, drop us an email at avantedge@pkfmalaysia.com and we will attend to it immediately!

PCI-DSS Full Disk Encryption Part 1

In PCI-DSS, one of the most difficult requirements to get through is Requirement 3, which deals with stored credit card information and how to protect it. Along with Requirement 10 (logging) and Requirement 6 (software), Requirement 3 (storage) makes up the bulk of the remediation effort and cost of PCI-DSS.

The standard ominously states at the beginning: Protection methods such as encryption, truncation, masking, and hashing are critical components of cardholder data protection. If an intruder circumvents other security controls and gains access to encrypted data, without the proper cryptographic keys, the data is unreadable and unusable to that person. Other effective methods of protecting stored data should also be considered as potential risk mitigation opportunities. For example, methods for minimizing risk include not storing cardholder data unless absolutely necessary, truncating cardholder data if full PAN is not needed, and not sending unprotected PANs using end-user messaging technologies, such as e-mail and instant messaging.

It goes without saying that if you have credit card information on file for whatever reason, it would be a good time to relook at the necessity of it. If you don’t need it, get rid of it, because the cost of maintenance and remediation may not be worth whatever value you think you are obtaining from storage of card data.

If you do need it, well, PCI provides a few options for you to protect it: Encryption, Truncation, Masking and Hashing. In this series of articles we will be looking into encryption and more specifically Full Disk Encryption.

Encryption itself deserves a long drawn-out discussion, as do the types of encryption – you have applications encrypting through an application library, database encryption like TDE, file or folder encryption, and full disk encryption. One part is the encryption methodology. The other part is the encryption key management. The latter is the one that usually throws up headaches.

We will be exploring Full Disk Encryption, or FDE, and where it can be implemented to comply with PCI-DSS.

There is a specific part in 3.4.1 stating:

If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.

So aside from the encryption being strong and the key management being done properly, PCI says there are a few more things to be aware of for full disk encryption:

a) Logical access must be separate and independent of the native OS authentication

b) Decryption key must not be associated with the user account.

What does this mean?

Let’s look at Bitlocker for now, since that’s everyone’s favourite example.

BitLocker has taken a lot of stick, probably because it's a native Microsoft offering. Maybe. I don't know. The fact is BitLocker is able to use 128-bit or 256-bit AES, so in terms of strong cryptography, it qualifies. It's the key management that's the issue.

For key management, the recommended usage with BitLocker is the Trusted Platform Module (TPM) version 1.2 or later. The TPM is a hardware module in your server that acts, to simplify it, somewhat like a key vault or key management module. It offers system verification to ensure there has been no tampering with the system at startup. Beginning with Windows 10, version 1803, you can check TPM status in Windows Defender Security Center > Device Security > Security processor details. In previous versions of Windows, open the TPM MMC console (tpm.msc) and look under the Status heading. You can also check from PowerShell, as sketched below.
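A quick sketch, from an elevated PowerShell prompt (output trimmed and illustrative):

Get-Tpm

# TpmPresent : True
# TpmReady   : True
# ...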

BitLocker can also be used without TPM, although that means the system integrity checks are bypassed. It can operate alongside Active Directory, although newer versions of BitLocker no longer store the password hash in AD by default. Instead, a recovery password can be stored in AD if required, as sketched below.
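A sketch of that, assuming Group Policy has been configured to allow BitLocker recovery information to be backed up to AD DS:

# Add a 48-digit recovery password protector; note the protector ID in the output:
manage-bde -protectors -add C: -RecoveryPassword

# Back that protector up to AD, using the ID printed above (placeholder shown here):
manage-bde -protectors -adbackup C: -id "{PROTECTOR-ID-FROM-ABOVE}"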

Even with the TPM, that's still not the end of it, because we need to make sure there is a separation of authentication for BitLocker to operate. In this case, we will configure it with a PIN (which is essentially a password that you know).

First of all, let's see what we should end up with: the data volumes reported as fully encrypted. I've been asked before if all volumes need to be encrypted, and the answer is no – BitLocker can't do that anyway, since the System Reserved (boot) partition has to remain unencrypted. So for PCI, it makes sense NOT to store card data on drives that are not encrypted. A quick way to check the volume status is sketched below.
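From PowerShell (the host name and drive letters are made-up examples, and the output is illustrative):

Get-BitLockerVolume

# ComputerName  VolumeType  MountPoint  CapacityGB  VolumeStatus     ProtectionStatus
# ------------  ----------  ----------  ----------  ------------     ----------------
# CDE-SRV01     Data        D:              500.00  FullyEncrypted   On
# CDE-SRV01     Data        E:              500.00  FullyEncrypted   On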

The next thing to check is that your setup fulfils the strong-encryption requirement of PCI-DSS – that strong crypto is enabled and the key protectors are in place. One way to verify this is sketched below.
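A sketch using manage-bde (output trimmed and illustrative; newer Windows versions may report XTS-AES 256 instead):

manage-bde -status D:

# Volume D: [Data]
#     Conversion Status:    Fully Encrypted
#     Encryption Method:    AES 256
#     Key Protectors:
#         Password
#         Numerical Password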

With that, BitLocker is enabled. You also need to properly document the key management policy – including whether you are using AES-256 or AES-128, which drives are protected, and the key expiry dates.

Keep in mind also the following:

Think of the FVEK (Full Volume Encryption Key) as the DEK and the VMK (Volume Master Key) as the KEK. The FVEK is stored, encrypted by the VMK, in the volume metadata on the hard disk itself, while the VMK is sealed by the TPM (a hardware chip on the motherboard), which releases it only if the PCR register measurements show the system has not been tampered with.

In general, the above fulfils the PCI requirements. In our next article, we will write about how logical access to the encrypted file system can be separated from the native OS authentication mechanism.

In the meantime, please drop us any enquiries at pcidss@pkfmalaysia.com if you need to know more about PCI-DSS or any IT compliance matters. We are here to help!
