
PCI DSS and the Problem of Scoping


I recall an actual case a few years back when I received a call from a company asking us to do a PCI certification for them. So I met them and drew out their PCI plan, starting with a gap assessment, then remediation, then the certification audit.

They said they had already done their own gap assessment internally through their ISMS team, and that they would be doing all the remediation on their own. They just needed me to quote for the certification audit because “PCI is forcing us to be certified by a third party, which we believe we can do better than you can”.

There was not much to talk to them about, but I did mention that if we found major NCs (non-compliances, in ISMS speak), we would end up using that ‘certification audit’ as our own gap assessment and might be required to come back again to verify.

The company truly believed that PCI was a subset of ISMS and they handled it as such.

So we came in for the certification and found that their entire scope was completely messed up. For instance, there was another supposedly out-of-scope network with systems connecting into their CDE for monitoring. Because card data wasn’t passing through it, they had marked it as out of scope. Unfortunately, PCI doesn’t see it that way. This would be considered non-CDE in scope, and systems within this network need to be secured and hardened as per PCI as well. The logic is that if these systems are compromised, there is a path into the CDE that can be exploited.

They made a huge fuss over this, claiming that they were willing to absorb the risk and that their management had signed off on the risk assessment.

ISMS is a best practice/guideline at best – it’s a great marker for security – but PCI is a standard. If you can’t meet a requirement, then you don’t meet it. Of course, there are ways around this particular issue, but they insisted we pass them simply because their management had accepted the risk.

Here’s another idea: PCI-DSS generally doesn’t really care about your business. It’s not about you. It’s about card data. Visa/Mastercard and the Jedi PCI council are not concerned about your business – they are concerned about the confidentiality and integrity of card data. That’s why you will not find any BCM or DRP requirement in PCI. RTO and RPO? Pfft. They don’t care. Your business can go down for 10 weeks but as long as card data is safe, it’s good.

And that’s why scoping is HUGELY important. Many people might think that a gap assessment is a waste of time. It is, if it’s done incorrectly. I recently witnessed a ‘gap assessment’ report that was a complete mess. It just listed the twelve PCI requirements and, for each one, gave an overview of the company’s controls and what they should be doing, ripped off almost verbatim from the actual standard itself – which can be downloaded for free.

A gap assessment needs to bring you from one place to another, and it needs to provide the following:

a) A clear understanding of your scope, including a write-up on your network and the processes that have been assessed. It should also be clear what is out of scope. This initial scope is usually not set in stone, as remediation will sometimes change what is in scope and what is not, but at least you have something concrete to start with.

b) If possible, an asset register for PCI. If this is not possible (for many reasons, e.g. some assets required for a control have not been purchased yet), then the asset inventory needs to be prioritised as quickly as possible to see what is in scope and what is not. The register should clearly cover public IPs, internal devices, servers, network devices, people involved, desktops, databases and so on.

c) Networks in scope and out of scope. This is key, as companies are required to identify the segments scoped out and do segmentation testing. The CDE must be clearly marked, and non-CDE in scope (we call it NCIS) must also be identified. Systems in NCIS could be monitoring systems, SIEM, AD and so on. Any system that connects to the CDE but does not store, transmit or process card data is considered NCIS. NCIS must be scoped for testing, quarterly scans, hardening and such (see the scoping sketch after this list).

d) A clear roadmap for remediation and recommendations on how to proceed, specific to the organisation. Each ‘gap’ should have a corresponding solution (or solutions).
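To make the scoping idea above concrete, here is a minimal, hypothetical sketch (in Python) of how an asset register entry could be classified into CDE, NCIS or out of scope, following the same logic described in (b) and (c). The asset names and fields are illustrative only, not any official PCI format.

```python
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    handles_card_data: bool   # stores, processes or transmits cardholder data
    connects_to_cde: bool     # has any network path into the CDE
    notes: str = ""


def classify(asset: Asset) -> str:
    """Rough scoping logic: CDE, non-CDE in scope (NCIS), or out of scope."""
    if asset.handles_card_data:
        return "CDE"
    if asset.connects_to_cde:
        return "NCIS"         # still needs hardening, scans and testing
    return "Out of scope"


# Hypothetical inventory entries, echoing the monitoring network in the story above.
inventory = [
    Asset("payment-app-01", handles_card_data=True, connects_to_cde=True),
    Asset("siem-collector", handles_card_data=False, connects_to_cde=True,
          notes="monitoring only - no card data, but it has a path into the CDE"),
    Asset("corporate-wiki", handles_card_data=False, connects_to_cde=False),
]

for a in inventory:
    print(f"{a.name:16s} -> {classify(a):12s} {a.notes}")
```

The point of the sketch is simply that connectivity, not just the presence of card data, drives the classification.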

If the gap assessment doesn’t give you any of these, then it’s pretty useless. If it doesn’t move you forward or provide you with the information to move forward, it’s not a gap assessment. It’s an expensive training session.

So, back to the customer in the first example. It wasn’t possible for us to certify them no matter how they argued, simply because they were not compliant (there were many other areas of non-compliance as well, for instance storing card data in text files and sending it via email).

As a lesson – don’t neglect proper scoping. It’s hard work, but as I always say: start wrongly, do wrongly, finish wrongly. And that’s 6 – 8 months down the drain, thousands of ringgit of investment gone, and your job on the line. The opposite problem is just as pertinent: there is as much of a chance to OVERscope as there is to UNDERscope.

An overscoping example would be purchasing all sorts of snazzy security systems worth thousands of ringgit, only to find that they were not needed or that current controls were sufficient. It’s nice to have – but most of our customers, no matter how big they are, keep a close eye on the budget, and cost optimisation is at the top of their priorities.

If you want us to help you in your PCI-DSS scoping, drop us a note at avantedge@pkfmalaysia.com and we can get you started with the initial understanding straight away!

Deployment of Alienvault in Practice Part 2


So now you have a server instance of Alienvault in your network and you need to get your sensors up and running.

While the majority of small deployments can do with an All-In-One, there are reasons why you might need a separate server/sensor configuration. Remote sites, for instance, where you want the sensor located onsite to perform log normalisation, vulnerability assessment, availability monitoring and so on. The sensor does a fair bit of work as well – and on top of that, it balances out the EPS. Remember, the AIO has a limit on EPS, so if you are looking at anything beyond 1,000 EPS, you are going to struggle to keep up with the events without a sensor (a rough sizing sketch follows below).
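As a back-of-the-envelope illustration of that EPS point, here is a small sketch. The log sources, daily volumes and peak factor are made-up numbers; only the rough 1,000 EPS AIO ceiling comes from the discussion above, so plug in your own figures.

```python
# Hypothetical daily event counts per log source - replace with your own sizing numbers.
events_per_day = {
    "firewalls": 40_000_000,
    "domain_controllers": 8_000_000,
    "web_servers": 15_000_000,
}

PEAK_FACTOR = 3        # assume peak-hour traffic runs ~3x the daily average
AIO_EPS_LIMIT = 1_000  # rough AIO ceiling mentioned above

average_eps = sum(events_per_day.values()) / 86_400   # seconds in a day
peak_eps = average_eps * PEAK_FACTOR

print(f"Average EPS: {average_eps:,.0f}  Estimated peak EPS: {peak_eps:,.0f}")
if peak_eps > AIO_EPS_LIMIT:
    print("Peak exceeds the AIO comfort zone - consider a dedicated sensor (or several).")
```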

Deploying a sensor is straightforward.

First, it’s important to understand that a sensor does not have a GUI front end, so all configuration is done through the AlienVault Setup menu or the CLI. This doesn’t make it any more difficult – in fact, the hardest part is keying the license key into the menu, since you can’t cut and paste, so you need to make sure you type it correctly.

Second, you should always have a server instance before you go around setting up the sensor.

In the AlienVault Setup menu, go to Configure Sensor -> Configure AlienVault Server IP. This should be your server IP. Some have asked whether it should be the management IP or the logging IP. It should be the management IP – unless, of course, your management IP is not reachable from the sensor, in which case use the logging IP of your server.

So go ahead and do the same for your framework IP address as well. Apply all changes and you are set.

Head back to the server, and go to the UI

Configuration->Deployment->Sensors

You will see the following message

Warning: The following sensors are being reported as enabled by the server, but aren’t configured

Don’t worry about this; just click on Insert and you are done. It’s that straightforward. You will see the sensor listed with the context it’s under and its version, and the status should have a check mark next to it.

The final part is to get the Logger up and running.

Unlike the sensor, the Logger is set up via the UI.

What’s important to understand here is that the flow is Sensor -> Server -> Logger.

So the Logger is actually the end of the flow, where all your logs are forensically stored, archived and validated. As far as the server is concerned, it sees the Logger as a parent.

ON THE LOGGER

Head over to the Logger UI (having already set it up initially as you did the server, with IP addresses, licenses and so on).

Go to Configuration->Deployment-> Servers and use “Add Server”

Again go ahead and use the IP address you have been using to define your server during your sensor config.

Once you have added the server and saved, head back to the Server screen and click on your logger instance (which should be there by default already)

Now select “NO” for everything except “LOG” in the form.

That’s it. You shouldn’t need to type in the REMOTE USER and all that, as this is done later on the Server.

ON THE SERVER

Now, back to the Server UI. Go to the same Configuration->Deployment->Servers.

It sometimes can get confusing here as the UI is the same, so make sure you name your Logger and Server appropriately!

On the server, you should see both the SERVER and LOGGER under the UI.

Modify the LOGGER (remember, you are on the SERVER UI, NOT THE LOGGER UI).

You won’t be able to change anything else in there, but you can set the remote admin user and password to log into the Logger. Use the admin credentials (not root) and let the URL populate itself by clicking on it.

Set “Remote Logger”

Finally, go back to the server screen and click on the SERVER -> Modify

You can now opt to set Log to NO. Under that, in the Forward Servers option, click Add Server and go ahead and add your Logger.

Save and Apply all changes.

Click on Server Hierarchy and we have a nice primitive depiction of the Server pointing to the Logger. Well Done!

Now – a note: if you are using an AIO as the server instance, you can set Log to YES on the AIO. That means you are logging in both locations.

In your Logger interface, you will see two different coloured boxes depicting which logger each event was sent to.

If for some reason you want to say, OK, for assets 1 – 20 send to the AIO, and for assets 21 – 100 send to the Logger, you can disable the forwarding we set up above and do it via policies instead. The great thing about AlienVault is that it allows that granular flexibility to control where your AIO forwards (or doesn’t forward) logs.

We will explore Policy Setup in the future.

For now, enjoy your three piece band – Sensor, Server and Logger!

 

PCI-DSS v3.2 is officially published


After some back and forth on the draft versions, PCI v3.2 is now officially published. You can go ahead and download it here: click on the nice little link saying 3.2 and agree to all sorts of terms and agreements nobody ever reads.

Anyway, a little bit of background on this release. Usually, PCI versions are released later in the year, in November. In fact, I even told a few clients that version updates were done in November, until the PCI SSC recently announced that v3.2 would be released in the March/April timeframe due to a few factors described in this article. So yeah, now I need to admit I was bamboozled. PA-DSS v3.2 is likewise to be released sometime in May or June.

So here’s how it works: 3.2 is now officially effective. PCI v3.1 will be retired at the end of October 2016 (basically to allow everyone to complete v3.1 if they are already in the final stages of it). All assessments/audits that occur AFTER October will be against version 3.2. This is important to note, because if a gap assessment begins now and has a timeline that completes AFTER October, you want to use 3.2. For ongoing projects, it is best we scurry and get it all done before October! Chop chop!

There is a bunch of ‘best practices’ that will become requirements by February 2018. Other dates you need to be aware of:

a) June 30, 2016 – for companies that have not yet migrated off SSL/early TLS, you will need to have a secure service offering (meaning an alternative service utilising TLS 1.1 and above; I will go out on a limb here and suggest using TLS 1.2, knowing how prone the PCI folks are to changing things). A quick way to check what your endpoint negotiates is sketched after this list.

b) June 30, 2018 – SSL/early TLS becomes extinct as far as PCI is concerned. No more mitigation plans! The exception is POS terminals that have no known exploits.

c) January 31, 2018 – this is the deadline when new requirements graduate from being ‘best practices’ to mandatory requirements.
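As referenced in (a), here is a minimal sketch of how you could check which TLS versions a service endpoint will negotiate, using Python’s standard ssl module. The hostname is a hypothetical placeholder, and note that a modern OpenSSL build may itself refuse to offer TLS 1.0/1.1 from the client side, so treat a “rejected” result with that caveat – a proper ASV scan remains the authoritative check.

```python
import socket
import ssl

HOST = "payments.example.com"   # hypothetical service endpoint
PORT = 443


def negotiates(version: ssl.TLSVersion) -> bool:
    """Return True if the server completes a handshake when the client is capped at `version`."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE          # we only care about the protocol handshake here
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                return tls.version() is not None
    except (ssl.SSLError, OSError):
        return False


for ver in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    print(f"{ver.name}: {'accepted' if negotiates(ver) else 'rejected'}")
```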

OK, now that that’s out of the way, here’s a snapshot of the main points of v3.2 and what we are facing:

a) New Appendix A3 covers the Designated Entities Supplemental Validation. This basically means that if an acquirer or Visa/Mastercard deems that an entity needs to go through ADDITIONAL requirements on top of the torture it has already endured for PCI, they can impose them. These victims could include companies handling ridiculous amounts of transactions, aggregators, or companies that are constantly breached. So PCI has a whole bunch of extra stuff for you to do, mainly dealing with BAU activities, incident response, documentation and logical access controls.

b) Additional cryptographic documentation – service providers are not going to enjoy this. We will now need to formally document the protocols, key strengths, cryptoperiods and key usage for each key, along with an HSM inventory. This should technically be done anyway in your key management procedure document, but now it’s a requirement. Take a look at NIST SP 800-57 for the key concepts to get you started.

c) 8.3 is significant: multi-factor authentication. Where previous versions stated that two-factor authentication was required for remote access from non-secure networks, 3.2 shifts this requirement to “all personnel with non-console administrative access, and all personnel with remote access to the CDE”. Wait, what? This means that even if you are accessing an administrative UI or page (non-console) from a secure environment, multi-factor (two factors are good enough) is required! I think there will be some pushback on this, as it requires a fair bit of effort. We have until February 2018 to implement it. (A minimal sketch of how a one-time-password second factor works appears after this list.)

d) Another big one is 11.3.4.1 – segmentation penetration testing now needs to be done every SIX months as opposed to once a year. This is not good news for some clients who have segments popping up like acne on a pubescent face. That’s quite a lot of work for them, and it might give them more cause to consider a completely isolated network just for PCI-DSS, with its own link and architecture, as opposed to sharing with multiple not-in-scope segments. Again, we have a grace period till Feb 2018.

e) New requirement 12.11 is interesting. I have always been an advocate of doing constant checks with clients to make sure they are at least practising PCI. We have a free quarterly healthcheck service for clients who take up our other services, and we check exactly this: daily log reviews are done, firewall rules are clean, new systems are documented and hardened, incidents are responded to, changes have proper approval and so on. It’s nice to see that our efforts now have something formal tied to them. Feb 2018 is the deadline.

f) Here’s a downer: Appendix A2. We all know there was some sort of escape loop for those who were caught with SSL and early TLS in their applications – they created mitigation documents which may or may not have been true. Just saying. Now, if you take this route, it is no longer a free pass for your ASV scans or vulnerability scans. If you have these protocols in place, your mitigation plan must fully address the A2.2 requirements. If you are a service provider, take note of A2.3: YOU MUST have a secure service option in place by June 30, 2016! Not 2018. 2018 is when you stop using SSL/early TLS. So this timeline is slightly confusing. Like X-Men: Days of Future Past confusing.
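As flagged in (c), here is a minimal, generic sketch of the time-based one-time password (TOTP, RFC 6238) mechanism that most soft-token second factors use. This is purely to illustrate how the second factor works – it is not how any particular vendor or PCI-mandated product implements MFA, and the shared secret below is a hypothetical example value.

```python
import base64
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP using HMAC-SHA1 - the kind of code a soft token app generates."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # moving factor: current 30s window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# Hypothetical shared secret provisioned to the admin's authenticator app.
SECRET = "JBSWY3DPEHPK3PXP"
print("Current one-time code:", totp(SECRET))
```

The server verifies the same computation on its side; combined with the normal password, that gives you the “something you have” factor 8.3 is after.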

Some main clarifications include:

a) Secure coding training now officially needs to be done annually – you wouldn’t want to guess how much pushback I get when I tell clients it’s annual, and not something to be done whenever they have the budget for it (which is never).

b) Removal of the need to interview developers to ‘demonstrate’ their knowledge – I do a bit of programming, but I would be foolish to think I can go up against a senior developer who eats, breathes and… lives for coding. I’ve seen how awkwardly some younger QSAs struggle with this (determining whether the senior dev guru is good enough) when it’s obviously not something they know much about themselves. Let auditors audit and let developers code.

c) Finally, a note has been added to Requirement 8 to say that the authentication requirements do not apply to cardholder (consumer) accounts, only to administrative and operational/support/third-party accounts. We have always practised this anyway, but now it’s clear.

d) More clarifications on addressing vulnerabilities considered ‘high’ or ‘critical’. I am not a big fan of these. I think every vulnerability should eventually be addressed, just prioritised in terms of timing. Even if it’s low or medium, it’s still important to have a mitigating factor for it. There is a reason why it’s a vulnerability and not something you can sweep under the carpet. (A simple prioritisation sketch follows after this list.)

e) A good note on pentesting in 11.3.4.c – testing now needs to be done by a qualified internal or external resource with organisational independence. Again, we already practise this, but it’s good that it’s now official.
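On (d), here is a tiny sketch of what I mean by “addressed eventually, just prioritised in terms of timing”. The remediation windows below are hypothetical internal SLAs, not timelines mandated by PCI-DSS; the point is simply that low and medium findings get a date too, instead of being swept under the carpet.

```python
# Hypothetical remediation SLAs (days) - set these according to your own policy.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

findings = [
    {"id": "VULN-001", "title": "OpenSSL downgrade issue", "severity": "high"},
    {"id": "VULN-002", "title": "Verbose server banner", "severity": "low"},
    {"id": "VULN-003", "title": "Unpatched SMB service", "severity": "critical"},
]

# Sort by urgency so nothing drops off the list - every finding gets a due window.
for f in sorted(findings, key=lambda f: SLA_DAYS[f["severity"]]):
    print(f"{f['id']}  {f['severity']:8s} fix within {SLA_DAYS[f['severity']]:>3d} days - {f['title']}")
```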

So, that’s about it. Of course, there’s a fair bit more. I suggest you poke through the summary of changes first and then go through the document itself.

Be aware of those dates! They’re all over the place (June 2016, June 2018, Jan 2018), and who knows, they might change again in the future. Happy compliance!

Deployment of Alienvault in Practice Part 1


In this article, we are going to explore deploying AlienVault in practice. While there are many documents out there that give pretty clear steps on what to do, they are fairly scattered, and we don’t want to get to a point where we are 85% into the deployment only to find that we were supposed to do something at the 25% mark and did not do it.

Before anything else, you should have a deployment checklist to make sure everything is in order. The checklist is pretty long, much too detailed to put into a post like this. Email us at alienvault@pkfmalaysia.com and we can get you started.

In this example, we will be using a three-piece band: the server, the sensor and the logger. You can generally just trade the server for an AIO, which we did, but it’s still going to serve as a server. Remember though, with an AIO you also have an additional sensor if you want to enable it, and a logger as well, with around 4 TB of compressed space (versus 9 TB of compressed space for a standalone logger). A rough retention estimate based on those figures is sketched below.
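For a rough feel of what those compressed-space figures mean in practice, here is a small sketch estimating retention. The 4 TB and 9 TB figures come from the paragraph above; the EPS, average event size and compression ratio are assumptions you should replace with measurements from your own environment.

```python
TB = 1_000_000_000_000  # decimal terabytes, good enough for a rough estimate

# Figures from the text above.
aio_logger_space_tb = 4
standalone_logger_space_tb = 9

# Assumptions - replace with your own measurements.
eps = 1_000                # sustained events per second
avg_event_bytes = 1_000    # raw size of a normalised event
compression_ratio = 5      # assumed compression on stored logs

bytes_per_day = eps * avg_event_bytes * 86_400 / compression_ratio

for label, space_tb in [("AIO built-in logger", aio_logger_space_tb),
                        ("standalone logger", standalone_logger_space_tb)]:
    days = space_tb * TB / bytes_per_day
    print(f"{label}: roughly {days:,.0f} days of retention at {eps} EPS")
```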

With that out of the way, and assuming that physically everything is racked and connected and the VMs are up and running, you are ready to go. Remember, if you have separate systems, always start with the server (or the AIO) first, and only then move on to the sensor. Otherwise, your sensor might be orphaned.

Now, of course, if you are using the virtual appliance, your VMware environment needs to be set up. One question we encountered is how many interfaces we should have. Well, you should have the management interface (and use that for log collection), and the other interfaces would be for monitoring. One of the trickier requests here is: hey, I want a separate management interface and log collection interface, so that nobody knows my management interface.

Possible. But we have seen deployments where both the management interface and the log collection interface sit on the same subnet. This is probably going to cause some issues – one of them is that routing will likely get screwed up. Another is that HIDS deployment might constantly refer back to the management interface. So, rule of thumb:

If you only have one subnet, just use the one interface for management and log collection.
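If you want to sanity-check an interface plan against that rule of thumb, here is a tiny sketch using Python’s ipaddress module. The addresses are hypothetical placeholders.

```python
import ipaddress

# Hypothetical interface assignments - replace with your appliance's actual values.
mgmt_if = ipaddress.ip_interface("192.168.10.5/24")   # management / log collection
second_if = ipaddress.ip_interface("192.168.10.6/24")  # proposed separate log collection interface

if mgmt_if.network == second_if.network:
    print(f"Both interfaces sit on the same subnet ({mgmt_if.network}) - "
          "expect routing and HIDS deployment quirks; consider a single interface "
          "for management and log collection.")
else:
    print("Interfaces are on different subnets - a separate log collection "
          "interface is less likely to cause routing issues.")
```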

Another question we get: by default, the AIO comes with six interfaces (because, remember, it’s also a sensor!). Some clients have it in their minds to use all six. Generally, aside from management and log collection, the other interfaces won’t be assigned an IP and will be monitoring interfaces (i.e. put them on a SPAN port and monitor away). Unless you have very specific reasons to, you are unlikely to use all the monitoring interfaces (depending on how you set things up), so don’t feel like you are losing out. A lot of the setups we see simply have the sensor or AIO located at a central switch with a SPAN or TAP, and they monitor just fine.

Another question: thin or thick provisioning for the disk format? Well, we are used to just setting it as thin, meaning the disk will grow as the logs increase, but if you have the space, setting it to thick is fine too. I am not a VMware guru, and I am sure the VMware gurus out there will go into battle over this one, but we’ve deployed on both disk formats and it doesn’t seem to have an extreme impact either way. Of course, I stand to be corrected.

Yet another question (even before we get into deployment!): if I buy hardware with 200 TB of disk, can AlienVault use all 200 TB instead of the measly 1 TB for the AIO and 1.8 TB for the Logger? The short answer is no – the size of the virtual machine is defined in the OVF itself, so if you purchase a ridiculous amount of hard drive space, the AlienVault image is still only going to occupy what it is going to occupy. But hey, you could of course start hosting other virtual systems there and use the space up!

Setting up the server

1) OK, finally, let’s get down to it. Once you boot up – assuming you have installed the OVF correctly if you are running the virtual appliance – you will be dropped into the setup menu. Select manual network interface configuration and define an IP. I would suggest this as opposed to depending on a DHCP server. Aside from that, the other setup parameters are what you would expect, and you should be able to fill them in pretty easily.

One of the annoying things we sometimes face is that when the initial setup is rebooted, we get stuck at that AlienVault face that keeps loading while nothing happens. To be safe, when you reboot, just keep pressing ESC until you see the boot details. If you are still stuck, Alt+F2 might let you escape. Otherwise, you might need to give it the good old Vulcan nerve pinch (Ctrl-Alt-Del).

Other times, you might just be stuck at the VMware console with the annoying “Waiting for connection” message that seems to hang. Your system is fine; it’s just that the VMware console is moody. Restarting your vSphere client might do the trick.

Once you can SSH into your box, you are confronted with a login screen, and once logged in, you need to change the root password. Don’t forget it!

After that, register your appliance. If you are running an AIO/server/logger, I would suggest doing an online Web UI registration; obviously you will need internet connectivity. You can copy and paste your product license key once you access the Web UI, as there will be an option for it on the Free Trial screen. After that, you can set up the admin user and password. There is an offline technique as well, or if you are in the mood to type out the entire license, you can do so from the AlienVault setup menu itself.

After this is done, set up the hostname. You need to do this from the AlienVault setup menu: select System Preferences -> Configure Hostname.

Make sure you apply all changes. Once you apply all changes, go ahead and reboot the appliance from the menu itself.

Another important thing is to change the time zone. After reboot, head over to

System Preferences -> Change Location -> Date and Time -> Configure Time Zone. Select the place you are at and apply all changes.

Likewise, you might want to use an NTP (Network Time Protocol) server as well. In the same Date and Time menu, select Configure NTP Server. Enable it, then put in the NTP hostname (if you have DNS defined) or IP. Apply everything.

Now, this might be a good time to check on the Linux box whether your time is correct.

Jailbreak your system and type in ‘date’; you should see the change.

Likewise, go to the Web UI, log in and click on Settings at the top right. Make sure the time zone for that user is properly defined. Now check back on the SIEM (Analysis -> SIEM) in the Web UI; you should see the dates in whatever time zone you have defined for yourself.

Timestamping is obviously a big deal in any SIEM, and other than these areas to be wary of, we should also know that individual plugins have their own time zone options. This is helpful if a data source suddenly changes time zones and we have to accommodate it.
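While we are on the subject of time, here is a minimal standard-library sketch for checking how far a box’s clock has drifted from an NTP source. The pool.ntp.org server and the 60-second threshold are assumptions; point it at whatever NTP server you configured above.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # assumption: the box can reach a public NTP pool
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)


def ntp_time(server: str = NTP_SERVER, timeout: float = 5.0) -> float:
    """Query an NTP server and return its time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    # Transmit timestamp (seconds part) sits at bytes 40-43 of the reply.
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET


if __name__ == "__main__":
    drift = time.time() - ntp_time()
    print(f"Local clock drift vs NTP: {drift:+.2f} seconds")
    if abs(drift) > 60:
        print("WARNING: drift exceeds 60s - check the appliance NTP configuration.")
```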

It looks like the server is all set. If you have an AIO, you should also now see under

Configuration -> Deployment -> Sensors / Servers, your own IP address, because you are both a sensor and a server.

Next, we will look at setting up the sensor and logger.

 

Advisory on Badlock Vulnerability


This is a security advisory on the Badlock Bug.

What is Badlock?

Samba is an important component for seamlessly integrating Linux/Unix servers and desktops into Active Directory environments. It can function both as a domain controller and as a regular domain member. On April 12th, 2016, Badlock, a crucial security bug in Windows and Samba, was disclosed. The security vulnerabilities can mostly be categorized as man-in-the-middle or denial-of-service attacks.

Man-in-the-middle (MITM) attacks:

There are several MITM attacks that can be performed against a variety of protocols used by Samba. These would permit execution of arbitrary Samba network calls using the context of the intercepted user. Impact examples of intercepting administrator network traffic:

  • Samba AD server – view or modify secrets within an AD database, including user password hashes, or shutdown critical services.
  • Standard Samba server – modify user permissions on files or directories.

Denial-of-Service (DoS) attacks:

Samba services are vulnerable to denial of service from an attacker with remote network connectivity to the Samba service. On the Windows side, Microsoft has addressed the related SAM and LSAD downgrade vulnerability in MS16-047; it could allow an attacker to impersonate another user against applications that use the SAMR or LSAD protocols. All supported versions of Windows are affected.

Who is Vulnerable?

Samba Application running on Linux/Unix Systems

  • 3.6.x,
  • 4.0.x,
  • 4.1.x,
  • 4.2.0-4.2.9,
  • 4.3.0-4.3.6,
  • 4.4.0

Windows

All supported editions of Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8.1, Windows Server 2012, Windows Server 2012 R2, Windows RT 8.1, and Windows 10.

Associated CVEs

Badlock for Samba is referenced by CVE-2016-2118 (SAMR and LSA man in the middle attacks possible) and for Windows by CVE-2016-0128 / MS16-047 (Windows SAM and LSAD Downgrade Vulnerability).

There are additional CVEs related to Badlock. Those are:

  • CVE-2015-5370 (Multiple errors in DCE-RPC code)
  • CVE-2016-2110 (Man in the middle attacks possible with NTLMSSP)
  • CVE-2016-2111 (NETLOGON Spoofing Vulnerability)
  • CVE-2016-2112 (LDAP client and server don’t enforce integrity)
  • CVE-2016-2113 (Missing TLS certificate validation)
  • CVE-2016-2114 (“server signing = mandatory” not enforced)
  • CVE-2016-2115 (SMB IPC traffic is not integrity protected)

How to check if server is vulnerable?

A server is vulnerable to Badlock if:

  • It is running any of the above-mentioned versions of Samba (a quick version-check sketch follows after the link below)
  • For vulnerable Windows versions, refer to the following link:

https://technet.microsoft.com/library/security/MS16-047
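For the Samba side, here is a small sketch that checks a locally installed Samba version against the vulnerable ranges listed above. It assumes smbd is on the PATH, and note that distributions often backport fixes without bumping the version string, so treat the result as a first pass, not a verdict.

```python
import re
import subprocess

# Vulnerable ranges taken from the advisory above (inclusive).
VULNERABLE_RANGES = [
    ((3, 6, 0), (3, 6, 99)),
    ((4, 0, 0), (4, 1, 99)),   # covers 4.0.x and 4.1.x
    ((4, 2, 0), (4, 2, 9)),
    ((4, 3, 0), (4, 3, 6)),
    ((4, 4, 0), (4, 4, 0)),
]


def parse_version(text: str):
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    return tuple(int(part) for part in match.groups()) if match else None


def is_vulnerable(version) -> bool:
    return any(low <= version <= high for low, high in VULNERABLE_RANGES)


if __name__ == "__main__":
    try:
        # 'smbd --version' typically prints something like "Version 4.3.6-Ubuntu".
        output = subprocess.run(["smbd", "--version"],
                                capture_output=True, text=True).stdout
    except FileNotFoundError:
        output = ""
    version = parse_version(output)
    if version is None:
        print("Could not determine the Samba version - check manually.")
    elif is_vulnerable(version):
        print("Samba %s appears to be in a vulnerable range - patch it." % ".".join(map(str, version)))
    else:
        print("Samba %s is not in the advisory's vulnerable ranges." % ".".join(map(str, version)))
```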

How to fix

For the Samba service running on Linux/Unix systems, apply the patches provided by the Samba team, or by SerNet for EnterpriseSAMBA / SAMBA+, immediately.

Patched versions are (both the interim and final security release have the patches):

  • 4.2.10 / 4.2.11,
  • 4.3.7 / 4.3.8,
  • 4.4.1 / 4.4.2.

For Windows installations, refer to the following link for patch details:

https://technet.microsoft.com/library/security/MS16-047

 

References and Useful Links

http://badlock.org/

https://www.samba.org/samba/latest_news.html#4.4.2

https://www.samba.org/samba/security/CVE-2016-2118.html

https://technet.microsoft.com/library/security/MS16-047

For more information or a vulnerability scan, please contact us at avantedge@pkfmalaysia.com.
