
Deploy HIDS agent in a Checkpoint Environment


In our years of deploying and implementing SIEM, if we got 10 ringgit every time a customer said, “Nothing wrong on our side, your SIEM is the problem” and then, after hours of troubleshooting, responded, “Ooops sorry, seems like our problem, we blocked your traffic” – we would all be as rich as Jack Ma by now.

The issue here is not so much that SIEMs are notoriously difficult to deploy – they are not – but that other devices need to talk to them. Deploying a SIEM is easy; the fine-tuning is the problem. Products need to get logs to the SIEM – either through good old syslog via UDP/TCP 514, through agents installed on the systems, or through third-party applications like Snare and NXLog – the latter two for the naughty Windows machines where no syslog facility is found, for some strange, Bill Gatesy reason.
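For instance, a minimal rsyslog forward on a Linux data source looks something like this (the SIEM IP below is a placeholder; a single @ means UDP, a double @@ means TCP):

# /etc/rsyslog.conf – forward all facilities/priorities to the SIEM
# single @ = UDP 514, double @@ = TCP 514
*.* @10.10.10.10:514

Restart rsyslog after the change (systemctl restart rsyslog, or service rsyslog restart on older distros) and the SIEM should start seeing traffic.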

It’s also because most Administrators guard their systems like pitbulls, and any request for them to send logs over is generally greeted by a dumbfounded “Why are you so stupid” look, or an “Over my dead body” look, or you simply get ignored completely. Oh, and also the “I want to go home at 5 pm today, so don’t mess with my jam” look. Whatever. The war between Administrators and Implementers has been going on longer than the blood feud between Lycans and Vampires.

We have been through deployments where countless hours were spent just trying to convince our customer: “We are NOT getting anything on the interface. No packets. It’s being blocked.” And the customer: “No, there are no firewalls in between.” Five hours later, the customer: “Oh yeah, there is a small firewall in between and it’s blocking. Cheers.” On our SIEM, it’s easy. If our interface doesn’t see it, it ain’t there. That’s it. Do a tcpdump and grep the IP or port and boom, we know it’s not our problem.
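A minimal version of that check, assuming eth0 is the logging interface and using a placeholder data source IP:

# See whether anything from the data source hits our interface at all
tcpdump -ni eth0 host 10.10.10.20
# Or dump with payloads, line-buffered, and grep for an IP or a port
tcpdump -lXni eth0 | grep 514

If nothing shows up, the packets are dying somewhere upstream – not on the SIEM.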

The issue here is that “Not our problem” generally gets translated to “It’s now your problem” and we end up troubleshooting how to fix THEIR systems and devices for our clients. I’ve called my pals at Fortigate, Bluecoat, Cisco, Juniper etc. so many times, asking them about issues that my clients should be resolving themselves.

So anyway – we had this issue where we deployed a fair number of HIDS (Host IDS, aka OSSEC) agents in a fairly large environment. The agent traffic traverses a Checkpoint firewall, and that firewall dropped the UDP 1514 connections between the agents and our AlienVault server. Port 1514 is the port our HIDS uses for agent-to-server communication.

Firstly, establish whether that traffic is arriving on the interface or not. If it is not, then the problem could be on their end. When we did a tcpdump for UDP 1514 filtered on that specific host, we could observe traffic from the agent and responses from the server – yet even so, the connection could not be established. Basically, our agent reaches the server, the server tries to respond, and the response traffic disappears somewhere in between. The agent is therefore orphaned.
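Roughly the check we ran on the server (the interface name and agent IP are placeholders):

# Watch agent/server HIDS traffic in both directions
tcpdump -ni eth0 udp port 1514 and host 10.10.10.30

A healthy agent registration shows packets flowing both ways; in our case we could see the server’s replies leaving, but the agent never received them.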

In this case, after troubleshooting and checking, we found that the Checkpoint was dropping the UDP 1514 responses from the AlienVault server, so communication between the HIDS agent and the server was never established. The Checkpoint log shows the UDP traffic dropped with the following message:

Message_Info: Violated unidirectional connection

Great. UDP is a stateless protocol. According to Checkpoint, by default a reply to a UDP packet is not allowed: the Security Gateway can mark a connection in its Connections Table to allow traffic to pass in one direction only (hence the term ‘unidirectional’). If a UDP service then communicates bi-directionally, that creates a violation.

Therefore, the workaround is to tell Checkpoint to accept the replies for this bi-directional communication. It’s pretty straightforward on Checkpoint, actually.

You will need to add or edit a service object in Checkpoint. Again, we are Alienvault implementers but we end up mucking through Checkpoint firewalls just to get our job done!

So on a Checkpoint Edge box, it’s basically:

Main Menu->Network->Network Services tab

From there you can edit an existing service or add a new one.

Go ahead and add a new UDP service and fill in the required fields; you will then see an option for Advanced.

In the Advanced UDP service properties, ensure “Accept Replies” is ticked, then click OK.

Now go ahead and use this service in the new or existing policy rule that has been set up for the HIDS agent-to-server communication.

Ta-da, done!

If you have any questions on Alienvault USM deployment, drop us a note at alienvault@pkfmalaysia.com.

The SAQ Bs and how they apply to you


We always say SAQ As and Ds get all the glory and attention.

This is because a majority of our SAQ clients are e-commerce companies and therefore they apply SAQ A or A-EP depending on where their credit card information is collected.

However, in recent times we have been working with a well-known retailer who was told that SAQ D would be the one that applies to them. The SAQ was passed to them by the bank, and the bank insisted that they do SAQ D-Mer.

Now, this post is going to assume you have some working knowledge of what SAQs are in the PCI-DSS world. Self-Assessment Questionnaires are one of the most misunderstood concepts in PCI. They are like Donald Trump’s foreign policy and the plot of Interstellar all mashed into one misunderstood mess. Often, because acquirers find them so hard to understand, they just tell all their merchants to go for SAQ D.

Now, we have fought for our clients before – we overturned an acquirer’s insistence that one of our e-commerce clients do an SAQ D-Mer, and instead got them to agree that an SAQ A-EP was sufficient. SAQ A-EP = around 140 questions. SAQ D-Mer = around 320 questions. Big difference.

Why is this important? We firmly believe that overdoing PCI is not a good thing. Why? Because our clients have other things to do, and limited time and money to do these with. Ideally, sure, everyone should go for Level 1 certification. But the reason the PCI Council created a whole bunch of ‘levels’ and then types of SAQs is simply that different businesses face different risks. It doesn’t make sense for a neighbourhood grocery that accepts 10 cards a month to implement the same million-dollar controls as, say, Tesco or Exxon Mobil. So: don’t overdo things, but don’t under-do them either.

Back to SAQ Bs. With this client, after a few rounds of talking, we found out that:

a) Their credit card terminals are standalone – not integrated with their POS machines – and connected to the POS only via USB.

b) The POS machines all connect back to an ethernet switch (let’s call it the branch switch) and from there back to corporate HQ for reconciliation purposes.

c) However, the credit card terminals have their own connectivity to a separate ethernet switch (let’s call it the credit card switch) that connects to an ISDN router and from there directly to the bank.

This means there are two flows: once the credit card is used on the terminal, the card information is sent out directly via ISDN to the bank. Whatever comes back – approval, decline etc. – goes through the USB to update the POS.

The crux here is that NO CREDIT CARD information is ever sent into the client’s environment. Everything goes out through the bank’s environment, as the credit card switch belongs entirely to the bank. It is merely located on the customer premises, and the customer has no access to it – physical or logical.

So began our argument with the acquirer to overturn their SAQ D in favour of SAQ B or B-IP. Let’s look at the SAQ B criteria as per the PCI document:

a) Your company uses only an imprint machine and/or uses only standalone, dial-out terminals (connected via a phone line to your processor) to take your customers’ payment card information;

Seems like it. Technically, the question here is whether a credit card terminal connected to a POS machine via USB is considered ‘standalone’. Our argument is yes, as long as no credit card info flows through that USB connection – only approval/decline responses, transaction dollar amounts and so on. Remember, the USB connection links the terminal to the POS machine (a Windows box); credit card info flows out the other way, directly to the bank via a circuit-switched technology like ISDN (i.e. dial-out). For the millennials: ISDN was the granddaddy of broadband. If you ever lived through the era of normal 14.4 kbps dial-up, ISDN was what God would send us out of mercy and grace.

b) The standalone, dial-out terminals are not connected to any other systems within your environment;

Again, the argument here is over ‘connected’. What does this mean? Only IP connectivity, or does even an RS232 link count as connected? Our reasoning is that this is a USB connection through which no card data flows, and we will use this reasoning once we get to the table with the acquirer.

c) The standalone, dial-out terminals are not connected to the Internet;

No they aren’t. They are on ISDN direct to the acquirer.

d) Your company does not transmit cardholder data over a network (either an internal network or the Internet);

No, they don’t. In fact, no credit card info is stored, processed or transmitted anywhere in the customer environment; the only responsibility left is physical protection of the bank’s equipment residing on the customer premises.

e) Any cardholder data your company retains is on paper (for example, printed reports or receipts), and these documents are not received electronically; and

They do have some credit card info on paper which they need to protect, but these are manual forms they fill out for the refund process – and that process is dictated by the bank.

f) Your company does not store cardholder data in electronic format.

No, of course not.

So you see, except for the tiny word ‘connected’ in question b), our client does meet all the SAQ B criteria. It’s really ridiculous to have someone go through the entire SAQ D when they have no cardholder data in the environment they control. And what if they have 80 branches, each with 10 POS terminals and servers? That would mean 800 systems across all branches coming into scope for pentests, internal scans etc. No wonder I hear some retailers using PCI as a cuss word these days.

We don’t know how this will be sorted out yet, but we will soon, and perhaps that will constitute another post. For now, if you need any help with your PCI-DSS – SAQ or Level 1 certification – drop us an email at pcidss@pkfmalaysia.com.

Cheers for now!

PCI-DSS Logging in MySQL Community Version with MariaDB Plugin


PCI-DSS is a standard that brings to mind the famous saying of Jimmy Dugan, the coach of an all-girls baseball team (played by Tom Hanks) in the movie A League of Their Own:

“It’s supposed to be hard. If it wasn’t hard, everyone would do it. The hard… is what makes it great.”

Well, at least the first part. Whether the hard part makes it ‘great’ is a different story – most PCI-DSS sufferers would add the word ‘pain’ after the word ‘great’. And one of the main pains of PCI-DSS is logging and monitoring – that’s Requirement 10 for you. So much so that the PCI Council recently released a document specifically addressing this issue. So you will be faced with a myriad of issues, from the simple to the hard: no, we cannot centralise logs; our application has no logging function; we do not know how to do daily monitoring of our logs; we do not know what to log or how to log it; we are all running on a DEC VAX from 1974. So many reasons.

One of the challenges we recently faced with a client was that they were using the MySQL Community version. How could they log administrator actions and security-relevant INSERTs, UPDATEs etc. in the Community version? Audit logging is available in Enterprise, but not in the free edition – or at least only in a very limited form.

Enter the MariaDB Audit Plugin. Before we get into semantics: MariaDB is an open-source database created by the people who created MySQL. It’s a fork, born when MySQL was acquired by Oracle some time back and everyone was afraid Larry Ellison might gobble MySQL up the way Galactus ate planets. The cutest story here is that MySQL was named after the founder’s daughter, My. And yes, MariaDB is named after his other daughter! But the first daughter’s name is “My”… so it’s like, “Yeah, this is My, My daughter.”

Anyway. What we are talking about here is not installing MariaDB, but using its audit plugin on MySQL. Make sure the QSA doesn’t get confused on this, because ours did, and we entered the twilight zone of communications for a while, where nothing made sense.

The advantage of the MariaDB Audit Plugin is that it gives MySQL Community the audit capability it otherwise lacks. So in this article, we are going to explain how we installed the plugin on MySQL version 5.6.35 running on CentOS 7.

  1. Download the latest plugin from the MariaDB site and choose the latest version – we used server_audit-1.4.0.tar.gz. On CentOS, we can fetch it with wget:
    wget https://downloads.mariadb.com/Audit-Plugin/MariaDB-Audit-Plugin/server_audit-1.4.0.tar.gz
  2. Extract the tar file:
    tar -xvzf server_audit-1.4.0.tar.gz
  3. Log in to MySQL and locate the MySQL plugin directory using the command below:
    SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
  4. Copy the plugin into the MySQL plugin directory, choosing the build that matches your Linux server (64-bit/32-bit), and set its ownership:
    • cp server_audit-1.4.0/linux-x86-64/server_audit.so /usr/lib64/mysql/plugin/
    • chown -R mysql.mysql /usr/lib64/mysql/plugin/server_audit.so

     

  5. Install the MariaDB Audit Plugin into the MySQL server with this command inside MySQL (the plugin name is server_audit; the .so file is the one we just copied):
    • INSTALL PLUGIN server_audit SONAME 'server_audit.so';
  6. Once installation is complete, restart the database service from the command line. Even though this is the MariaDB plugin, the server is still MySQL – on CentOS 7 with the MySQL Community packages, the service is typically mysqld, not mariadb:
    sudo systemctl restart mysqld
  7. The command systemctl doesn’t display the outcome of all service management commands, so to be sure we succeeded, we’ll use the following command:
    sudo systemctl status mysqld

    If MySQL has restarted successfully, the output should contain “Active: active (running)”

  8. Next, let's take a moment to ensure that MySQL starts at boot, using the systemctl enable command, which will create the necessary symlinks: sudo systemctl enable mysqld
  9. Next, we’ll turn our attention to configuring which types of actions will be logged (within MySQL):
  • Connect: connecting and disconnecting to/from the server will be added to the log. An unsuccessful connect will be logged as a failed connect, including the error code.
  • Query: the full statement, including its values, will be logged.
  • Table: any operation on a table triggered by a query will raise an event that the MariaDB Audit Plugin can catch and log directly.
SET GLOBAL server_audit_events='CONNECT,QUERY,TABLE';

You need root privileges to change the Audit Plugin variables. With that set, we are ready to enable auditing, which we will now do using the following command within MySQL:

SET GLOBAL server_audit_logging=ON;
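To sanity-check from the shell that the plugin is loaded and logging is switched on (a quick sketch – substitute your own credentials):

# Confirm the plugin shows up as ACTIVE
mysql -u root -p -e "SHOW PLUGINS;" | grep -i server_audit
# Review all of the plugin's current settings, including server_audit_logging
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'server_audit%';"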

The full set of variables is found on this page: https://mariadb.com/kb/en/mariadb/server_audit-system-variables/

To make the MariaDB Audit Plugin configuration permanent, we now need to add these settings to my.cnf. This ensures the same configuration is used after a server restart.

Under [mysqld] in my.cnf, add in

server_audit_events=CONNECT,QUERY,TABLE
server_audit_logging=ON
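By default, the plugin writes to a file named server_audit.log in the MySQL data directory (that is the plugin’s documented default for server_audit_output_type=file; the path below assumes a stock CentOS datadir of /var/lib/mysql):

# Watch audit events arrive in real time
sudo tail -f /var/lib/mysql/server_audit.log

Each line records, roughly: timestamp, server host, user, client host, connection and query IDs, the operation (CONNECT/QUERY/TABLE), the database, and the statement or object – exactly the kind of trail Requirement 10 asks for.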

There you go, now your MySQL is ready to face the scrutiny of the QSAs in your PCI-DSS compliance program!

Email us at avantedge@pkfmalaysia.com for any enquiries regarding this plugin or PCI-DSS in general and we will get back to you as soon as we can.

Alienvault Deployment Checklist


A while ago we were asked to share an Alienvault deployment checklist. So here it is – by no means comprehensive, but enough to give you an idea of what you need to have.

a) All data sources listed and PIC (person in charge) ready

You have no idea how much time is actually wasted getting logs into Alienvault. As I said, as AV implementers we have no obligation to sort out why or how data sources should be configured to send logs over to our system.

From the Alienvault side, the easiest way to determine whether sources are actually sending us stuff:

#tcpdump -Xni eth0 port 514

This assumes eth0 is your logging interface and the source is sending via rsyslog; for OSSEC it would be UDP 1514, as below. If you don’t get anything, then stuff is not being sent to you.
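The OSSEC variant of the same check (same assumption that eth0 is the logging interface):

#tcpdump -Xni eth0 udp port 1514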

Invariably we end up troubleshooting for our clients, and it’s always one of these (a few quick first-pass checks follow the list):

– routing issue

– network firewall issue

– host IDS/host firewall problem
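Some quick first-pass checks from the data source side, using standard Linux tools (the SIEM IP here is a placeholder):

# Which route and egress interface will logs to the SIEM take?
ip route get 10.10.10.10
# Any local firewall rule dropping syslog on the way out?
iptables -L OUTPUT -nv | grep 514
# Fire a test syslog message at the SIEM over UDP (util-linux logger),
# then watch for it with the tcpdump above
logger -n 10.10.10.10 -P 514 -d "SIEM connectivity test"

If those come back clean and the SIEM interface still sees nothing, the drop is almost certainly on a network firewall in between.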

b) VM should be set up properly

Sometimes our clients expect us to troubleshoot their VMware setup and install ESXi for them as well. This is not part of our scope of works, but again, when we go onsite we find their VM environment is nowhere near ready and we have to prep it for them. Usually it’s either under-resourced or not even created. Sometimes we find they are running on a host machine that belongs in a museum rather than a DC, and they insist it’s good enough. Umm. 500 GB hard disk and 4 GB RAM? Come on.

c) IP Addresses allocation

Remember, you need to allocate the appropriate IP addresses:

– Host IP address (for the VM host, if you are starting from scratch)

– Logging interface IP – this is where data sources send logs to

– Management interface IP – lots of people get confused and think a separate one is required. You can use the logging interface as the management interface. Unless you have another interface sitting in a separate management network, I would suggest using the same interface for management. The problem with setting up two interfaces on the same network is that routing can get screwed up at times.

– Network monitoring interface – this one doesn’t need an IP. However, you do need to assign an interface to monitoring if you want to use the IDS; by default, the management interface is assigned.

d) Installation Key

If you are ready to install, make sure you have the installation key. For a CLI setup you won’t be able to copy and paste, so you need to type the whole thing in. True story: we have gone onsite before, set everything up, and when it was time to enter the license key and we looked to the client, they went, “What license key?”

e) All equipment accessible

You would think this is a no-brainer, but you have no idea how many times we’ve tried to install only for the client to tell us, “Oh, we don’t have any internet access on this box yet.” Doh!

f) Technical Checklist

– Before deployment, we should have the inventory list or BOM – how many servers, loggers, sensors, and their locations. If it’s a small setup, then an AIO (All-In-One) will do.

– List of network segments to be monitored – ensure SPAN ports are set up, and confirm how they connect to the sensors

– Full list of assets to be monitored by Alienvault. We need this, and we need to be careful with it: if there is a whole bunch of gateway proxies in that list and you have an AIO running, keep in mind that an AIO processes up to around 1,000 EPS, and I have seen bunches of proxies together generate a lot more than 1,000 EPS (see the quick sampling trick after this list).

– Diagram of how AV is set up and distributed across networks. If the components are all in the same network, well and good. If they traverse the internet to communicate, is a VPN required between components?

– HIDS – how many, and how many on Linux and Windows?

– Vulnerability scans – are you running authenticated scans, and if so, do you have the correct credentials?

– Netflow – if you are using netflow, how many sources will be integrated?

– File Integrity Monitoring – do you have a list of the servers and the directories/files to monitor?

– Baseline – do you have an idea of what your security baselines are? For instance, your baseline for normal traffic could be “Access to critical server A happens from 7 am to 7 pm, Monday to Friday”; any access outside those hours is abnormal and treated as an incident.
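And the quick sampling trick promised above – a rough way to estimate a source’s EPS before you point it at an AIO (interface, port, and host are the usual placeholder assumptions):

# Count packets hitting the syslog port over 60 seconds, then divide by 60
timeout 60 tcpdump -ni eth0 port 514 and host 10.10.10.40 | wc -l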

In summary, setting up a SIEM (at least for now) isn’t plug-and-play, where it simply does all the machine learning it needs on its own. Alienvault has threat intelligence to assist in identifying attacks, but for fine-tuning and filtering of logs you will still need to put in the work. With a checklist in mind, though, hopefully you start the deployment on the right note.

 

The Long Road of PCI Recertification


We have been in PCI-DSS for six years.

When we began back in 2010, we were tasked by one of our offshore customers in Brunei to get them “PCI” certified. Honestly, back in early 2010 we were mainly doing IT audits under COBIT, a lot of penetration testing, some IT forensics, and we were bogged down with piles of ISO27001 ISMS opportunities. Back then, PCI was better known as Peripheral Component Interconnect – those add-on cards you slotted into your motherboard in the days when you wanted to extend your network interfaces, graphics accelerators and so on. I used to build computers in those dodgy computer shops back in the day, so I kinda know that very well.

Fast forward six years, and now we are getting more and more queries for PCI-DSS. So much so that we have dedicated an entire team from our company to work only on PCI-DSS projects.

In earlier years we brought our PCI clients through their first-year certification, and many of them are now going through their 2nd, 3rd year recertifications. You would think most companies would find recertification easy compared to the first certification.

Don’t be fooled.

The thing about PCI is that during recertification there are a lot more expectations on your organisation for compliance. An example: PCI requires logs to be retained 3 months online and 12 months offline. It also requires daily log reviews, as well as quarterly internal and external vulnerability scans.

Now, for the first-time certification, some of these requirements get a free pass: if our client has just installed a SIEM and only has 2–3 months of logging set up, we verify those controls and, based on them, can pass their PCI. We don’t need to wait 12 months for the offline retention requirement to be met. Likewise, if our client provides us with one internal and one external scan, we can pass them the first time; we do not need four quarterly scans before we sign off on the initial AoC.

However, once recertification arrives, these become MUSTs. Some of our clients undertake internal scans themselves, miss a quarter, and expect us to still pass them. Or they have a SIEM but do nothing about daily reviews, or their SIEM was not set up properly and no logs were ever sent to it. They get upset when we say we can’t pass them on that basis, because “We did this last year!”

Also: evidence.

Whenever we conduct our audits, we conduct them onsite. Onsite, the QSA verifies whether these controls are in place or not. On top of that, we require audit evidence. This is normal even outside PCI – in our governance or ITGC audits we often rely on what we call audit artefacts to supplement our opinion on whether certain controls are in place. In PCI, this evidence might come in the form of documents, policies, screenshots, configs etc. – anything that can prove controls are in place, effective, and used as PCI requires.

The onsite audit confirms these controls. The evidence supplements the QA process: each QSA goes through a stringent internal QA (quality assurance) process, whereby the QA team requires supporting evidence for why the QSA arrived at such and such an opinion. Therefore, there is always post-audit work of compiling audit evidence.

Some clients are of the opinion that the onsite audit should end the process, with the auditor passing their PCI then and there. Unfortunately it’s not so simple, as there is a check and balance involved. An example: one of our clients recently added a few out-of-scope devices into the CDE. During the onsite audit we flagged these and requested the systems be removed or resettled in another segment. They said, “OK, we will put them in another VLAN. If we do that, is that OK?” We said OK.

Fast forward to the post-audit work: we asked, “Hey, have you done your VLAN yet?”

“Yes, we have. Can you pass us?”

“OK, can you give a screenshot of the new configuration in your firewall or switch to prove that you have done this action?”

“WHAT??! Why??!”

You see, as auditors we simply cannot take you at your word. It’s not personal. It’s not that we think you are a shifty trader looking to spin some yarn and fleece us of our money. It’s simply part of our job. Evidence provides us with some measure of assurance that these controls are in place and done correctly. It’s not that we question your integrity. It’s strange that even at this stage many people find this difficult to accept, and we have gone through many, many odd situations where I have faced a red-faced, yelling executive who thinks I am personally insulting him and his family name by not trusting what he says.

Audit evidence. It’s part of PCI.

There are of course some exceptions, such as certain private and confidential documents or configs that cannot be shared. Even then, we generally ask for the information to be anonymised but the evidence to be submitted all the same: for instance, for evidence of a VLAN config, you can screenshot the config and remove the elements deemed sensitive (IP addresses, versions, other information etc.).

In summary: the second year onwards is where the real PCI battles begin. Your recertification effort will be a whole lot more than the first time’s, so get started early. We will be posting more articles with tips and actions to make your PCI certification successful.

In the meantime, drop us a note at pcidss@pkfmalaysia.com and we will attend to any queries you have.
